Discovery could lead to treatment for disease that keeps kids from eating

Researchers have identified a potential new treatment for EoE, a chronic immune system disease that can prevent children from eating.

Eosinophilic esophagitis (EoE) is triggered by food allergies or airborne allergens that cause a type of white blood cell, the eosinophil, to build up in the lining of the esophagus. This buildup causes the esophagus to shorten and the esophageal wall to thicken, making swallowing difficult and causing food to get stuck in the throat.

The disease occurs in an estimated 1 in 2,000 adults but more frequently affects children (1 in 1,500), in whom symptoms can be harder to diagnose and pose greater risks, as difficulty eating can lead to malnutrition, weight loss, and poor growth.

The findings, published in the journal Communications Biology, show the disease is driven by interleukin-18 (IL-18), a protein involved in the innate immune response that can cause inflammation if produced in excess.

When a food allergen enters the body, it activates a pathway responsible for regulating the innate immune system, resulting in the release of proinflammatory proteins such as IL-18. Excess IL-18, in turn, drives the production of the eosinophils that damage the esophagus.

The researchers discovered that successfully inhibiting this pathway, called the NLRP3 pathway, and the release of IL-18 prevented the development of EoE from both food and airborne allergens.

“Parents and doctors may not be aware of this, but this is a very prominent and serious disease in the pediatric population, and it is increasing in number because it is directly related to food allergens, which are also on the rise,” says lead author Anil Mishra, director of the Eosinophilic Disorder Center at the Tulane University School of Medicine. “In this study, we show that after treating the disease in animals, the disease is gone and completely in remission.”

The findings are crucial for a disease that was not identified until the 1990s. For many years, EoE was misdiagnosed as gastroesophageal reflux disease (GERD), despite GERD medication being ineffective for treating EoE. Additionally, the study’s findings overturn decades of thinking that Th2 cells play a major role in triggering EoE.

“Given the paucity of mechanistic information and treatment strategies for EoE, we feel the proposed studies are highly relevant and are poised to have a major impact on establishing the significance of NLRP3-IL-18 pathway in the initiation of EoE pathogenesis,” Mishra says.

The study identifies one existing drug, VX-765, as an inhibitor that may work as a treatment for humans. Importantly, this inhibitor would only deplete pathogenic eosinophils generated and transformed by IL-18 and not affect white blood cells created by IL-5, a protein important for maintaining innate immunity.

Mishra says a clinical trial would be the next step in determining the treatment’s effectiveness.

Source: Tulane University


AI ethics teams lack ‘support, resources, and authority’

Because tech industry ethics teams lack resources and authority, their effectiveness is spotty at best, according to a new study.

In recent years, AI companies have been publicly chided for generating machine learning algorithms that discriminate against historically marginalized groups. To quell that criticism, many companies pledged to ensure their products are fair, transparent, and accountable, but these promises are frequently criticized as being mere “ethics washing,” says Sanna Ali, who recently received her PhD from the Stanford University communication department. “There’s a concern that these companies talk the talk but don’t walk the walk.”

To explore whether that’s the case, Ali interviewed AI ethics workers from some of the largest companies in the field. The research project, coauthored with assistant professor of communication Angèle Christin, Google researcher Andrew Smart, and professor of management science and engineering Riitta Katila, was partially funded by a seed grant from Stanford HAI and published in the Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23).

The study found that ethics initiatives and interventions were difficult to implement in the tech industry’s institutional environment. Specifically, Ali found, teams were largely under-resourced and under-supported by leadership, and they lacked authority to act on problems they identified.

“Without leadership buy-in, individual workers had to employ persuasive skills and interpersonal strategies in order to make any headway,” Ali says. The result: they succeeded in working with some teams and not with others, and were often called in for a consultation too close to product launch dates, without the authority to require important ethics fixes.

Ethics incentives?

Ali’s interviews suggest some solutions: Leadership should incentivize product teams to incorporate ethics considerations into product development processes, she says, and there should be bureaucratic support to empower ethics teams in their work and give them the authority to implement necessary ethics fixes before products are released.

“It’s unlikely that these companies are going to change their priority of frequently releasing new products,” Ali says. “But at least they could provide incentives so that ethics can be part of that conversation early on.”

Many tech companies have released statements of principles around accountability, transparency, and fairness, Ali says. They have also developed toolkits for evaluating algorithmic fairness; held seminars about how to implement responsible AI; and hired ethics teams, which go by various names such as “Trust and Safety” or “Responsible AI.”

In theory, these ethics teams provide expert support for such things as addressing problems with training data; identifying and implementing various fairness fixes to machine learning models after they’ve been trained; or evaluating whether a model is sufficiently explainable for its intended use.

That’s all well and good, Ali says, “but we wanted to look at the challenges of implementing those initiatives and interventions on the ground.”

Who has the authority?

To do that, she interviewed 25 tech industry employees, including 21 who either currently work or have worked as part of responsible AI initiatives—many of them at more than one company—at businesses with 6,000 to hundreds of thousands of employees.

Prior research has identified some important characteristics of the tech industry, including businesses’ tendency to be informal and nonhierarchical; to value rapid product innovation over all other concerns; and to think that tech can fix tech.

Based on this background, Ali hypothesized that upper-level leadership’s responsible AI principles were likely to be “decoupled” from the horizontally distributed product teams where they would be implemented. She compares it to a school principal who states an educational policy but often has little control over what happens in individual teachers’ classrooms.

This would, she further hypothesized, leave ethics teams in the position of acting as “ethics entrepreneurs” who would have to “sell” their services to individual product teams. In essence, they would be left to their own resources to build relationships with product managers in hopes of gaining their cooperation in an ethics review of their products.

Based on her interviews, Ali says, many of her predictions about the institutional environment and its expected impact on ethics teams panned out. Company policies were indeed decoupled from implementation by distributed product development teams, and ethics teams had to carefully cultivate relationships with product teams to get anything done.

Ali found that the implementation of responsible AI policies in the tech industry is inconsistent at best. Indeed, products might be released without ethics team input for multiple reasons. Sometimes a product team wouldn’t want to work with the ethics team at all, and ethics personnel lacked the authority to mandate an ethics review. Sometimes the ethics team would be invited to provide input too close to a product launch date, when there was only enough support or authority to implement a few of the necessary fixes before launch. Sometimes product teams believed that ethics workers’ fairness goals would conflict with other important goals such as user engagement. And sometimes the ethics team would ask management to delay a product launch so they could implement an ethics fix, only to have the manager in charge decline the request.

“It just takes a person with more authority than the ethics worker to speak up,” Ali says. “But that’s not happening because all of the incentives are around launching the product immediately.”

Lack of ‘support, resources, and authority’

In some cases, companies have implemented a more formal ethics review process where, early in the development of a new product idea, the product team completes an impact assessment that the ethics team reviews. If the product relates to a sensitive use—bail or sentencing, for example—the ethics and product teams then work together to determine the product’s potential to treat certain demographic groups unfairly. If indeed there exists such a potential, then the ethics team will be included in the development of the product from start to finish. “In that setting, the team might have more resources and authority to do something to make sure the AI is deployed responsibly,” Ali says.

While Ali favors this more bureaucratic approach, there’s a risk of it becoming a box-checking exercise, she says. “The product team might, for example, check the boxes identifying steps they are willing to take while ignoring those that would require deeply thinking about the real ethical issues.” And, because responsible AI is a relatively new field, there is a need for such deep thinking, she says. For example, there are still debates about what fairness means, how to measure it, and how fair is fair enough. Yet ethics workers are tasked with navigating that uncertainty without support, resources, and authority to act, all while functioning inside a business context where fast innovation is prioritized. It’s a daunting task, Ali says.

Diplomatically approaching one product team after another in hopes of collaborating only gets ethics workers so far. They need some formal authority to require that problems be addressed, Ali says. “An ethics worker who approaches product teams on an equal footing can simply be ignored,” she says.

And if ethics teams are going to exercise that authority in the horizontal, nonhierarchical world of the tech industry, there need to be formal bureaucratic structures requiring ethics reviews at the very beginning of the product development process, Ali says. “Bureaucracy can set rules and requirements so that ethics workers don’t have to convince people of the value of their work.”

Product teams also need to be incentivized to work with ethics teams, Ali says. “Right now, they are very much incentivized by moving fast, which can be directly counter to slowly, carefully, and responsibly examining the effects of your technology,” Ali says. Some interviewees suggested rewarding teams by giving them “ethics champion” bonuses when a product is made less biased or when the plug is pulled on a product that has a serious problem. “It would be good to acknowledge the ethical stance that people are taking within the company by rewarding it in some way,” Ali says.

By creating some bureaucracy, empowering ethics teams, and incentivizing other employees to work with them, tech companies can ensure that their promises of fairness are no longer decoupled from work on the ground. Then, Ali says, “real institutional change may be possible.”

Source: Katharine Miller for Stanford University


Breath test flags COVID-19 in less than a minute

A new breath test quickly identifies people infected with the virus that causes COVID-19.

The device requires only one or two breaths and provides results in less than a minute.

The same research group recently published a separate study in Nature Communications describing an air monitor that can detect airborne SARS-CoV-2—the virus that causes COVID-19—within about five minutes in hospitals, schools, and other public places.

The new breath test could become a tool for use in doctors’ offices to quickly diagnose people infected with the virus. If and when new variants of the virus or other airborne pathogens emerge, such devices also could be used to screen people at public events.

The breath test also has potential to help prevent outbreaks in situations where many people live or interact in close quarters—for example aboard ships, in nursing homes, in residence halls at colleges and universities, or on military bases, the researchers say.

“With this test, there are no nasal swabs and no waiting 15 minutes for results, as with home tests,” says co-corresponding author Rajan K. Chakrabarty, associate professor of energy, environment, and chemical engineering at the McKelvey School of Engineering at Washington University in St. Louis. “A person simply blows into a tube in the device, and an electrochemical biosensor detects whether the virus is there. Results are available in about a minute.”

The researchers adapted the biosensor used in the device from an Alzheimer’s disease-related technology developed by scientists at Washington University School of Medicine in St. Louis to detect amyloid beta and other Alzheimer’s disease-related proteins in the brains of mice.

The School of Medicine’s John R. Cirrito, a professor of neurology, and Carla M. Yuede, an associate professor of psychiatry—both also co-corresponding authors of the study—used a nanobody, a small antibody fragment derived from llamas, to detect the virus that causes COVID-19.

The breath test could be modified to simultaneously detect other viruses, including influenza and respiratory syncytial virus (RSV), Chakrabarty and Cirrito say. They also believe they can develop a biodetector for any newly emerging pathogen within two weeks of receiving samples of it.

“It’s a bit like a breathalyzer test that an impaired driver might be given,” Cirrito says. “And, for example, if people are in line to enter a hospital, a sports arena, or the White House Situation Room, 15-minute nasal swab tests aren’t practical, and PCR tests take even longer. Plus, home tests are about 60% to 70% accurate, and they produce a lot of false negatives. This device will have diagnostic accuracy.”

The researchers began working on the breath test device—made with 3D printers—after receiving a grant from the National Institutes of Health (NIH) in August 2020, during the first year of the pandemic. Since receiving the grant, they’ve tested prototypes in the laboratory and in the Washington University Infectious Diseases Clinical Research Unit. The team continues to test the device to further improve its efficacy at detecting the virus in people.

For the study, the research team tested COVID-positive individuals, each of whom exhaled into the device two, four, or eight times. The breath test produced no false negatives and gave accurate reads after two breaths from each person tested. The clinical study is ongoing to test COVID-positive and -negative individuals to further test and optimize the device.

The researchers also found that the breath test successfully detected several different strains of SARS-CoV-2, including the original strain and the Omicron variant. Their clinical studies are measuring active strains in the St. Louis area.

To conduct the breath test, the researchers insert a straw into the device. A patient blows into the straw, and then aerosols from the person’s breath collect on a biosensor inside the device. The device then is plugged into a small machine that reads signals from the biosensor, and in less than a minute, the machine reveals a positive or negative finding of COVID-19.

Clinical studies are continuing, and the researchers soon plan to use the device in clinics beyond Washington University’s Infectious Diseases Clinical Research Unit. In addition, Y2X Life Sciences, a New York-based company, has an exclusive option to license the technology. That company has consulted with the research team from the beginning of the project and during the device’s design stages to facilitate possible commercialization of the test in the future.

The study appears in the journal ACS Sensors.

The National Institutes of Health, the National Institute of Neurological Disorders and Stroke Intramural Research Program, the Uniformed Services University of the Health Sciences, and the NIH SARS-CoV-2 Assessment of Viral Evolution (SAVE) Program funded the work.

Source: Washington University in St. Louis


What’s at stake in the actors and writers strikes?

Newman: One of the big problems in negotiating these deals is people don’t know what is going to exist in the future, so they don’t know what to negotiate. The standard movie contracts were always, you know, “theater, television, and any device not yet invented by mankind.”

Burgess: For the 2007 writers’ strike, one of the biggest sticking points was DVD residuals. That was one of the hardest issues for them to come together on, and it ended up being the least big deal. So that just goes to show that nobody has a crystal ball. It’s very hard to write a contract that will protect people from unknown innovations, and the exploitation that might come with those innovations.

Newman: The biggest problem here, I believe, is the streaming wars and the huge losses that the legacy studios have absorbed. Netflix is not an entertainment company. It’s a tech company. If this were strictly a negotiation between the studios and the networks, the deal would be solved. But Netflix and, to a certain extent, Hulu and whatever remains of the studio streaming services have no desire to settle. They spent too much money. They wanted quantity over quality. They’re actually comfortable sitting it out right now and not spending money. When they come back, unfortunately, I think one of the effects is going to be far, far fewer shows, and higher prices for film tickets and streaming subscriptions for consumers.

Brandon J. Dirden: I was in one of the first original Netflix series, The Get Down. In those days we didn’t know if streaming was a viable concept. So we actors were taking a chance along with Netflix, and we worked under a reduced residual structure compared to what we were previously accustomed to. But that was 10 years ago. Now there’s so much secrecy involved that we don’t know what a fair deal is. We need a concrete plan for how this is going to be sustainable. What is the structure going to look like where we can actually support ourselves?

Joseph Vinciguerra: This is part of why we’re not necessarily going to come to an agreement anytime soon. Artists are now dealing with tech companies that see the world through algorithms of efficiency, and those companies have run into the world of creativity, so we’re having a kind of philosophical difference over what is commerce and what is art. Is the world of entertainment commerce, or is it art?

Burgess: What the writers are asking for is the number of weeks and the number of collaborators they need to make movies and television that people will love. Bean counters on Wall Street are saying, “I’ll give you the bare minimum number of people and hours to make a thing that might turn a profit,” even if it’s not as good as what it could be. Studio executives are worried about their financial quarter, but the writers are worried about and trying to take care of the art form.


School meals are healthier but still not healthy enough

Fully aligning school meals with the 2020-2025 Dietary Guidelines for Americans could positively affect hundreds of thousands of children into adulthood, a new study shows.

The research shows that doing so would have the added benefit of saving billions in lifetime medical costs.

Today’s school meals are much healthier than those served to the parents of today’s American kids, but 1 in 4 school meals is still of poor nutritional quality. The 2020-2025 dietary guidelines call for meals with less sugar and salt and with more whole grains.

By modeling the national implementation of updated school lunch guidelines, the researchers found even incomplete compliance by schools would lead to overall reductions in short- and long-term health issues for participating K-12 students.

“On average, school meals are healthier than the food American children consume from any other source including at home, but we’re at a critical time to further strengthen their nutrition,” says senior author Dariush Mozaffarian, a cardiologist and professor of nutrition at the Friedman School at Tufts University.

“Our findings suggest a real positive impact on long-term health and health care costs with even modest updates to the current school meal nutrition standards.”

The researchers used a simulation model to produce data-driven estimates of three changes to the school meal program: limiting added sugar to less than 10% of total energy per meal, requiring all grain foods to be whole grain, and lowering sodium content to the Chronic Disease Risk Reduction intake level for sodium in the 2020-2025 DGA.

The researchers estimated that a portion (35%) of these dietary changes would carry over into adulthood. If all schools fully complied with the new standards, the changes were estimated to prevent more than 10,600 deaths per year from diet-related diseases, saving over $19 billion annually in health care costs during later adulthood. The worst-case estimate, in which schools stuck with their current food offerings, saved a little over half as many lives and health care dollars.

School meals aligned to new dietary guidelines for added sugars, sodium, and whole grains would have modest, but important, short-term health benefits for children. For example, these changes were estimated to reduce elementary and middle school students’ body mass index (BMI) by 0.14 and systolic blood pressure by 0.13 mm Hg.

Benefits were about half as large for high school students because fewer older students eat school-provided meals.

“Using a comparative risk assessment model, our estimations are based on the best available, nationally representative data on children and adults and the best available evidence on how dietary changes in childhood relate to BMI and blood pressure, how dietary changes persist into adulthood, and how diet influences disease in adulthood,” says first author Lu Wang, a postdoctoral fellow at the Friedman School. “Our new results indicate that even small changes to strengthen school nutrition policies can help students live longer, healthier lives.”
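To make the comparative risk assessment logic concrete, here is a minimal, hypothetical Python sketch of the core step such models use: a dietary shift is converted into a potential impact fraction via an assumed dose-response relationship, and that fraction is applied to baseline deaths. The relative risk, intake levels, and death count below are placeholder numbers for illustration only, not values from the study.

```python
import math

# Hedged, illustrative sketch of a comparative risk assessment step.
# Assumes a log-linear dose-response: relative risk scales multiplicatively
# per unit of daily intake. All numbers are hypothetical placeholders.

def potential_impact_fraction(rr_per_unit: float,
                              baseline_intake: float,
                              new_intake: float) -> float:
    """Fraction of baseline deaths avoided when intake shifts to a new level."""
    rr_baseline = math.exp(math.log(rr_per_unit) * baseline_intake)
    rr_new = math.exp(math.log(rr_per_unit) * new_intake)
    return (rr_baseline - rr_new) / rr_baseline

# Hypothetical example: sodium intake falls from 3.4 to 2.3 g/day, with an
# assumed relative risk of 1.06 per g/day for a diet-related cause of death.
pif = potential_impact_fraction(rr_per_unit=1.06, baseline_intake=3.4, new_intake=2.3)
baseline_deaths = 100_000  # hypothetical annual deaths from that cause
print(f"Avoided deaths: {pif * baseline_deaths:,.0f} ({pif:.1%} of baseline)")
```

A full model like the one described here would repeat this kind of calculation across age groups, dietary factors, and diseases, and propagate uncertainty around each input.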

The study’s findings, which cannot prove the outcomes they describe but are derived from a mathematical model based on the best available demographic and health data, are timely given the United States Department of Agriculture’s recent commitment to updating the school meal nutrition standards to align with the 2020-2025 dietary guidelines.

The price to fully implement new school meal standards is yet to be determined, but previous alignments suggest it would add at least another $1 billion nationally to the cost of these programs, or only about 5% of the total predicted annual long-term health care savings this change would yield.

The study appears in the American Journal of Clinical Nutrition.

Source: Joseph Caputo for Tufts University


Why declawing is really bad for tigers

Declawing larger cat species like lions and tigers negatively affects their muscular capabilities, a new study shows.

Declawing house cats to keep them from scratching people and furniture is controversial—and even banned in some countries and areas in the US—but the practice is not limited to house cats.

While it is illegal in the US to surgically modify an exotic animal, declawing is still done on large cats, often in an effort to allow cubs to more safely be handled in photo opportunities or for entertainment purposes.

“What people might not realize is that declawing a cat is not like trimming our fingernails; rather, it is removing part or all of the last bone of each digit,” says Adam Hartstone-Rose, professor of biological sciences at North Carolina State University and corresponding author of the research. “Like us, each cat finger has three bones, and declawing is literally cutting that third bone off at the joint.”

For the study, published in the journal Animals, the researchers looked at the muscular anatomy of over a dozen exotic cats—from smaller species including bobcats, servals, and ocelots, to lions and tigers—to determine the effect of declawing on their forelimb musculature.

They measured muscle density and mass, and also examined muscle fibers from both clawed and declawed exotic cats. They found that, for the larger species, declawing resulted in 73% lighter musculature in the forearm’s digital flexors.

These muscles are involved in unsheathing the claws. They also found that overall, forelimb strength decreased by 46% to 66%, depending on the size of the animal, and that other muscles in the forelimb did not compensate for these reductions.

“When you think about what declawing does functionally to a housecat, you hear about changes in scratching, walking, or using the litter box,” says lead author Lara Martens, an undergraduate student at NC State.

“But with big cats, there’s more force being put through the paws. So if you alter them, it is likely that the effects will be more extreme.”

This is because paw size and body mass don’t scale up at a 1:1 ratio. Paw area increases at a slower rate than does body mass (which is proportional to volume), so larger cats have smaller feet relative to their body size, and their paws must withstand more pressure.

“Additionally, big cats are more reliant on their forelimbs—they bear most of the weight, and these bigger cats use their forelimbs to grapple because they hunt much larger prey,” Martens says. “So biomechanically speaking, declawing has a more anatomically devastating effect in larger species.”
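As a rough illustration of that scaling argument (not an analysis from the study), the short Python sketch below assumes purely isometric scaling, under which paw contact area grows with body mass to the 2/3 power while weight grows in direct proportion to mass, so pressure on the paws rises with mass to the 1/3 power. The body masses and the 4 kg reference cat are hypothetical round numbers.

```python
# Illustrative isometric-scaling sketch: contact area ~ mass**(2/3),
# weight ~ mass, so paw pressure ~ mass**(1/3). Masses are hypothetical.

def relative_paw_pressure(mass_kg: float, reference_mass_kg: float = 4.0) -> float:
    """Paw pressure relative to a reference-sized cat under isometric scaling."""
    return (mass_kg / reference_mass_kg) ** (1.0 / 3.0)

for name, mass_kg in [("house cat", 4.0), ("bobcat", 10.0),
                      ("lion", 190.0), ("tiger", 220.0)]:
    ratio = relative_paw_pressure(mass_kg)
    print(f"{name:9s} ~{ratio:.1f}x the paw pressure of a 4 kg house cat")
```

Under these assumptions, a roughly 220 kg tiger loads each unit of paw area nearly four times as heavily as a 4 kg house cat, consistent with the authors’ point that the same surgery is far more consequential in larger species.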

“As scientists, it is our job to objectively document the effects of this surgery on the animals,” Hartstone-Rose says, “but it is hard to ignore the cruelty of this practice. These are amazing animals, and we should not be allowed to cripple them, or any animals, in this way.”

The work was done in partnership with colleagues from Carolina Tiger Rescue, a sanctuary that rescues exotic carnivores, especially big cats, who have often been neglected or mistreated.

Source: NC State


Grades suggest students don’t cheat on online exams

Unsupervised, online exams aren’t rife with cheating, findings show.

When Iowa State University switched from in-person to remote learning halfway through the spring semester of 2020, psychology professor Jason Chan was worried. Would unsupervised, online exams unleash rampant cheating?

His initial reaction flipped to surprise as test results rolled in. Individual student scores were slightly higher but consistent with their results from in-person, proctored exams. Those receiving B’s before the COVID-19 lockdown were still pulling in B’s when the tests were online and unsupervised. This pattern held true for students up and down the grading scale.

“The fact that the student rankings stayed mostly the same regardless of whether they were taking in-person or online exams indicated that cheating was either not prevalent or that it was ineffective at significantly boosting scores,” says Chan.
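One simple way to quantify whether rankings are preserved, assuming per-student averages from both exam formats are available, is a rank correlation. The sketch below uses Spearman’s rank correlation from SciPy on made-up scores; it is an illustrative check, not the analysis reported in the paper.

```python
from scipy.stats import spearmanr

# Hypothetical per-student exam averages, listed in the same student order.
in_person = [92, 85, 78, 88, 64, 71, 95, 59, 83, 76]  # proctored, pre-lockdown
online    = [94, 86, 80, 90, 66, 75, 96, 63, 85, 79]  # unproctored, post-lockdown

# A rank correlation near 1 means student orderings barely changed, even if
# every score drifted slightly upward, consistent with little effective cheating.
rho, p_value = spearmanr(in_person, online)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")
```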

To see whether this pattern held more broadly, Chan and Dahwi Ahn, a PhD candidate in psychology, analyzed test score data from nearly 2,000 students across 18 classes during the spring 2020 semester. Their sample ranged from large, lecture-style courses with high enrollment, like introduction to statistics, to advanced courses in engineering and veterinary medicine.

Across different academic disciplines, class sizes, course levels, and test styles (i.e., predominantly multiple choice or short answer), the researchers found the same results. Unsupervised, online exams produced scores very similar to in-person, proctored exams, indicating they can provide a valid and reliable assessment of student learning.

“Before conducting this research, I had doubts about online and unproctored exams, and I was quite hesitant to use them if there was an option to have them in-person. But after seeing the data, I feel more confident and hope other instructors will, as well,” says Ahn.

Both researchers say they’ve continued to give exams online, even for in-person classes. Chan says this format provides more flexibility for students who have part-time jobs or travel for sports and extracurriculars. It also expands options for teaching remote classes. Ahn led her first online course over the summer.

Why might cheating have had a minimal effect on test scores?

The researchers say students more likely to cheat might be underperforming in the class and anxious about failing. Perhaps they’ve skipped lectures, fallen behind with studying, or feel uncomfortable asking for help. Even with the option of searching Google during an unmonitored exam, students may struggle to find the correct answer if they don’t understand the content. In their paper, the researchers point to evidence from previous studies comparing test scores from open-book and closed-book exams.

Another factor that may deter cheating is academic integrity or a sense of fairness, something many students value, says Chan. Those who have studied hard and take pride in their grades may be more inclined to protect their exam answers from students they view as freeloaders.

Still, the researchers say instructors should be aware of potential weak spots with unsupervised, online exams. For example, some platforms have the option of showing students the correct answer immediately after they select a multiple-choice option. This makes it much easier for students to share answers in a group text.

To counter this and other forms of cheating, instructors can:

  • Wait to release exam answers until the test window closes
  • Use larger, randomized question banks (a brief sketch follows this list)
  • Add more options to multiple-choice questions and make the right choice less obvious
  • Adjust grade cutoffs
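
As a hedged illustration of the randomized-question-bank idea mentioned above (not a feature of any particular exam platform), the Python sketch below draws a different random subset and ordering of questions for each student, which makes shared answers less useful.

```python
import random

# Hypothetical question bank; a real bank would hold many more items per topic.
question_bank = [f"Question {i}" for i in range(1, 41)]  # 40 questions
QUESTIONS_PER_EXAM = 10

def build_exam(student_id: str, bank: list[str], n: int) -> list[str]:
    """Return a per-student random subset of questions, in shuffled order."""
    rng = random.Random(student_id)  # seeding by ID keeps each exam reproducible
    return rng.sample(bank, n)

print(build_exam("student-042", question_bank, QUESTIONS_PER_EXAM))
```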

Chan and Ahn say the spring 2020 semester provided a unique opportunity to research the validity of online exams for student evaluations. However, there were some limitations. For example, it wasn’t clear what role stress and other COVID-19-related disruptions may have played for students, faculty, and teaching assistants. Perhaps instructors were more lenient with grading or gave longer windows of time to complete exams.

The researchers say another limitation was not knowing if the 18 classes in the sample normally get easier or harder as the semester progresses. In an ideal experiment, half of the students would have taken online exams for the first half of the semester and in-person exams for the second half.

They attempted to account for these two concerns by looking at older test score data from a subset of the 18 classes during semesters when they were fully in-person. The researchers found the distribution of grades in each class was consistent with the spring 2020 semester and concluded that the materials covered in the first and second halves of the semester did not differ in their difficulty.

At the time of data collection for this study, ChatGPT wasn’t available to students. But the researchers acknowledge AI writing tools are a gamechanger in education and could make it much harder for instructors to evaluate their students. Understanding how instructors should approach online exams with the advent of ChatGPT is something Ahn intends to research.

The research findings appear in the Proceedings of the National Academy of Sciences. The study had support from the National Science Foundation Science of Learning and Augmented Intelligence Grant.

Source: Iowa State University


How breast milk benefits newborn brains

A micronutrient in human breast milk provides significant benefit to the developing brains of newborns, a new study suggests.

The finding further illuminates the link between nutrition and brain health and could help improve infant formulas used in circumstances when breastfeeding isn’t possible.

The study, published in the Proceedings of the National Academy of Sciences, also paves the way to study what role this micronutrient might play in the brain as we age.

Researchers found that the micronutrient, a sugar molecule called myo-inositol, was most prominent in human breast milk during the first months of lactation, when neuronal connections called synapses are forming rapidly in the infant brain.

This was true regardless of the mother’s ethnicity or background; the researchers profiled and compared human milk samples collected across sites in Mexico City, Shanghai, and Cincinnati by the Global Exploration of Human Milk study, which included healthy mothers of term singleton infants.

Further testing using rodent models as well as human neurons showed that myo-inositol increased both the size and number of synaptic connections between neurons in the developing brain, indicating stronger connectivity.

“Forming and refining brain connectivity from birth is guided by genetic and environmental forces as well as by human experiences,” says senior author Thomas Biederer, a senior scientist at the Jean Mayer USDA Human Nutrition Research Center on Aging (HNRCA) at Tufts University and faculty member at the Yale School of Medicine, where he leads a research group in the neurology department. “The impact of these factors is particularly important at two stages of life—during infancy, and later in life as one ages and synapses are gradually lost.”

Diet is one of the environmental forces that offers many opportunities for study. In early infancy, the brain may be particularly sensitive to dietary factors because the blood-brain barrier is more permeable, and small molecules taken in as food can more easily pass from the blood to the brain.

“As a neuroscientist, it’s intriguing to me how profound the effects of micronutrients are on the brain,” says Biederer. “It’s also amazing how complex and rich human breast milk is, and I now think it is conceivable that its composition is dynamically changing to support different stages of infant brain development.”

Similar levels of myo-inositol across women in very different geographic locations point to its generally important role in human brain development, he says.

Previous research by others has shown that brain inositol levels decline over time as infants develop. In adults, lower than normal brain inositol levels have been found in patients with major depressive disorder and bipolar disorder. Genetic alterations in myo-inositol transporters have been linked to schizophrenia. In contrast, higher than normal accumulations of myo-inositol have been identified in people with Down syndrome and in patients with Alzheimer’s disease.

“The current research does indicate that for circumstances where breastfeeding is not possible, it may be beneficial to increase the levels of myo-inositol in infant formula,” Biederer says.

However, Biederer says it is too soon to recommend that adults consume more myo-inositol, which can be found in significant quantities in certain grains, beans, bran, citrus fruits, and cantaloupe (but which is not present in great quantities in cow’s milk).

“We don’t know why inositol levels are lower in adults with certain psychiatric conditions, or higher in those with certain other diseases,” he says.

A host of research questions remain: Are lower inositol levels in people with depression or bipolar disorder a cause of those conditions, or a side effect of drugs used to treat them? Do higher than normal levels in people with Down syndrome and Alzheimer’s disease suggest that too much myo-inositol is problematic? What is the “right” level of myo-inositol to have in one’s brain for optimal brain health at various stages of life?

“My colleagues at the HNRCA and I are now pursuing research to test how micronutrients like myo-inositol may impact cells and connectivity in the aging brain,” says Biederer. “We hope this work leads to a better understanding of how dietary factors interplay with age-related brain aberrations.”

Reckitt Benckiser/Mead Johnson Nutrition and the Robert and Margaret Patricelli Family Foundation supported the work. Complete information on authors, funders, methodology, and conflicts of interest is available in the published paper.

The content is solely the responsibility of the authors and does not necessarily represent the official views of Reckitt Benckiser/Mead Johnson Nutrition or the Robert and Margaret Patricelli Family Foundation.

Source: Julia Rafferty for Tufts University


Team solves puzzle of when bees first evolved

Bees first evolved on an ancient supercontinent more than 120 million years ago, diversifying faster and spreading wider than previously suspected, a new study shows.

The study, published in Current Biology, also reconstructs the evolutionary history of bees, estimates their antiquity, and identifies their likely geographical expansion around the world.

The results indicate their point of origin was in western Gondwana, an ancient supercontinent that at that time included today’s continents of Africa and South America.

“There’s been a longstanding puzzle about the spatial origin of bees,” says Silas Bossert, assistant professor in the entomology department at Washington State University, who co-led the project with Eduardo Almeida, associate professor at the University of São Paulo, Brazil.

The researchers sequenced and compared genes from more than 200 bee species, and weighed them against traits from 185 bee fossils as well as extinct species, developing an evolutionary history and genealogical models of historical bee distribution.

In what may be the broadest genomic study of bees to date, they analyzed hundreds to thousands of genes at a time to make sure that the relationships they inferred were correct.

“This is the first time we have broad genome-scale data for all seven bee families,” says coauthor Elizabeth Murray, an assistant professor of entomology at Washington State.

Previous research established that the first bees likely evolved from wasps, transitioning from predators to collectors of nectar and pollen. This study shows they arose in arid regions of western Gondwana during the early Cretaceous period.

“For the first time, we have statistical evidence that bees originated on Gondwana,” Bossert says. “We now know that bees are originally southern hemisphere insects.”

The researchers found evidence that as the new continents formed, bees moved north, diversifying and spreading in a parallel partnership with angiosperms, the flowering plants. Later, they colonized India and Australia. All major families of bees appeared to split off prior to the dawn of the Tertiary period, 65 million years ago—the era when dinosaurs became extinct.

The tropical regions of the western hemisphere have an exceptionally rich flora, and that diversity may be due to their longtime association with bees, the authors note. One quarter of all flowering plants belong to the large and diverse rose family, which makes up a significant share of the tropical and temperate host plants for bees.

Bossert’s team plans to continue their efforts, sequencing and studying the genetics and history of more species of bees. Their findings are a useful step in revealing how bees and flowering plants evolved together. Understanding how bees spread and filled their modern ecological niches could also help keep pollinator populations healthy.

“People are paying more attention to the conservation of bees and are trying to keep these species alive where they are,” Murray says. “This work opens the way for more studies on the historical and ecological stage.”

Additional coauthors are from Harvard University; Cornell University; the Smithsonian Institution; the Federal University of Paraná, Brazil; the State Museum of Natural History, Stuttgart; York University; the University of Kiel; the US Department of Agriculture; and Washington State.

Source: Washington State University


Citizen science motivates Girl Scouts to tackle problems

A program designed to get Girl Scouts involved in citizen science motivated them to tackle scientific or environmental problems in their own communities, researchers report.

The findings demonstrate the impact that citizen science projects, programs in which members of the public participate in real scientific research, can have on their participants, and they offer lessons for other organizations on how to structure STEM-focused learning opportunities around citizen science.

“We’ve found that after participating in citizen science, students do not just learn more science content or the process of science, or have better attitudes or trust in science,” says study coauthor Caren Cooper, professor of public science at North Carolina State University. “It can be the basis for motivating action.”

The study evaluated the impact of a partnership between the Girl Scouts of the USA and SciStarter.org, an online hub for citizen science projects. Between 2017 and early 2020, more than 200 Girl Scout troops with girls between the ages of 4 and 11 participated in a program called “Think Like a Citizen Scientist.”

First, the Girl Scouts learned about making observations or predictions, collecting data, and doing data analysis. Then, they signed up to participate in a citizen science project through SciStarter. The most popular project was one led by applied ecology researchers at NC State called “Ant Picnic,” where volunteers created a picnic for ants, waited an hour, and recorded the number of ants that showed up.

“These citizen science projects were chosen by Girl Scouts and SciStarter to be age-appropriate, and were part of a multi-stage curriculum, or ‘journey,’ designed to introduce girls to citizen science,” says lead author Haley Smith, a PhD candidate in NC State’s Fisheries, Wildlife, and Conservation Biology program.

“The Girl Scouts’ program design is a really great model, where learning activities are structured to build off of each other. Girls gain more independence as they go along. That’s something other organizations could use to set up participants in citizen science projects for success.”

After participating in the citizen science projects, the Girl Scouts completed “Take Action Projects” in their communities. These projects included activities such as installing recycling bins; educating family members about water pollution; creating gardens at a YMCA to attract native insects; raising money to buy science books for the local library; and even sewing sleeping bags for a hedgehog at a nature center.

Most girls (81%) chose projects addressing science or environmental topics, and many of the projects (66%) were designed to educate or inspire others.

“In the overwhelming majority of cases, we saw girls taking what they had done with citizen science and extending that in some way, such as by supporting science literacy, promoting environmental goals like recycling, or providing habitat,” Smith says.

Girl Scout troop leaders reported in surveys that girls who completed the program learned about science and environmental topics and about the process of science, developed confidence in STEM, and received other benefits. Most importantly, they learned to identify and address problems in their communities.

“Because of this research, we now have empirical qualitative data that reinforces the crucial role of organizations like the Girl Scouts in facilitating participation in citizen science, and therefore in expanding awareness of, access to, and engagement in, science and related local actions,” says Darlene Cavalier, coauthor and founder of SciStarter.

The study appears in the journal Environmental Education Research.

The National Science Foundation supported the work.

Source: NC State
