To woo a mate, male whales would rather fight than sing

Male whales along Australia’s eastern seaboard are giving up singing to attract a mate, switching instead to fighting off their male competition.

Researchers analyzed almost two decades of data on humpback whale behavior and found singing may no longer be in vogue when it comes to seduction.

“In 1997, a singing male whale was almost twice as likely to be seen trying to breed with a female when compared to a non-singing male,” says Rebecca Dunlop, associate professor at the University of Queensland’s School of Biological Sciences.

“But by 2015 it had flipped, with non-singing males almost five times more likely to be recorded trying to breed than singing males. It’s quite a big change in behavior, so humans aren’t the only ones subject to big social changes when it comes to mating rituals.”
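Those “twice as likely” and “five times more likely” comparisons are ratios of how often each kind of male was recorded attempting to breed. A toy Python sketch with made-up sighting counts (not the study’s data) shows the arithmetic:

```python
# Illustrative only: hypothetical sighting counts, not the study's data.
# "X times as likely" here compares the rate of recorded breeding attempts
# per sighting for singing vs. non-singing males.

def attempt_rate(attempts, sightings):
    """Fraction of sightings in which the male was seen attempting to breed."""
    return attempts / sightings

# Hypothetical 1997-style numbers: singers attempt at roughly twice the rate.
singers_1997 = attempt_rate(40, 100)
non_singers_1997 = attempt_rate(21, 100)

# Hypothetical 2015-style numbers: non-singers attempt at five times the rate.
singers_2015 = attempt_rate(8, 100)
non_singers_2015 = attempt_rate(40, 100)

print(singers_1997 / non_singers_1997)   # roughly 1.9: singers almost twice as likely
print(non_singers_2015 / singers_2015)   # roughly 5: non-singers five times as likely
```

The counts are purely for illustration; the study’s actual figures come from fitted models over nearly two decades of observations.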

The researchers believe the change has happened progressively as populations recovered after the widespread cessation of whaling in the 1960s.

“If competition is fierce, the last thing the male wants to do is advertise that there is a female in the area, because it might attract other males which could out-compete the singer for the female,” Dunlop says.

“By switching to non-singing behavior, males may be less likely to attract competition and more likely to keep the female. If other males do find them, then they either compete, or leave.

“With humpbacks, physical aggression tends to express itself as ramming, charging, and trying to head slap each other. This runs the risk of physical injury, so males must weigh up the costs and benefits of each tactic.”

“Male whales were less likely to sing in the presence of other males. Singing was the dominant mating tactic in 1997, but within the space of seven years this had turned around,” she says.

“It will be fascinating to see how whale mating behavior continues to be shaped in the future.”

Celine Frere, an associate professor and study coauthor, says previous work from Professor Michael Noad found the whale population grew from approximately 3,700 whales to 27,000 between 1997 and 2015.

“We used this rich dataset, collected off Queensland’s Peregian Beach, to explore how this big change in whale social dynamics could lead to changes in their mating behavior,” Frere says.

“We tested the hypothesis that whales may be less likely to use singing as a mating tactic when the population size is larger, to avoid attracting other males to their potential mate.”

The research appears in Communications Biology.

Source: University of Queensland

3 faulty genes may explain some severe COVID in kids

A trio of faulty genes that fail to put the brakes on the immune system’s all-out assault on SARS-CoV-2 may help explain some severe COVID cases in kids.

One of the most terrifying aspects of the COVID pandemic has been its unpredictably severe impact on some children. While most infected kids have few or no symptoms, one in 10,000 fall suddenly and dramatically ill about a month after a mild infection, landing in the hospital with inflamed hearts, lungs, kidneys, and brains, spiked temperatures, skin rashes, and abdominal pain. Researchers call it MIS-C—multisystem inflammatory syndrome in children.

Some suspected that MIS-C is a SARS-CoV-2-specific form of Kawasaki disease, a rare childhood inflammatory condition that has long puzzled clinicians and seems to be triggered by many different viruses.

The new findings in Science constitute the first mechanistic explanation for any form of Kawasaki disease.

“The patients are sick not because of the virus,” says Rockefeller University geneticist Jean-Laurent Casanova. “They’re sick because they excessively respond to the virus.”

An enduring mystery of COVID has been its wildly varied impact on individuals, with one person getting a sore throat and another winding up on a ventilator—or worse. In February 2020, Casanova and his collaborators in the COVID Human Genetic Effort (CHGE), an international consortium of researchers seeking the human genetic and immunological bases of all the different ways a SARS-CoV-2 infection can manifest, began searching for inborn errors (genetic mutations) of immunity among healthy people who had severe forms of COVID. Among their targets were children with MIS-C.

Casanova and his CHGE colleagues assembled an ever-growing database of hundreds of fully sequenced genomes of COVID victims from hospitals across North America, Asia, Europe, Latin America, Oceania, and the Middle East. They have since made several discoveries about the genetic predispositions of individuals who develop severe COVID.

For the current study, the researchers hypothesized that in some children, MIS-C could be caused by a gene defect that rendered them vulnerable to an inflammatory condition provoked by a SARS-CoV-2 infection, says Casanova, professor in and head of the St. Giles Laboratory of Human Genetics of Infectious Diseases at Rockefeller.

To find out, they analyzed the genomes of 558 children who’d had MIS-C. Five unrelated kids from four countries—Turkey, Spain, the Philippines, and Canada—shared mutations in three closely related genes controlling the OAS-RNase L pathway, which is involved in viral response.

Normally, this pathway is induced by type 1 interferons and activated by viral infection, which together turn on the OAS1, OAS2, and OAS3 molecules. These in turn activate RNase L, an antiviral enzyme that chops up single-stranded viral and cellular RNA, shutting down the cell. When a cell goes dark, the virus can’t hijack its replication machinery to spread disease.

But in the five children with these mutations, the pathway failed to activate in response to the presence of SARS-CoV-2. The cell instead sensed the viral RNA using another pathway known as MAVS, which provokes an army of dendritic cells, phagocytes, monocytes, and macrophages to attack the viral invaders en masse. The MAVS pathway acts as a sort of accelerator of the immunological response.

The OAS-RNase L pathway, on the other hand, is supposed to act as the brake. But in MIS-C, the brake fails, and the response careens out of control.

“Phagocytes produce excessive levels of inflammatory cytokines and chemokines and growth factors and interferons—you name it,” Casanova says. Massive inflammation ensues.

Because MIS-C is clinically and immunologically so aligned with other examples of Kawasaki disease, the researchers believe that MIS-C is a variety of the disease driven by a SARS-CoV-2 infection—the first such provocateur of Kawasaki to be pinpointed.

Why this reaction only takes place about a month after infection remains unknown. “We now understand the molecular and cellular basis of the disease, but we don’t understand the timing,” Casanova says.

Although the findings shed light on how problem genes can kick off MIS-C, they only account for 1% of the children in the study. As for the rest of the children who had COVID only to wind up hospitalized weeks later—the vast majority of whom recover quickly with treatment—the researchers plan to seek out other mutations in the OAS-RNase L pathway or in related pathways.

“We clearly now have one pathway that is causal of disease when it’s disrupted,” he says. “There’s every good reason to believe that there will be many other patients with MIS-C who have mutated genes in this pathway. Is that going to be 5%, 10%, 50%, 100%? I don’t know. But for sure, there will be mutations in other genes controlling this pathway.”

Source: Rockefeller University

Can CBD help smokers quit?

Cannabidiol or CBD inhibits the metabolism of nicotine, meaning it could help tobacco users curb the urge for that next cigarette, according to a new study.

Researchers tested the effects of CBD, a non-psychoactive component of cannabis, and its major metabolite on human liver tissue and cell samples, showing that it inhibited a key enzyme for nicotine metabolism.

For the nicotine-addicted, slowing metabolism of the drug could allow them to wait before feeling the need to inhale more of it along with all the other harmful things found in cigarette smoke.

More research is needed to confirm these effects in humans and determine dosage levels, but these findings show promise, says Philip Lazarus, professor of pharmaceutical sciences at Washington State University.

“The whole mission is to decrease harm from smoking, which is not from the nicotine per se, but all the carcinogens and other chemicals that are in tobacco smoke,” says Lazarus, senior author of the study in the journal Chemical Research in Toxicology. “If we can minimize that harm, it would be a great thing for human health.”

Cigarette smoking is still a major health problem, with smoking-related causes accounting for about one in five deaths in the US every year. While often seen as less harmful, many other nicotine delivery methods including vaping, snuff, and chew also contain chemicals that can cause cancer and other illnesses.

For the current study, researchers tested CBD and its major metabolite, 7-hydroxycannabidiol (the compound CBD converts to in the body), on microsomes from human liver tissue as well as on microsomes from specialized cell lines that allowed them to focus on individual enzymes related to nicotine metabolism.

They found that CBD inhibited several of these enzymes, including the major one for nicotine metabolism, identified as CYP2A6. Other research has found that more than 70% of nicotine is metabolized by this enzyme in the majority of tobacco users. The impact of CBD on this particular enzyme appeared quite strong, inhibiting its activity by 50% at relatively low CBD concentrations.

“In other words, it appears that you don’t need much CBD to see the effect,” says Lazarus.
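That 50% figure is essentially an IC50, the inhibitor concentration at which an enzyme’s activity is cut in half. As a hedged illustration (the IC50 value below is hypothetical, not the paper’s measurement), the standard one-site inhibition model looks like:

```python
# Illustrative sketch, not the study's data: fractional enzyme inhibition
# as a function of inhibitor concentration, using the standard one-site model
#     inhibition(c) = c / (c + IC50)
# where IC50 is the concentration producing 50% inhibition.

def fraction_inhibited(conc_um, ic50_um):
    """Fraction of enzyme activity lost at a given inhibitor concentration (uM)."""
    return conc_um / (conc_um + ic50_um)

# Suppose, hypothetically, CBD inhibited CYP2A6 with an IC50 of 1 uM:
ic50 = 1.0
print(fraction_inhibited(1.0, ic50))   # 0.5: half the activity lost at the IC50
print(fraction_inhibited(9.0, ic50))   # roughly 0.9: ~90% inhibition at 9x the IC50
```

A low IC50 is what “you don’t need much CBD to see the effect” means: half-maximal inhibition is reached at a small concentration.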

Lazarus’ team is currently developing a clinical study to examine the effects of CBD on nicotine levels in smokers, comparing blood nicotine levels in smokers taking CBD versus those taking a placebo over the course of six to eight hours. Then, they hope to do a much larger study looking at CBD and nicotine addiction.

Additional coauthors are from Penn State and Washington State. The National Institutes of Health supported the work.

Source: Washington State

To ease loneliness, volunteer 100 hours each year

Volunteering more than 100 hours per year is particularly good at alleviating the loneliness of older adults, research finds.

Loneliness among older adults is a major public health problem. Numerous research studies have consistently documented the adverse effects of loneliness on mortality, physical and mental health, cognitive functions, and health behaviors.

The study in the Journal of Gerontological Social Work examines the connection between volunteering and the occurrence of loneliness among older adults. What set this study apart from other published works were its long follow-up period and its consideration of differences based on gender.

Researchers used data from the Health and Retirement Study (2006-2018), and the sample included 5,000 individuals aged 60 and over who did not experience loneliness in 2006. Participants reported how much time they spent in formal volunteer work (efforts done under the management of an organization): none, less than 100 hours per year, or more than 100 hours per year. They were also asked about the frequency of feeling lonely.

At the 12-year follow-up, volunteering more than 100 hours per year was associated with a lower risk of loneliness compared to not volunteering. This protective effect was not observed for those who volunteered less than 100 hours per year, the study indicated.

The benefits of volunteering in mitigating loneliness did not differ by gender, says Joonyoung Cho, the study’s lead author and a doctoral student in psychology and social work at the University of Michigan.

Cho, along with coauthor Xiaoling Xiang, assistant professor of social work, says more volunteering programs—such as Experience Corps and Foster Grandparents—can be offered to older adults to reduce loneliness in later life.

Source: University of Michigan

Asphalt volcanoes are rare habitat for lots of fishes

Researchers offer the first description of the animal communities around the asphalt volcanoes about 10 miles off the coast of Santa Barbara, California.

Santa Barbara Channel’s natural oil seeps are a beach-goer’s bane, flecking the shores with blobs of tar. But the leaking petroleum also creates fascinating geologic and biologic features. These asphalt volcanoes, virtually unique in the world, provide a rare habitat in a region known for its underwater biodiversity.

The findings, published in the Bulletin of Marine Science, detail the different kinds of fishes that live on and around the volcanoes.

Scientists first discovered asphalt volcanoes in the Gulf of Mexico. These vents erupt hot tar instead of lava, slowly building up smooth mounds that can be several dozen feet tall. In 2010, a team led by UC Santa Barbara professor Dave Valentine documented two volcanoes in the Santa Barbara Channel, which they named Il Duomo and Il Duomito; the taller of the two, Il Duomo, is about 65 feet tall. The group published an account of the geology and characterized the habitat. Since then, scientists have found only one other site, off the coast of Angola.

“Even in our channel, that has lots of seeps, there’s only two asphalt volcanoes that we know of,” says lead author Milton Love, a researcher at UC Santa Barbara’s Marine Science Institute. “So it takes an almost unique set of circumstances to form these.”

Yet, virtually nothing was known about the animals living at asphalt volcanoes aside from a brief description Valentine and his coauthors provided in their 2010 paper. So, Love and his colleagues used footage from an autonomous underwater vehicle to characterize the fish communities that inhabit these remarkable features. Their goal was to figure out who lives where and why. The team combed through eight hours of surveys—encompassing 2,743 still images—gradually building up a roster of the neighborhood.

Although fish densities were low, the team found a relatively diverse assemblage of species. Altogether, they observed 1,836 fish representing no less than 43 species. And at least 53.5% of these species were rockfishes. “This is what you would expect to find if you surveyed a tall and fairly smooth rock reef in this location,” Love says.

Certain fish preferred the volcanoes’ uniform slopes, including rockfishes like the swordspine, greenblotched, and greenspotted. Meanwhile, a variety of poachers and flatfishes populated the muddy sea bottom surrounding the mounds. Oddly enough, there were haloes several meters wide around the volcanoes devoid of flatfishes. Love suspects those fish that ventured too close were spotted against the black tar and eaten.

The researchers observed a few taxa that moved between the mud and the edges of the asphalt, such as shortspine combfish, greenstriped rockfish, and spotted ratfish. Notably rare were the “sheltering guild” of fishes, such as bocaccio and cowcod, which require nooks and crannies that are absent on the asphalt volcanoes’ smooth slopes, as well as the surrounding sea floor. However, “Even small amounts of asphalt in an image had a substantial effect on the species that were observed,” the authors write, as soft-seafloor fishes kept away from the hard tar.

Although dormant now, the asphalt volcanoes are relatively new features. “They probably developed around 40,000 years ago,” Love says. And he was quick to point out that they were quite different just a few thousand years ago. “What we see now we wouldn’t have seen even 20,000 years ago, when you had these glacial maximums and sea level minimums,” he says. At that time, the highest of these features would have been just a few dozen feet below the surface. “It would’ve had an entirely different group of fishes and invertebrates, and it would’ve had algae all over it.”

Today, the volcanoes have a stark beauty. Colorful invertebrates pop out in sharp relief against the black substrate. “You have all kinds of sponges and deep-water corals,” Love says. A particularly striking orange animal seems to rim the edges of cracks and fissures. “Is it a sea anemone? Nobody seems to know.” The group hopes to publish an account of the invertebrate assemblages in the future.

Unfortunately, the team also found evidence of illegal fishing, including lost lines, weights, and even a rockfish carcass still on the hook. Although the fish communities are typical of the area, Love believes California should protect these sites given how unique they are. “Not only are there only three places known that have this habitat, but this is the only one in shallow water,” he says.

Source: UC Santa Barbara

New $1 test is a better way to detect COVID

A new diagnostic test is 1,000 times more sensitive than conventional tests, researchers report.

When Srikanth Singamaneni and Guy Genin, both professors of mechanical engineering and materials science at the McKelvey School of Engineering at Washington University in St. Louis, established a new collaboration with researchers from the School of Medicine in late 2019, they didn’t know the landscape of infectious disease research was about to shift dramatically. The team had one goal in mind: tackle the biggest infectious disease problem facing the world right then.

“Srikanth and I had a vision of a simple, quantitative diagnostic tool, so we connected with infectious disease physicians here at WashU and asked them, ‘What are the most important questions that could be answered if you could get really detailed information cheaply at the point of care?’” says Genin, professor of mechanical engineering.

“Greg Storch told us that one of the most important challenges facing the field of infectious disease is finding a way to figure out quickly if a patient has a bacterial infection and should get antibiotics or has a viral infection, for which antibiotics will not be effective.”

Storch, professor of pediatrics at the School of Medicine, was interested in diseases that affect most people regularly—colds, strep throat, or the flu—but that weren’t getting as much research attention as rarer diseases.

“Even with great advances that have been made in infectious disease diagnostics, there is still a niche for tests that are simple, rapid, and sensitive,” Storch says. “It would be especially powerful if they could provide quantitative information. Tests with these characteristics could be employed in sophisticated laboratories or in the field.”

Drawing on his years of experience in developing nanomaterials for applications in biology and medicine, Singamaneni sought to overcome these limitations in point-of-care diagnostic tests. Singamaneni and his lab developed ultrabright fluorescent nanolabels called plasmonic-fluors, which could be quickly integrated into a common testing platform, the lateral flow assay (LFA).

Plasmon-enhanced LFAs (p-LFAs) improve inexpensive, readily available rapid tests to levels of sensitivity required by physicians for confidence in test results without the need for lab-based confirmation.

According to new findings, the team’s p-LFAs are 1,000 times more sensitive than conventional LFAs, and show results via both a visible color change and a fluorescence signal on the strip.

When analyzed using a fluorescence scanner, p-LFAs are also substantially faster than gold-standard lab tests, returning results in only 20 minutes instead of several hours, with comparable or improved sensitivity.

The p-LFAs can detect and quantify concentrations of proteins, enabling them to detect bacterial and viral infections as well as markers of inflammation that point to other diseases.

“Plasmonic-fluors are composed of metal nanoparticles that serve as antennae to pull in the light and enhance the fluorescence emission of molecular fluorophores, thus making it an ultrabright nanoparticle,” Singamaneni explains.

“Our p-LFAs can pick up even very small concentrations of antibodies and antigens, typical markers of infection, and give clinicians clear, quick results without the need for specialized equipment. For quantitative testing beyond the initial screening, the same LFA strip can be scanned with a fluorescence reader, enabling rapid and ultrasensitive colorimetric and fluorometric detection of disease markers with only one test.”

“It’s like turning up the volume on standard color-changing test strips. Instead of getting a faint line indicating only a positive or negative result, the new p-LFAs give clearer results with fewer particles, enabling one to move from simply ‘yes or no?’ to exactly ‘how much?’ with the aid of an inexpensive, portable scanner,” says Jeremiah Morrissey, a research professor in anesthesiology in the Division of Clinical and Translational Research at the School of Medicine. Morrissey is a coauthor of the new study and a long-term collaborator with the Singamaneni lab.

This improved testing capability has obvious benefits for a population now all too familiar with the need for quick and reliable test results and the risk of false negatives.

“When we took on this problem in 2019, we thought our biggest challenge would be getting an adequate number of samples from sick people,” Genin recalls. “Where on Earth could we find a massive set of samples from patients whose symptoms were carefully documented and whose diagnosis was verified by slow and expensive PCR tests?”

In a matter of months, COVID-19 would erase that obstacle while introducing a whole host of new challenges and opportunities.

“The pandemic was a big shift for us, like it was for everyone,” says first author Rohit Gupta, who worked on the p-LFA study as a graduate student in Singamaneni’s lab and is now a senior scientist at Pfizer.

“We had to move away from our original focus on distinguishing viruses from bacteria, but it turned out to be an opportunity to do practical science with real stakes. We were working with epidemiologists to get samples for testing, with diagnosticians to compare our test to what was available, and with clinicians to gain insights into the real needs for patient care.”

Input from the entire collaboration helped Gupta and Singamaneni refine the design of the p-LFAs, which ultimately achieved 95% clinical sensitivity and 100% specificity for SARS-CoV-2 antibodies and antigens. Genin describes the results as stunning.

“We didn’t know it was going to work so well,” he says. “We knew it would be good, but we didn’t know this $1 test with a $300 readout device would be so much better—10 times better—than state of the art that we all used during the COVID pandemic.”
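The 95% sensitivity and 100% specificity figures above have their standard diagnostic meanings. A small sketch with made-up confusion-matrix counts (not the study’s actual sample) shows how such figures are computed:

```python
# Hypothetical confusion-matrix counts, for illustration only.
# Sensitivity = true positives / all truly positive samples
# Specificity = true negatives / all truly negative samples

def sensitivity(tp, fn):
    """Fraction of truly positive samples the test correctly flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of truly negative samples the test correctly clears."""
    return tn / (tn + fp)

# Made-up counts that would reproduce the reported 95% / 100% figures:
tp, fn = 95, 5     # 100 truly positive samples, 5 missed
tn, fp = 100, 0    # 100 truly negative samples, none falsely flagged

print(sensitivity(tp, fn))   # 0.95
print(specificity(tn, fp))   # 1.0
```

High sensitivity limits false negatives (missed infections), while high specificity limits false positives; a test strong on both axes needs no lab-based confirmation.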

Now that they’ve proven p-LFAs can outperform standard lab tests in sensitivity, speed, convenience, and cost for one disease, the team is looking to develop new applications for the technology, including returning to their original goal of identifying bacterial versus viral infections and getting their diagnostic tool into the hands of physicians around the world.

The p-LFA technology has been licensed to Auragent Bioscience LLC by Washington University’s Office of Technology Management. Singamaneni and Morrissey are among the cofounders of Auragent.

“We expect to have p-LFAs commercially available in the next one to two years,” Singamaneni says. “Right now, we’re working on improving our portable scanner technology, which adds a more sensitive, fluorescent reading capability to the test strips in addition to the color change that can be seen with the naked eye. We think we can get that cost down to a point where it’s accessible to rural clinics in the US and abroad, which was one of our original goals.”

“We’re also excited about the potential to detect many more diseases than COVID, possibly using a skin patch that can take a painless sample,” Singamaneni adds.

“This technology has the potential to detect any number of diseases, ranging from STIs to respiratory infections and more, as well as cytokines indicative of inflammation seen in conditions such as rheumatoid arthritis and sepsis.”

The research appears in Nature Biomedical Engineering.

Support for the research came from the National Science Foundation, the National Cancer Institute-Innovative Molecular Analysis Technologies, and the Washington University Institute of Clinical and Translational Sciences from the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH).

Source: Washington University in St. Louis

PFAS can thwart immune system ‘first responders’

New research in cells finds that the PFAS chemical GenX suppresses the neutrophil respiratory burst—the method white blood cells known as neutrophils use to kill invading pathogens.

The study is an important first step in understanding how both legacy and emerging PFAS chemicals might affect the body’s innate immune system.

PFAS, or per- and polyfluoroalkyl substances, are a class of chemicals used to make consumer and industrial products more resistant to water, stains, and grease. According to the US Environmental Protection Agency, there are more than 12,000 known PFAS, which also include fluoroethers such as GenX.

“It’s pretty well-established that PFAS are toxic to the adaptive immune system, but there hasn’t been as much research done on their effects on the innate immune system,” says Drake Phelps, a former PhD student at North Carolina State University and first author of the study.

The human immune system has two branches: adaptive and innate. The adaptive branch contains T cells and B cells that “remember” pathogens the body has encountered, but it is slow to mount a defense, acting days—sometimes weeks—after it detects a pathogen.

The innate immune system serves as the body’s first responders, and contains white blood cells that can be dispatched to the site of an invasion within hours. These white blood cells include neutrophils, which can dump reactive oxygen species—think tiny amounts of bleach or hydrogen peroxide that neutrophils manufacture inside their cells—directly onto pathogens, killing them. That process is called the respiratory burst.

Phelps and the research team looked at the effect of nine environmentally relevant legacy and emerging PFAS on neutrophils from zebrafish embryos, neutrophil-like cells (cells that can be chemically treated to behave like neutrophils), and human neutrophils isolated from donor blood.

Emerging PFAS are chemicals, like GenX, developed to replace older, legacy PFAS that had proven toxic. All of the PFAS included in this study were detected in both the Cape Fear River in North Carolina and the blood serum of residents whose drinking water came from the Cape Fear River.

The embryos and cells were exposed to 80 micromolar solutions of each chemical:
perfluorooctanoic acid (PFOA), perfluorooctane sulfonic acid potassium salt (PFOS-K), perfluorononanoic acid (PFNA), perfluorohexanoic acid (PFHxA), perfluorohexane sulfonic acid (PFHxS), perfluorobutane sulfonic acid (PFBS), ammonium perfluoro(2-methyl-3-oxahexanoate) (GenX), 7H-perfluoro-4-methyl-3,6-dioxa-octane sulfonic acid (Nafion byproduct 2), and perfluoromethoxyacetic acid sodium salt (PFMOAA-Na).

Of the nine PFAS tested, only GenX suppressed the neutrophil respiratory burst in embryonic zebrafish, neutrophil-like cells, and human neutrophils. PFHxA also suppressed the respiratory burst, but only in embryonic zebrafish and neutrophil-like cells.

The researchers caution that while the results of this preliminary study are interesting, they raise more questions than they answer.

“The longest chemical exposure in our study was four days, so obviously we can’t compare that to real human exposure of four decades,” says Jeff Yoder, professor of comparative immunology and corresponding author of the work. “We looked at a high dose of single PFAS over a short period, whereas people in the Cape Fear River basin were exposed to a mixture of PFAS—a low dose over a long period.

“So while we can say that we see a toxic effect from a high dose in the cell lines, we can’t yet say what effects long-term exposure may ultimately have on the immune system. This paper isn’t the end of the road—it’s the first step. Hopefully our work may help prioritize further study of these two chemicals.”

The study appears in the Journal of Immunotoxicology and had support from the National Institute of Environmental Health Sciences (NIEHS), the North Carolina State University Center for Environmental and Health Effects of PFAS, and the North Carolina State University Center for Human Health and the Environment (CHHE). Jamie DeWitt, professor of pharmacology and toxicology at East Carolina University, is coauthor.

Source: NC State

Inhalable powder could shield lungs from COVID

Researchers have developed an inhalable powder that could protect lungs and airways from viral invasion.

The powder, called Spherical Hydrogel Inhalation for Enhanced Lung Defense, or SHIELD, reduced infection in both mouse and non-human primate models over a 24-hour period, and can be taken repeatedly without affecting normal lung function.

“The idea behind this work is simple—viruses have to penetrate the mucus in order to reach and infect the cells, so we’ve created an inhalable bioadhesive that combines with your own mucus to prevent viruses from getting to your lung cells,” says Ke Cheng, corresponding author of the paper describing the work. “Mucus is the body’s natural hydrogel barrier; we are just enhancing that barrier.”

Cheng is a professor in regenerative medicine at North Carolina State University’s College of Veterinary Medicine and a professor in the NC State/UNC-Chapel Hill joint department of biomedical engineering.

“SHIELD… works like an ‘invisible mask’ for people in situations where masking is difficult…”

The inhalable powder microparticles are composed of gelatin and poly(acrylic acid) grafted with a non-toxic ester. When introduced to a moist environment—such as the respiratory tract and lungs—the microparticles swell and adhere to the mucosal layer, increasing the “stickiness” of the mucus.

The effects are most potent during the first eight hours after inhalation. SHIELD biodegrades over a 48-hour period, and is completely cleared from the body.

In a mouse model, SHIELD blocked SARS-CoV-2 pseudovirus particles with 75% efficiency four hours after inhalation, which fell to 18% after 24 hours. The researchers found similar results when testing against pneumonia and H1N1 viruses.

In a non-human primate model of both the original and Delta SARS-CoV-2 variants, SHIELD-treated subjects had reduced viral loads—50- to 300-fold lower than those of control subjects—and none of the symptoms commonly associated with infection in primates, such as lung inflammation or fibrosis. Since primates do not exhibit the same symptoms of infection as humans, viral load is the standard marker used to determine exposure.

The researchers also looked at potential toxicity both in vitro and in vivo: 95% of cell cultures exposed to a high concentration (10 mg/mL) of SHIELD remained healthy, and mice that were given daily doses for two weeks retained normal lung and respiratory function.

“SHIELD is easier and safer to use than other physical barriers or anti-virus chemicals,” Cheng says. “It works like an ‘invisible mask’ for people in situations where masking is difficult, for example during heavy exercise, while eating or drinking, or in close social interactions. People can also use SHIELD on top of physical masking to have better protection.

“But the beauty of SHIELD is that it isn’t necessarily limited to protecting against COVID-19 or flu. We’re looking at whether it could also be used to protect against things like allergens or even air pollution—anything that could potentially harm the lungs.”

The study appears in Nature Materials. Funding comes from the National Institutes of Health, the American Heart Association, and special funding from the NC State Provost’s Office. The researchers have filed a patent and are working on FDA approval for human use.

Source: NC State


How does ChatGPT differ from human intelligence?

If ChatGPT sounds like a human, does that mean it learns like one, too? And just how similar is the computer brain to a human brain?

ChatGPT, a new technology developed by OpenAI, is so uncannily adept at mimicking human communication that it will soon take over the world—and all the jobs in it. Or at least that’s what the headlines would lead the world to believe.

In a February 8 conversation organized by Brown University’s Carney Institute for Brain Science, two Brown scholars from different fields of study discussed the parallels between artificial intelligence and human intelligence. The discussion on the neuroscience of ChatGPT offered attendees a peek under the hood of the machine learning model-of-the-moment.

Ellie Pavlick is an assistant professor of computer science and a research scientist at Google AI who studies how language works and how to get computers to understand language the way that humans do.

Thomas Serre is a professor of cognitive, linguistic, and psychological sciences and of computer science who studies the neural computations supporting visual perception, focusing on the intersection of biological and artificial vision. Joining them as moderators were Diane Lipscombe and Christopher Moore, the Carney Institute’s director and associate director, respectively.

Pavlick and Serre offered complementary explanations of how ChatGPT functions relative to human brains, and what that reveals about what the technology can and can’t do. For all the chatter around the new technology, the model isn’t that complicated and it isn’t even new, Pavlick said. At its most basic level, she explained, ChatGPT is a machine learning model designed to predict the next word in a sentence, and the next word, and so on.
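The next-word prediction task Pavlick describes can be illustrated with a toy model. The sketch below uses simple bigram counts over a tiny made-up corpus rather than a neural network, so it is only an analogy for the prediction objective, not for how ChatGPT is actually implemented:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny corpus, then predict the most frequent successor.
# Models like ChatGPT learn such statistics with a neural network over
# vast amounts of text, but the prediction task itself is the same.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' follows 'the' twice, 'mat' only once
```

A model like ChatGPT plays the same guessing game, but with billions of learned parameters instead of a lookup table, which is what lets it generalize to sentences it has never seen.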

This type of predictive-learning model has been around for decades, said Pavlick, who specializes in natural language processing. Computer scientists have long tried to build models that exhibit this behavior and can talk with humans in natural language. To do so, a model needs access to a database of traditional computing components that allow it to “reason” over complex ideas.

What is new is the way ChatGPT is trained, or developed. It has access to unfathomably large amounts of data—as Pavlick said, “all the sentences on the internet.”

“ChatGPT, itself, is not the inflection point,” Pavlick said. “The inflection point has been that sometime over the past five years, there’s been this increase in building models that are fundamentally the same, but they’ve been getting bigger. And what’s happening is that as they get bigger and bigger, they perform better.”

What’s also new is the way that ChatGPT and its competitors are available for free public use. To interact with a system like ChatGPT even a year ago, Pavlick said, a person would need access to a system like Brown’s Compute Grid, a specialized tool available to students, faculty, and staff only with certain permissions, and would also need a fair amount of technological savvy. But now anyone, of any technological ability, can play around with the sleek, streamlined interface of ChatGPT.

Does ChatGPT really think like a human?

Pavlick said that the result of training a computer system with such a massive data set is that it seems to pick up general patterns and gives the appearance of being able to generate very realistic-sounding articles, stories, poems, dialogues, plays, and more. It can generate fake news reports, fake scientific findings, and produce all sorts of surprisingly effective results—or “outputs.”

The effectiveness of these results has prompted many people to believe that machine learning models have the ability to think like humans. But do they?

ChatGPT is a type of artificial neural network, explained Serre, whose background is in neuroscience, computer science, and engineering. That means that the hardware and the programming are based on an interconnected group of nodes inspired by a simplification of neurons in a brain.

Serre said that there are indeed a number of fascinating similarities in the way that the computer brain and the human brain learn new information and use it to perform tasks.

“There is work starting to suggest that at least superficially, there might be some connections between the kinds of word and sentence representations that algorithms like ChatGPT use and leverage to process language information, vs. what the brain seems to be doing,” Serre said.

For example, he said, the backbone of ChatGPT is a state-of-the-art kind of artificial neural network called a transformer network. These networks, which came out of the study of natural language processing, have recently come to dominate the entire field of artificial intelligence. Transformer networks have a particular mechanism that computer scientists call “self-attention,” which is related to the attentional mechanisms that are known to take place in the human brain.
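The self-attention mechanism Serre mentions can be sketched in a few lines. The version below is a bare-bones scaled dot-product attention in plain Python; real transformer networks add learned projection matrices, multiple attention heads, and many stacked layers, all omitted here:

```python
import math

# Minimal sketch of scaled dot-product self-attention. Each position's
# output is a weighted average of all value vectors, with the weights
# given by a softmax over query-key dot products, so every token can
# "attend" to every other token in the sequence.

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    d = len(keys[0])  # vector dimensionality, used to scale the scores
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # attention weights sum to 1
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Three toy token vectors attending to one another.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(x, x, x)
print(out[0])  # a weighted mixture of all three input vectors
```

Because every output mixes information from every position, the mechanism gives the network a rough analogue of selectively attending to some words more than others.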

Another similarity to the human brain is a key aspect of what has enabled the technology to become so advanced, Serre said. In the past, he explained, training a computer’s artificial neural networks to learn and use language or perform image recognition would require scientists to perform tedious, time-consuming manual tasks like building databases and labeling categories of objects.

Modern large language models, such as the ones used in ChatGPT, are trained without the need for this explicit human supervision. And that seems to be related to what Serre referred to as an influential brain theory known as the predictive coding theory. This is the assumption that when a human hears someone speak, the brain is constantly making predictions and developing expectations about what will be said next.
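The self-supervised setup Serre describes is easy to illustrate: the training “labels” are carved directly out of raw text, with each prefix paired with the word that actually came next, so no human annotation is needed. A minimal sketch:

```python
# Why large language models need no manually labeled data: training
# examples come straight from raw text. Each prefix of the sentence is
# an input, and the word that actually followed it is the target, so
# the "label" is free.
text = "the brain is constantly making predictions".split()

training_pairs = [(text[:i], text[i]) for i in range(1, len(text))]

for context, target in training_pairs[:3]:
    print(" ".join(context), "->", target)
```

Each pair is a self-generated prediction exercise, loosely analogous to the expectations the brain forms under the predictive coding theory.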

While the theory was postulated decades ago, Serre said that it has not been fully tested in neuroscience. However, it is driving a lot of experimental work at the moment.

“I would say, at least at those two levels, the level of attention mechanisms and the core engine of these networks that is constantly making predictions about what is going to be said, that seems to be, at a very coarse level, consistent with ideas related to neuroscience,” Serre said during the event.

There has been recent research that relates the strategies used by large language models to actual brain processes, he noted: “There is still a lot that we need to understand, but there is a growing body of research in neuroscience suggesting that what these large language models and vision models do [in computers] is not entirely disconnected with the kinds of things that our brains do when we process natural language.”

On a darker note, in the same way that the human learning process is susceptible to bias or corruption, so are artificial intelligence models. These systems learn by statistical association, Serre said. Whatever is dominant in the data set will take over and push out other information.

“This is an area of great concern for AI, and it’s not specific to languages,” Serre said. He cited how the overrepresentation of Caucasian men on the internet has biased some facial recognition systems to the point where they have failed to recognize faces that don’t appear to be white or male.

“The systems are only as good as the training data we feed them with, and we know that the training data isn’t that great in the first place,” Serre said. The data also isn’t limitless, he added, especially considering the size of these systems and the voraciousness of their appetite.

The latest iteration of ChatGPT, Pavlick said, includes reinforcement learning layers that function as guardrails and help prevent the production of harmful or hateful content. But these are still a work in progress.

“Part of the challenge is that… you can’t give the model a rule—you can’t just say, ‘never generate such-and-such,’” Pavlick said. “It learns by example, so you give it lots of examples of things and say, ‘Don’t do stuff like this. Do do things like this.’ And so it’s always going to be possible to find some little trick to get it to do the bad thing.”

Nope, ChatGPT doesn’t dream

One area in which human brains and neural networks diverge is in sleep—specifically, while dreaming. Despite AI-generated text or images that seem surreal, abstract, or nonsensical, Pavlick said there’s no evidence to support the notion of functional parallels between the biological dreaming process and the computational process of generative AI. She said that it’s important to understand that applications like ChatGPT are steady-state systems—in other words, they aren’t evolving and changing online, in real-time, even though they may be constantly refined offline.

“It’s not like [ChatGPT is] replaying and thinking and trying to combine things in new ways in order to cement what it knows or whatever kinds of things happen in the brain,” Pavlick said. “It’s more like: it’s done. This is the system. We call it a forward pass through the network—there’s no feedback from that. It’s not reflecting on what it just did and updating its ways.”
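Pavlick’s point about the forward pass can be made concrete with a toy example. In the sketch below, a tiny fixed “model” is run many times and its parameters never change; any learning would have to happen in a separate, offline training step:

```python
# A tiny fixed "network": one linear layer with frozen weights.
# Running it (the forward pass) never alters the weights; updating them
# is a separate, offline training step. This is the sense in which a
# deployed model is "steady-state" rather than continually learning.
weights = [0.5, -0.2, 0.1]

def forward(inputs):
    """Forward pass: compute an output without touching the weights."""
    return sum(w * x for w, x in zip(weights, inputs))

before = list(weights)
for _ in range(1000):           # use the model a thousand times
    forward([1.0, 2.0, 3.0])
assert weights == before        # parameters are exactly as they were
```

However many outputs the model produces, nothing flows back to change it, which is the opposite of what consolidation during sleep is thought to do in a brain.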

Pavlick said that when AI is asked to produce, for example, a rap song about the Krebs cycle, or a trippy image of someone’s dog, the output may seem impressively creative, but it’s actually just a mash-up of tasks the system has already been trained to do. Unlike in a human language user, each output does not automatically shape subsequent outputs, reinforce the system’s function, or work in the way that dreams are believed to work.

The caveats to any discussion of human intelligence or artificial intelligence, Serre and Pavlick emphasized, are that scientists still have a lot to learn about both systems. As for the hype about ChatGPT, specifically, and the success of neural networks in creating chatbots that are almost more human than human, Pavlick said it has been well-deserved, especially from a technological and engineering perspective.

“It’s very exciting!” she said. “We’ve wanted systems like this for a long time.”

Source: Brown University


4th- and 8th-grade data literacy skills have declined

Data literacy skills among fourth- and eighth-grade students in the United States have declined significantly over the last decade, even as these skills have become increasingly essential, according to a new report.

Based on data from the latest National Assessment of Educational Progress (NAEP) results, the report uncovered several trends that raise concerns about whether the nation’s educational system is sufficiently preparing young people for a world reshaped by the rise of big data and artificial intelligence.

Key findings include:

  • The pandemic decline is part of a much longer-term trend. Between 2019 and 2022, scores in the data analysis, statistics, and probability section of the NAEP math exam fell by 10 points for eighth-graders and by four points for fourth-graders. Declining scores are part of a longer-term trend, with scores down 17 points for eighth-graders and down 10 points for fourth-graders over the last decade. That means today’s eighth-graders have the data literacy of sixth-graders from a decade ago, and today’s fourth-graders have the data literacy of third-graders from a decade ago.
  • There are large racial gaps in scores. These gaps exist across all grade levels but tend to be most dramatic at the middle and high school levels. For instance, fourth-grade Black students scored 28 points lower in data analysis, statistics, and probability than their white peers—the equivalent of nearly three grade levels.
  • Data-related instruction is in decline. Every state except Alabama reported a decline or stagnant trend in data-related instruction, with some states—like Maryland and Iowa—seeing double-digit drops. The national share of fourth-grade math teachers reporting “moderate” or “heavy” emphasis on data analysis dropped five percentage points between 2019 and 2022.

“The ability to interpret, understand, and work with data is central to so many aspects of our lives and careers today. Data literacy is a must-have for every employee, every business owner, and every participant in our democracy,” says Zarek Drozda, the director of Data Science 4 Everyone, based at the University of Chicago, and author of the report.

“Schools that prioritize teaching these skills are setting their students up for success in the modern economy, opening doors to a wider variety of options post-graduation, and building confidence for students to pursue these disciplines in higher education, including in STEM.”

Beyond STEM, the report recommends that schools build data literacy connections within subjects across the curriculum, such as social studies or English.

“Digital Humanities” is an emerging field that uses data to reveal new insights into literature and history, for example. Data Science 4 Everyone is similarly encouraging cross-disciplinary collaboration via their lesson plan challenge, which provides cash prizes to teachers working together to teach data science principles.

Source: University of Chicago
