AI tools are surprisingly vulnerable to targeted attacks

Artificial intelligence tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions, a study finds.

At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system to confuse it. For example, someone might know that putting a specific type of sticker at a specific spot on a stop sign could effectively make the stop sign invisible to an AI system. Or a hacker could install code on an X-ray machine that alters the image data in a way that causes an AI system to make inaccurate diagnoses.

“For the most part, you can make all sorts of changes to a stop sign, and an AI that has been trained to identify stop signs will still know it’s a stop sign,” says Tianfu Wu, coauthor of a paper on the new work and an associate professor of electrical and computer engineering at North Carolina State University. “However, if the AI has a vulnerability, and an attacker knows the vulnerability, the attacker could take advantage of the vulnerability and cause an accident.”

The new study from Wu and his collaborators focuses on determining how common these sorts of adversarial vulnerabilities are in AI deep neural networks. They find that the vulnerabilities are much more common than previously thought.

“What’s more, we found that attackers can take advantage of these vulnerabilities to force the AI to interpret the data to be whatever they want,” Wu says. “Using the stop sign example, you could make the AI system think the stop sign is a mailbox, or a speed limit sign, or a green light, and so on, simply by using slightly different stickers—or whatever the vulnerability is.

“This is incredibly important, because if an AI system is not robust against these sorts of attacks, you don’t want to put the system into practical use—particularly for applications that can affect human lives.”

To test the vulnerability of deep neural networks to these adversarial attacks, the researchers developed a piece of software called QuadAttacK. The software can be used to test any deep neural network for adversarial vulnerabilities.

“Basically, if you have a trained AI system, and you test it with clean data, the AI system will behave as predicted,” Wu says. “QuadAttacK watches these operations and learns how the AI is making decisions related to the data. This allows QuadAttacK to determine how the data could be manipulated to fool the AI. QuadAttacK then begins sending manipulated data to the AI system to see how the AI responds. If QuadAttacK has identified a vulnerability, it can quickly make the AI see whatever QuadAttacK wants it to see.”
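
QuadAttacK itself formulates targeted attacks as quadratic optimization problems; as a generic illustration only (not the authors’ method), the sketch below applies a gradient-sign perturbation to a toy logistic classifier, with invented weights and inputs, to show how a small, deliberate change to the input can flip a model’s decision:

```python
import math

# Hypothetical weights for a toy logistic "stop sign" detector.
w = [1.5, -2.0, 0.5]
b = 0.1

def predict(x):
    """Probability that input x is a stop sign."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Clean input: confidently classified as a stop sign (p > 0.5).
x = [1.0, 0.2, 0.5]
p_clean = predict(x)

# The gradient of the score with respect to the input is just w,
# so an attacker nudges each feature a small step against sign(w).
eps = 0.4
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]
p_adv = predict(x_adv)

print(p_clean, p_adv)  # the perturbed input drops below the decision threshold
```

The same principle scales to images: each "feature" becomes a pixel, and the perturbation can be shaped to look like an innocuous sticker.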

In proof-of-concept testing, the researchers used QuadAttacK to test four deep neural networks: two convolutional neural networks (ResNet-50 and DenseNet-121) and two vision transformers (ViT-B and DeiT-S). These four networks were chosen because they are in widespread use in AI systems around the world.

“We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu says. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”

The research team has made QuadAttacK publicly available, so that the research community can use it themselves to test neural networks for vulnerabilities. The program can be found here: https://thomaspaniagua.github.io/quadattack_web/.

“Now that we can better identify these vulnerabilities, the next step is to find ways to minimize those vulnerabilities,” Wu says. “We already have some potential solutions—but the results of that work are still forthcoming.”

The paper will be presented December 16 at the 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023), which will take place in New Orleans, Louisiana.

The work had support from the US Army Research Office and the National Science Foundation.

Source: NC State

Very early antiretroviral therapy benefits babies with HIV

Giving antiretroviral therapy to babies with HIV within the first days of life, rather than within weeks or months, can safely suppress amounts of HIV in the blood to undetectable levels.

HIV (human immunodeficiency virus) attacks the immune system. There is no cure for the disease caused by HIV; however, antiretroviral therapy (ART) can help control the virus and prevent it from progressing to AIDS, the most severe and potentially lethal stage of HIV infection.

A pregnant person with HIV who is not receiving ART has a 15% to 45% chance of transmitting HIV during pregnancy, childbirth, or breastfeeding, according to the World Health Organization. This chance decreases to less than 1% if women with HIV receive ART. About 1.5 million children around the world have HIV.

In 2013, Deborah Persaud, a professor of pediatrics at the Johns Hopkins University School of Medicine, was part of a research team studying the child known as the Mississippi baby case, which was believed to be the first documented case of HIV remission in a child born with the virus. The baby, given ART within 30 hours of birth, remained free of active HIV for 27 months after stopping ART.

Persaud says standard treatment of babies with HIV typically starts at age 2 to 3 months, often due to delays in test results—particularly outside of the US, in countries where the burden of HIV is highest and ART drugs are less available. ART side effects have been a concern and include anemia, nausea, vomiting, and diarrhea.

With the new study, published in the journal The Lancet HIV, Persaud and the clinical trials team have been seeking to replicate the Mississippi baby case by starting infants on what they call “very early treatment,” defined as treatment within the first 48 hours after birth. They believe this early start halted the formation of hard-to-treat viral reservoirs: cells that carry the genetic material of latent viruses, are unreachable by antiviral drugs, and typically allow HIV to survive in the body for life.

“We sought proof of the concept that if you can safely treat babies with a three-drug regimen within 48 hours of life, you can limit the buildup of these reservoirs and get them to very low levels that may lead to ART-free remission, where the virus doesn’t come back quickly if the ART is stopped,” Persaud says.

The team enrolled 54 newborns into two groups at 30 sites in 11 countries (mostly in sub-Saharan Africa, but also including Brazil, Thailand, and the US) from January 2015 to December 2017.

One group of 34 infants (23 females and 11 males, whose mothers had HIV and were not on ART during pregnancy) was started on a three-drug oral ART regimen of azidothymidine (AZT) or abacavir, lamivudine (3TC), and nevirapine within two days of life. All of the drugs had previously been shown to help prevent HIV transmission to newborns.

The second group of 20 infants (10 females and 10 males, whose mothers had HIV and were on ART during pregnancy) was started on the same three-drug regimen, but with a lower dose of nevirapine, shortly after birth. Once enrolled in the study, these infants were switched to the same study regimen as the first group by 10 days of age.

A fourth medicine, lopinavir-ritonavir, was added to the regimen for all babies who were HIV positive once they reached about 14 days of age, an age considered safe for use of the medicine based on previous research. Both groups remained on ART through the infants’ first two years of life during this phase of the study.

“Overall, these four drugs are not the most potent ART regimen, but they were the only drugs approved for the prevention of HIV in newborns at the time of the study,” Persaud says.

The researchers estimated that infants had a 33% chance (group 1) or 57% chance (group 2) of reaching and maintaining undetectable plasma levels of HIV in the blood beyond age 2 years.

At the end of the study period through age 2, among the participants who remained with virologic suppression, 83% in group 1 and 100% in group 2 tested negative for HIV antibodies, and 64% in group 1 and 71% in group 2 had no detectable HIV DNA. Among the 54 infants who received very early ART, 19% met all of the study’s criteria for becoming eligible to stop treatment in later phases of the ongoing trial.

“If you treat at 2 to 3 months of age, when most children start a regimen, very, very, very few kids would actually get to this undetectable stage by 2 years of age,” Persaud says. “It would actually take them until 5 years of age and older to get to a low HIV DNA level, and it’s never to this undetectable level.”

The researchers say a majority of participants in both groups could not be followed to the end of the study period, mostly because their virus was not suppressed to undetectable levels, likely due to lack of daily adherence to the therapy.

To be considered trial participants, infants had to be diagnosed with HIV within a strict timeline and monitored at frequent intervals. The team acknowledges that while these practices are feasible in future clinical research on early infant diagnosis and treatment, they remain challenging in clinical care settings because of limited testing availability and other constraints. The researchers therefore say these practices should be prioritized in HIV testing programs globally.

The investigators believe their study shows that very early ART is safe and is key to suppressing HIV to undetectable levels during early childhood periods of rapid growth. Persaud says a very early treatment strategy is a first step toward getting more infants in a good place for remission so they can be kept off of antiretroviral drugs for longer periods, and they won’t need to face the stigma still attached, in many settings, to taking HIV medicine every day.

As the trial continues, the team, Persaud says, will evaluate newer and more effective treatment regimens, and will share results of its research on ART-free remission among infants.

The National Institutes of Health and the International Maternal Pediatric Adolescent AIDS Clinical Trials Network funded the work. No authors declared conflicts of interest under Johns Hopkins University School of Medicine policies.

Source: Johns Hopkins University

Serious risks follow multiple rounds of pregnancy complications

Women who experience more than one episode of gestational diabetes or gestational hypertension have double or even triple the risk of heart attack or stroke, a study finds.

“Women who have had even one occurrence of gestational diabetes or gestational hypertension need support to take action. In women with more than one occurrence, the need for action is even more urgent. We hope our findings activate these individuals, their health care providers, their communities, and the government to find the tools to prevent heart disease and stroke,” says Kaberi Dasgupta, senior scientist in the Metabolic Disorders and Complications Program at the Research Institute of the McGill University Health Centre (RI-MUHC) and senior author of the study in Diabetes Research and Clinical Practice.

The researchers included in their sample nearly half a million women who had had at least two deliveries. They excluded women who had diabetes or hypertension before becoming pregnant or between pregnancies. They then examined what had happened to these women after their second pregnancy, over an average period of more than 15 years, and calculated the risks using statistical programs.

The lowest-risk women were those who had neither gestational diabetes nor gestational hypertension in either pregnancy. Compared to them, women with one occurrence of either condition had a 50% higher chance of a future heart attack or stroke. Women with two occurrences had double the risk. Those with three or more occurrences had triple the risk.

“Pregnancy is usually a time when younger adults are interested in addressing health issues to improve the short- and long-term health of their families. We are hoping our work incentivizes mothers to consume heart-healthy foods, exercise routinely, and regularly attend follow-up hospital visits after the birth of their baby,” says Joseph Mussa, PhD candidate in epidemiology at McGill University, RI-MUHC trainee at the Centre for Outcomes Research and Evaluation, and first author of the study.

Heart disease and stroke are challenging conditions that can affect productivity and day-to-day functioning. Yet there are ways of reducing the risks. Being more physically active, eating more fruits and vegetables and fewer processed foods, and having opportunities and resources to be more active and eat in a healthy way can all lower heart attack and stroke risk.

“If women are supported in their prevention efforts, especially by their families and health care teams, the impact will be positive for their health and for all of those people with whom they are connected and who rely on them—their families, colleagues, friends, and communities,” adds Dasgupta, who is also a full professor in the department of medicine.

This complex work had funding from the Heart & Stroke Foundation of Canada (Heart & Stroke), and the analyses took place in the secure data centers of the Quebec Statistical Institute.

Source: McGill University

For inclusive hiring, findings back ethical leadership style

New findings indicate that white hiring managers should demonstrate an ethical leadership style during interviews with Black job candidates.

As many companies aim to build diverse workforces, candidates from historically marginalized communities continue to report unfair recruitment practices and limited opportunities. Building an equitable organization starts during the hiring process, with potential supervisors playing a major role in making applicants feel comfortable.

New research examines the impact of leadership style on prospective Black employees who apply for jobs in less-diverse companies. During selection, these applicants often experience stereotype threat, or a fear of being mistreated due to negative stereotypes about their racial group. This can cause candidates to withdraw from the hiring process.

The researchers studied the effects of two moral leadership styles—ethical and authentic—on candidates. Authentic leadership follows an internal compass, drawing on personal experiences and values. Ethical leadership complies with community norms: Ethical leaders provide candidates with an outline of group values and accepted behaviors, emphasize universal ethical principles, and establish a clear reward and punishment system. When supervisors demonstrate their leadership styles during the hiring process, they offer candidates an idea of how a future employee will be treated in daily interactions.

The study, forthcoming in the Journal of Applied Psychology, shows that when a white manager is interviewing a Black candidate, ethical leadership is more helpful in reducing threat and increasing willingness to apply for a job.

“Our data aligned with the idea that, if I’m a Black person and my would-be manager is white and showing authentic leadership, it’s going to be hard for me to predict what that’s going to mean,” says Andrew Hafenbrack, coauthor and associate professor of management and organization in the University of Washington Foster School of Business. “And if I do predict it, racism is so common that I might predict something that wouldn’t help me.

“It’s better in this case if my would-be manager is using an ethical style, therefore following community norms, so I can learn those norms. In other words: If there are rules to the game, and they’re going to follow them, then I can figure out the rules and we can work together well.”

Authentic leadership, which emphasizes individual experiences and beliefs, makes applicants feel more comfortable when the supervisor is from the same racial group. In this case, applicants are likely to identify with someone from a similar background. Since candidates in this situation are typically no longer concerned about unfair treatment, they are more likely to view authentic leadership as an opportunity to develop their own individuality in the workplace.

To reach these conclusions, the researchers conducted five experiments with nearly 500 Black residents of Brazilian favelas, or impoverished neighborhoods. Two of the studies included real-world job recruitment processes and physiological measures of stress: salivary cortisol and blood pressure. The results suggest that interactions with potential direct supervisors can reduce stereotype threat, which boosts Black applicants’ desire to join an organization. However, the leader’s identity determines whether these interactions feel positive or not.

“During recruitment, people from marginalized groups can experience this unpleasant feeling that they will be negatively stereotyped and face discrimination in their future job,” says Urszula Lagowska, corresponding author and assistant professor at Neoma Business School in France. “Because of that, they decide to either avoid these threatening companies or withdraw their application from the hiring process.”

Past research shows that job interviews and site visits are likely to trigger stereotype threat due to high pressure and the fact that most managers at many organizations are white. While organizational strategies—such as diversity-oriented policies—have benefits, they can also create the perception that marginalized groups are being singled out.

“Our research provides an answer for what a recruiting manager can do in their own capacity regardless of company policies,” Lagowska says.

Since a behavioral adjustment from leadership can make candidates from marginalized groups feel more comfortable, there is reason to reevaluate the increasing use of automated recruitment tools. Companies often use such tools to optimize searches and avoid bias, but these results suggest that increasing interactions with managers could help attract more diverse talent.

“If you’re really trying to get leadership styles right, it has to go beyond the policies themselves,” Hafenbrack says. “You can’t mandate something like this. This is about intuition and the understanding of the situation. Assuming a manager cares about inclusion and wants to reduce stereotype threat, it’s a nuanced process for them to make this shift. If you’re using an ethical leadership style, it must feel real.”

Additional coauthors are from the Brazilian School of Public and Business Administration in Rio de Janeiro and the IESEG School of Management in Paris. Funding came from the Coordination of Superior Level Staff Improvement, the Brazilian Research Council, and the Rio de Janeiro Research Foundation.

Source: Lauren Kirschman, University of Washington

Forests will offer mammals refuge as climate warms

As the climate warms, forest cover will become increasingly important for wildlife conservation, researchers report.

The findings show that North American mammals, including pumas, wolves, bears, rabbits, deer, and opossums, consistently depend on forests and avoid cities, farms, and other human-dominated areas in hotter climes.

In fact, mammals are, on average, 50% more likely to occupy forests than open habitats in hot regions. The opposite is true in the coldest regions.

“Different populations of the same species respond differently to habitat based on where they are,” says lead author Mahdieh Tourani, who conducted the study as a postdoctoral researcher at the University of California, Davis, and is now an assistant professor of quantitative ecology at the University of Montana, Missoula. “Climate is mediating that difference.”

Tourani points to the eastern cottontail as an example. The study showed the common rabbit preferred forests in hotter areas while preferring human-dominated habitat, such as agricultural areas, in colder regions.

Her example illustrates “intraspecific variation,” which the study found to be pervasive across all North America’s mammals. This runs contrary to a longstanding practice in conservation biology of categorizing species as those that live well alongside people and those that don’t. The authors say there is growing recognition of ecological flexibility, and that species are more complicated than those two categories suggest.

“We can’t take a one-size-fits-all approach to habitat conservation,” says senior author Daniel Karp, associate professor in the wildlife, fish, and conservation biology department at UC Davis. “It turns out climate has a large role in how species respond to habitat loss.”

For example, if elk are managed under the assumption that they can only live in protected areas, then conservation managers may miss opportunities to conserve them in human-dominated landscapes.

“On the other hand, if we assume a species will always be able to live alongside us, then we might be wasting our effort trying to improve the conservation value of human-dominated landscapes in areas where it is simply too hot for the species,” Karp says.

For the study, the authors leveraged Snapshot USA, a collaborative monitoring program with thousands of camera trap locations across the country.

“We analyzed 150,000 records of 29 mammal species using community occupancy models,” Tourani says. “These models allowed us to study how mammals respond to habitat types across their ranges while accounting for the fact that species may be in an area, but we did not record their presence because the species is rare or elusive.”
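
Occupancy models handle that “present but unrecorded” problem explicitly. As a simplified, single-species illustration of the idea behind the community models the study uses (the occupancy probability `psi` and per-survey detection probability `p` below are made up), the likelihood of a camera-trap detection history can be computed like this:

```python
def detection_history_likelihood(history, psi, p):
    """Likelihood of a 0/1 camera-trap detection history at one site.

    psi: probability the site is occupied by the species.
    p:   probability of detecting the species on one survey, given occupancy.
    """
    detected = sum(history)
    n = len(history)
    # If occupied, the history follows independent Bernoulli detections.
    occupied_term = psi * (p ** detected) * ((1 - p) ** (n - detected))
    if detected > 0:
        return occupied_term
    # An all-zero history is ambiguous: occupied-but-missed, or truly absent.
    return occupied_term + (1 - psi)

# A rare or elusive species can be present yet never photographed:
never_seen = detection_history_likelihood([0, 0, 0, 0], psi=0.6, p=0.3)
seen_once = detection_history_likelihood([0, 1, 0, 0], psi=0.6, p=0.3)
```

The all-zero branch is what lets the models separate “absent” from “present but undetected” when fit across many sites and species.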

The study provides a pathway for conservation managers to tailor efforts to conserve and establish protected areas, as well as enhance working landscapes, like farms, pastures, and developed areas.

“If we’re trying to conserve species in working landscapes, it might behoove us to provide more shade for species,” says Karp, whose recent study about birds and climate change drew a similar conclusion, with forests providing a protective buffer against high temperatures.

“We can maintain patches of native vegetation, scattered trees, and hedgerows that provide local refugia for wildlife, especially in places that are going to get warmer with climate change.”

The study is published in the Proceedings of the National Academy of Sciences. Additional coauthors are from the Leibniz Institute for Zoo and Wildlife Research, North Carolina State University, Arizona State University, and Conservation International.

Conservation International funded the work.

Source: UC Davis

New wearable can send health data up to 15 miles away

A new wearable device can send health data up to 15 miles, 2,400 times the distance of WiFi, without the need for significant network infrastructure.

Wearable devices that use sensors to monitor biological signals can play an important role in health care. These devices provide valuable information that allows providers to predict, diagnose, and treat a variety of conditions while improving access to care and reducing costs.

However, wearables currently require significant infrastructure—such as satellites or arrays of antennas that use cell signals—to transmit data, making many of those devices inaccessible to rural and under-resourced communities.

Researchers hope the new device will help make digital health access more equitable.

The COVID-19 pandemic, and the strain it placed on the global health care system, brought attention to the need for accurate, fast, and robust remote patient monitoring, says Philipp Gutruf, an assistant professor of biomedical engineering in the College of Engineering at the University of Arizona and lead author of the study in the Proceedings of the National Academy of Sciences.

Non-invasive wearable devices currently use the internet to connect clinicians to patient data for aggregation and investigation.

“These internet-based communication protocols are effective and well-developed, but they require cell coverage or internet connectivity and main-line power sources,” says Gutruf, who is also a member of the BIO5 Institute. “These requirements often leave individuals in remote or resource-constrained environments underserved.”

In contrast, the system the Gutruf Lab developed uses a low power wide area network, or LPWAN, that offers 2,400 times the distance of WiFi and 533 times that of Bluetooth. The new system uses LoRa, a patented type of LPWAN technology.
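
Taking the article’s figures at face value, a quick back-of-envelope conversion (assuming 1 mile is about 1,609.34 meters) shows what baseline ranges those multipliers imply:

```python
# Ranges implied by the article's figures: 15 miles for LoRa,
# which it states is 2,400x WiFi's distance and 533x Bluetooth's.
lora_range_m = 15 * 1609.34             # 15 miles in meters

wifi_range_m = lora_range_m / 2400      # implied WiFi baseline
bluetooth_range_m = lora_range_m / 533  # implied Bluetooth baseline

print(round(lora_range_m), round(wifi_range_m), round(bluetooth_range_m))
```

The implied baselines work out to roughly 10 meters for WiFi and 45 meters for Bluetooth, so the comparison assumes fairly conservative short-range figures.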

“The choice of LoRa helped address previous limitations associated with power and electromagnetic constraints,” says Tucker Stuart, a biomedical engineering doctoral alumnus.

Alongside the implementation of this protocol, the lab developed circuitry and an antenna that integrate seamlessly into the soft wearable; in typical LoRa-enabled devices, these components occupy a large box. These electromagnetic, electronic, and mechanical features enable the device to send data to a receiver over long distances.

To make the device almost imperceptible to the wearer, the lab also designed it to recharge its batteries wirelessly from up to 2 meters (about 6.5 feet) away. The soft electronics, and the device’s ability to harvest power, are the keys to the performance of this first-of-its-kind monitoring system, Gutruf says.

The Gutruf Lab calls the soft mesh wearable biosymbiotic, meaning it is custom 3D-printed to fit the user and is so unobtrusive it almost begins to feel like part of their body. The device, worn on the lower forearm, stays in place even during exercise, ensuring high-quality data collection, Gutruf says. The user wears the device at all times, and it charges without removal or effort.

“Our device allows for continuous operation over weeks due to its wireless power transfer feature for interaction-free recharging—all realized within a small package that even includes onboard computation of health metrics,” says coauthor Max Farley, an undergraduate student studying biomedical engineering.

The researchers plan to further improve and extend communication distances with the implementation of LoRa wireless area network gateways that could serve hundreds of square miles and hundreds of device users, using only a small number of connection points.

The wearable device and its communication system have the potential to aid remote monitoring in underserved rural communities, ensure high-fidelity recording in war zones, and monitor health in bustling cities, says Gutruf, whose long-term goal is to make the technology available to the communities with the most need.

“This effort is not just a scientific endeavor,” he says. “It’s a step toward making digital medicine more accessible, irrespective of geographical and resource constraints.”

Source: University of Arizona

Gear without PFAS coatings could put firefighters at risk

Transitioning away from firefighter turnout gear with per- and polyfluoroalkyl substances could bring potential performance tradeoffs, according to new research.

The chemicals offer water- and oil-repelling properties on the outer shells of firefighter turnout gear.

The study showed that turnout gear without PFAS outer shell coatings was not oil-repellent, posing a potential flammability hazard to firefighters if exposed to oil and flame, says Bryan Ormond, assistant professor of textile engineering, chemistry, and science at North Carolina State University. Ormond is corresponding author of the study in the Journal of Industrial Textiles.

“All oil repellents can also repel water, but all water repellents don’t necessarily repel oil. Diesel fuel is really difficult to repel, as is hydraulic fluid; in our testing, PFAS-treated materials repel both,” he says. “In our tests, turnout gear without PFAS repelled water but not oil or hydraulic fluid. Further, oils seem to spread out even more on the PFAS-free gear, potentially increasing the hazard.”

PFAS chemicals—known as forever chemicals because of their environmental persistence—are used in food packaging, cookware, and cosmetics, among other uses, but have recently been implicated in higher risks of cancer, higher cholesterol levels, and compromised immune systems in humans.

In response, firefighters have sought alternative chemical compounds—like the hydrocarbon wax coating used in the study—on turnout gear to repel water and oils.

Besides testing the oil- and water-repelling properties of PFAS-treated and PFAS-free outer garments, the researchers also compared how the outer shells aged in job-related exposures like weathering, high heat, and repeated laundering, and whether the garments remained durable and withstood tears and rips.

The study showed that PFAS-treated and PFAS-free outer shells performed similarly after exposure to UV rays and various levels of heat and moisture, as well as passes through heating equipment—similar to a pizza oven—and through washing machines.

“Laundering the gear is actually very damaging to turnout gear because of the washing machine’s agitation and cleaning agents used,” Ormond says.

“We also performed chemical analyses to see what’s happening during the weathering process,” says lead author Nur Mazumder, a doctoral student in fiber and polymer science.

“Are we losing the PFAS chemistries, the PFAS-free chemistries, or both when we age the garments? It turns out that we lost significant amounts of both of these finishes after the aging tests.”

Both types of garments performed similarly when tested for strength against tearing the outer shell fabric. The researchers say the PFAS and PFAS-free coatings didn’t seem to affect this attribute.

Future work will explore how much oil repellency is needed by firefighters out in the field, Ormond says.

“Even with PFAS treatment, you see a difference between a splash of fluid and soaked-in fluid,” he says. “For all of its benefits, PFAS-treated gear, when soaked, is dangerous to firefighters.

“So we need to really ask ‘What do firefighters need?’ If you’re not experiencing this need for oil repellency, there’s no worry about switching to non-PFAS gear. But firefighters need to know the non-PFAS gear will absorb oil, regardless of what those oils are.”

Coauthor Andrew Hall, another doctoral student in fiber and polymer science, is also testing dermal absorption by placing the aged outer shell materials on a skin surrogate for a day or two, to see whether outer shell chemicals are absorbed into the skin surrogate after these admittedly extreme exposure durations.

“Firefighting as a job is classified as a carcinogen and it shouldn’t be,” Ormond says. “How do we make better gear for them? How do we come up with better finishes and strategies for them? These aren’t just fabrics. They are highly engineered pieces of material that aren’t easily replaced.”

The Federal Emergency Management Agency’s Assistance to Firefighters Grants Program funded the work.

Source: NC State

Canada taps into U.S. military satellites for mobile communications

WASHINGTON — The Canadian Department of National Defence became the first international partner to access the U.S. Mobile User Objective System (MUOS) satellite network, the U.S. Space Force announced Nov. 30.

MUOS, developed by the U.S. Navy, is used for voice, video and data transmissions over a narrowband network of satellites in geosynchronous orbit — four operational satellites and one on-orbit spare. The Navy in March 2023 handed over the system to the Space Force. 

Canadian operators in a demonstration in October used MUOS for voice and data transmission using military tactical radios from two locations in Ottawa.

Canadian officers used secure military radios to make point-to-point calls, transfer files, and access group chat services on the network. The MUOS satellites, made by Lockheed Martin, operate in the ultra-high frequency band and use 3G cellular telephone technology to provide digital narrowband signals for mobile forces. 

The project to allow Canada access to the MUOS network started four years ago when Canada initiated a foreign military sales agreement with the United States.

Demonstration called a success

Canadian users showed they could use push-to-talk and connect from one Canadian radio terminal to another, Thomas Cesear, head of the MUOS integration lab, said in a news release.

“They also were able to successfully accomplish other services like chat, file transfer, email, as well as group calls,” he said. 

Another demonstration is scheduled for March 2024, said Scott Mackenzie, project leader at Canada's Department of National Defence.


Team cracks the mystery of maize’s origins

All modern maize descends from a hybrid created just over 5,000 years ago in central Mexico, thousands of years after the plant was first domesticated, researchers report.

Maize is one of the world’s most widely grown crops. It is used for both human and animal foods and holds great cultural significance, especially for Indigenous peoples in the Americas. Yet despite its importance, the origins of the grain have been hotly debated for more than a century.

The findings, published in the journal Science, have implications both for improving one of the world’s most important crops and for understanding how the histories of people and their crops influence each other.

“It’s a new model for the origins and spread of maize, and how it became a staple across the Americas,” says senior author Jeffrey Ross-Ibarra, professor in the evolution and ecology department at the University of California, Davis.

For the last few decades, the consensus has been that maize (Zea mays) was domesticated once from a single wild grass—called teosinte—in the lowlands of southwest Mexico about 9,000 to 10,000 years ago.

Known as corn in the United States, maize is not only a staple of diets around the globe but can also be processed into sweeteners, ethanol fuel, and other products.

More recently, though, it’s become clear that the genome of modern maize also contains a hefty dose of DNA from a second teosinte that grows in the highlands of central Mexico.

Ross-Ibarra and collaborators analyzed the genomes of over a thousand samples of maize and wild relatives. They found that about 20% of the genome of all maize worldwide comes from this second highland teosinte.
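A figure like "about 20% of the genome" can be estimated by modeling the hybrid's allele frequencies as a blend of the two source populations. The following is a toy illustration of that idea, not the study's actual method; the populations and allele frequencies are made up for demonstration.

```python
# Toy illustration (not the paper's method): estimate the mixing
# proportion alpha in  hybrid ≈ alpha*source_a + (1-alpha)*source_b
# from allele frequencies, via a closed-form least-squares fit.

def admixture_fraction(hybrid, source_a, source_b):
    """Least-squares estimate of alpha for a single mixing parameter."""
    num = 0.0
    den = 0.0
    for h, a, b in zip(hybrid, source_a, source_b):
        num += (h - b) * (a - b)
        den += (a - b) ** 2
    return num / den

# Made-up allele frequencies at five loci. The "hybrid" is constructed
# as an exact 20/80 blend of the two sources, so the estimate should
# recover 0.2, mirroring the ~20% highland-teosinte ancestry reported.
highland = [0.9, 0.1, 0.8, 0.3, 0.6]
lowland  = [0.2, 0.7, 0.1, 0.9, 0.4]
hybrid   = [0.2 * h + 0.8 * l for h, l in zip(highland, lowland)]

alpha = admixture_fraction(hybrid, highland, lowland)
print(round(alpha, 3))  # 0.2
```

Real ancestry inference works on thousands of individuals and millions of markers, and must account for drift and sampling noise, but the underlying logic is the same: the hybrid's genome sits a measurable fraction of the way between its two parents.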

The new findings suggest that, though maize was domesticated around 10,000 years ago, it was not until 4,000 years later, when it hybridized with highland teosinte, that maize really took off as a popular crop and food staple. This is also supported by archaeological evidence of the increasing importance of maize around the same time.

The new crop spread rapidly through the Americas and later worldwide. Today, about 1.2 billion metric tons are harvested each year globally.

The hunt for why highland teosinte enabled maize to become a staple is still underway, Ross-Ibarra says. The researchers did find genes related to cob size—perhaps representing an increased yield potential—and flowering time, which likely helped maize, a tropical crop, to grow at higher latitudes with longer days.

Hybridization may also have brought “hybrid vigor,” where a hybrid organism is more vigorous than either of its parents. The researchers observed that genomic segments from highland teosinte contained fewer harmful mutations than did other parts of the genome.

While the initial hybridization may have been accidental, it’s likely that Indigenous farmers recognized and took advantage of the novel variation introduced from highland teosinte, Ross-Ibarra says. Even today, he says, “If you talk to Mexican farmers, some will tell you that letting wild maize grow near the fields makes their crops stronger.”

A team led by Ross-Ibarra with Graham Coop, a professor at UC Davis, archaeologists at UC Santa Barbara, and geneticists at the Swedish University of Agricultural Sciences will now study the coevolution of humans and maize in the Americas. They will use genetics to look at how humans and maize spread across the continent and how populations of both maize and humans grew and shrank as they interacted with each other.

“We will incorporate human genetic data, maize genetics, and archaeological data in an effort to answer many of the questions raised by our new model of maize origins,” Ross-Ibarra says.

Additional coauthors of the Science paper are from UC Davis, Purdue University, Iowa State University, UC Santa Barbara, Penn State, Cornell University, Huazhong Agricultural University, Jilin Academy of Agricultural Sciences, Laboratorio Nacional de Genómica para la Biodiversidad, Yunnan Academy of Agricultural Sciences, Sichuan Agricultural University, and China Agricultural University.

The National Natural Science Foundation of China, the US National Science Foundation, and the US Department of Agriculture supported the work.

Source: UC Davis


Material mimics structures that make bluebirds blue

A new material replicates the structure responsible for the blue feathers of eastern bluebirds and other songbirds. The new material could be used in batteries or filtration.

The blue color of the eastern bluebird doesn’t come from pigments but from the special structure of the feather. Viewed under the microscope, the feathers are traversed by a network of channels just a few hundred nanometers in diameter. For scale, a nanometer is one billionth of a meter.

The blue of the bluebird caught the attention of ETH Zurich researchers in the Laboratory of Soft and Living Materials, led by former ETH professor Eric Dufresne, so much so that they decided to replicate this structure in the laboratory. They have now succeeded with a new method: they developed a material that exhibits the same structural design as the bluebird feathers, while offering potential for practical applications thanks to its nanonetworks.

As a starting material, the researchers used a transparent silicone rubber that can be stretched and deformed. They placed this rubber in an oily solution and left it to swell for several days in an oven at 60 degrees Celsius (140 degrees Fahrenheit). They then cooled it and extracted the rubber from the oily solution.

Under the microscope, the researchers observed how the nanostructure of the rubber had changed during the procedure, and they identified network structures similar to those that give the bluebird feather its blue color. The main difference is the thickness of the channels: those in the bird’s feather measure approximately 200 nanometers in diameter, while those in the synthetic material measure about 800 nanometers.

The principle behind the network formation is phase separation. This phenomenon can be observed in the kitchen with a salad dressing made of oil and vinegar. Mixing the two liquids is not easy and is best achieved by shaking vigorously; they separate again as soon as the shaking stops. However, it is also possible to mix them by heating, and to separate them again by cooling. This is precisely the principle the researchers applied to the silicone rubber and the oily solution, resulting in an entire microscopic network of channels inside the rubber.

“We are able to control and select the conditions in such a way that channels are formed during phase separation,” says lead author Carla Fernández Rico. “We have succeeded in halting the procedure before the two phases merge with each other completely again.” This channel-like structure is very similar to the structure of the bird’s feathers.

The advantage of this new method is that the new material is several centimeters in size and remains scalable. “In principle you could use a piece of rubbery plastic of any size. However, you’d then also need correspondingly large containers and ovens,” says Fernández Rico.

The novelty of this material processing method is generating a lot of interest in the physics community. “We have a simple system made of only two ingredients, but the final structure obtained is very complex and controlled by the properties of the ingredients,” says Fernández Rico. “We have been approached by several theoretical groups that are proposing the use of physical models in order to understand the key physical principles of this new process and to predict its outcome.”

The new material offers potential for technical and sustainable applications. Batteries are one possible field of application. Ions in batteries typically move between electrodes through a liquid called the electrolyte. One of the main reasons batteries lose their charging capacity over time, or even fail altogether, is that the ions react with the liquid electrolyte, which can eventually allow the two electrodes to come into physical contact and damage the battery. Liquid electrolytes could be replaced by solid electrolytes with a network structure of interconnected channels, such as the one demonstrated by the researchers; this would prevent physical contact between the electrodes while maintaining good ion transport through the battery.

Water filters could be another application. Good transport properties across the interconnected channels and large surface areas are advantageous here. The ratio of surface to volume is enormous in the case of channel-like structures. This enables the efficient removal of contaminants such as bacteria or other particles from water.
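The claim that channel-like structures have an enormous surface-to-volume ratio can be checked with a back-of-envelope calculation. The sketch below compares the outer surface of a 1 cm cube with the internal wall area contributed by cylindrical channels of the ~800-nanometer diameter reported for the synthetic material; the 30% porosity figure is an assumption for illustration, not a number from the study.

```python
# Back-of-envelope check of the surface-to-volume claim: compare a
# 1 cm cube's outer surface with the wall area of internal cylindrical
# channels. Channel diameter (800 nm) comes from the article; the
# porosity (fraction of volume occupied by channels) is assumed.

side = 1e-2                      # cube edge: 1 cm, in meters
cube_area = 6 * side**2          # outer surface area of the cube
cube_volume = side**3

d = 800e-9                       # channel diameter: 800 nm
r = d / 2
porosity = 0.3                   # assumed: 30% of the volume is channels

# For a cylinder, wall area / enclosed volume = 2 / r (independent of
# length), so total wall area follows from total channel volume.
channel_volume = porosity * cube_volume
wall_area = channel_volume * 2 / r

print(f"outer surface: {cube_area:.1e} m^2")   # 6.0e-04 m^2
print(f"channel walls: {wall_area:.1e} m^2")   # 1.5e+00 m^2
print(f"amplification: ~{wall_area / cube_area:.0f}x")
```

Even under these rough assumptions, the internal channel walls provide on the order of a thousand times more surface than the outside of the block, which is why such structures are attractive for filtration.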

“However, the product is still a long way from being ready for market,” says Fernández Rico. “While the rubbery material is cheap and easy to obtain, the oily phase is quite expensive. A less expensive pair of materials would be required here.”

Fernández Rico wants to steer her future research toward sustainability: “Many natural polymers, such as cellulose or chitin, have a structure similar to the rubber used in our work.” Working with a natural material such as cellulose would be more environmentally friendly than working with silicone rubbers derived from petroleum. The postdoctoral researcher therefore wants to find out how such materials can be made more functional in order to exploit their potential.

The findings appear in Nature Materials.

Source: Deborah Kyburz for ETH Zurich
