
Saturday, June 25, 2016

Moral dilemma with driverless cars: Who gets protected, the driver or pedestrians?


Driverless cars pose a quandary when it comes to safety. These autonomous vehicles are programmed with a set of safety rules, and it is not hard to construct a scenario in which those rules come into conflict with each other. Suppose a driverless car must either hit a pedestrian or swerve in such a way that it crashes and harms its passengers. What should it be instructed to do?
A newly published study co-authored by an MIT professor shows that the public is conflicted over such scenarios, taking a notably inconsistent approach to the safety of autonomous vehicles, should they become a reality on the roads.
In a series of surveys taken last year, the researchers found that people generally take a utilitarian approach to safety ethics: They would prefer autonomous vehicles to minimize casualties in situations of extreme danger. That would mean, say, having a car with one rider swerve off the road and crash to avoid a crowd of 10 pedestrians. At the same time, the survey's respondents said, they would be much less likely to use a vehicle programmed that way.
Essentially, people want driverless cars that are as pedestrian-friendly as possible -- except for the vehicles they would be riding in.
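The tension can be made concrete with a toy decision rule. The sketch below is purely illustrative and not drawn from the study's materials; the scenario encoding and casualty counts are hypothetical:

# Minimal sketch of the two programming philosophies the survey contrasts.
# Everything here is a hypothetical illustration, not the study's materials:
# each possible action maps to the casualties it would cause.

def utilitarian_choice(options):
    """Pick the action that minimizes total casualties, whoever they are."""
    return min(options, key=lambda o: o["passengers_harmed"] + o["pedestrians_harmed"])

def self_protective_choice(options):
    """Pick the action that minimizes harm to the car's own passengers."""
    return min(options, key=lambda o: o["passengers_harmed"])

scenario = [
    {"action": "stay_course", "passengers_harmed": 0, "pedestrians_harmed": 10},
    {"action": "swerve",      "passengers_harmed": 1, "pedestrians_harmed": 0},
]

print(utilitarian_choice(scenario)["action"])      # swerve
print(self_protective_choice(scenario)["action"])  # stay_course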
"Most people want to live in in a world where cars will minimize casualties," says Iyad Rahwan, an associate professor in the MIT Media Lab and co-author of a new paper outlining the study. "But everybody want their own car to protect them at all costs."
The result is what the researchers call a "social dilemma," in which people could end up making conditions less safe for everyone by acting in their own self-interest.
"If everybody does that, then we would end up in a tragedy ... whereby the cars will not minimize casualties," Rahwan adds.
Or, as the researchers write in the new paper, "For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest."
The paper, "The social dilemma of autonomous vehicles," is being published in the journal Science. The authors are Jean-Francois Bonnefon of the Toulouse School of Economics; Azim Shariff, an assistant professor of psychology at the University of Oregon; and Rahwan, the AT&T Career Development Professor and an associate professor of media arts and sciences at the MIT Media Lab.
Survey says
The researchers conducted six surveys, using the online Mechanical Turk public-opinion tool, between June 2015 and November 2015.
The results consistently showed that people will take a utilitarian approach to the ethics of autonomous vehicles, one emphasizing the sheer number of lives that could be saved. For instance, 76 percent of respondents believe it is more moral for an autonomous vehicle, should such a circumstance arise, to sacrifice one passenger rather than 10 pedestrians.
But the surveys also revealed a lack of enthusiasm for buying or using a driverless car programmed to avoid pedestrians at the expense of its own passengers. One question asked respondents to rate the morality of an autonomous vehicle programmed to crash and kill its own passenger to save 10 pedestrians; the rating dropped by a third when respondents considered the possibility of riding in such a car.
Similarly, people were strongly opposed to the idea of the government regulating driverless cars to ensure they would be programmed with utilitarian principles. In the survey, respondents said they were only one-third as likely to purchase a vehicle regulated this way, as opposed to an unregulated vehicle, which could presumably be programmed in any fashion.
"This is a challenge that should be on the mind of carmakers and regulators alike," the scholars write. Moreover, if autonomous vehicles actually turned out to be safer than regular cars, unease over the dilemmas of regulation "may paradoxically increase casualties by postponing the adoption of a safer technology."
Empirically informed
The aggregate performance of autonomous vehicles on a mass scale is, of course, yet to be determined. For now, ethicists say the survey offers interesting and novel data in an area of emerging moral interest.
The researchers, for their part, acknowledge that public-opinion polling on this issue is at a very early stage, which means any current findings "are not guaranteed to persist," as they write in the paper, if the landscape of driverless cars evolves.
Still, concludes Rahwan, "I think it was important to not just have a theoretical discussion of this, but to actually have an empirically informed discussion."

Story Source:
The above post is reprinted from materials provided by Massachusetts Institute of Technology. The original item was written by Peter Dizikes. Note: Materials may be edited for content and length.

Faster detection of pathogens in the lungs


What used to take several weeks is now possible in two days: thanks to new molecular-based methods, mycobacterial pathogens that cause pulmonary infections or tuberculosis can now be detected much more quickly. Time-consuming bacterial cultures no longer need to be grown from patient samples, meaning that a suitable therapy can be started quickly.
Mycobacteria cause various illnesses. Mycobacterium tuberculosis, the main representative of this genus, is the causative agent of tuberculosis, which killed around 1.5 million people worldwide in 2014. Nontuberculous mycobacteria can trigger pulmonary infections, lymph node infections and skin diseases in patients with compromised immune systems. With more and more people living with chronic lung diseases, and with the growing success of transplant medicine, these difficult-to-treat infections have risen steadily in recent decades.
A few days instead of several weeks
A research group from the Institute of Medical Microbiology at the University of Zurich and the National Centre for Mycobacteria used a large-scale study with more than 6,800 patient samples to examine molecular-based methods for the detection of mycobacterial pathogens. Because many mycobacteria only grow at a very slow pace, routine detection using bacteria cultures in highly specialized and expensive high-safety labs takes several weeks to complete. The subsequent susceptibility test to determine the appropriate medicine also takes one to two weeks. "For patients and doctors, this long waiting period is an unnecessary test of their patience," says Dr. Peter Keller from the Institute of Medical Microbiology at the UZH. "By comparison, with molecular detection methods, most patients know after one or two days whether they have an infection with tuberculosis pathogens or with nontuberculous mycobacteria."
For their study, the researchers developed a diagnostic algorithm to detect mycobacteria directly from the patient sample using genetic analysis. With this ultra-fast molecular detection method, the patient samples were examined continuously over three years and compared with the results from the bacteria cultures for more than 3,000 patients. The new molecular-based methods were found to be just as accurate as the lengthy culture-based techniques used to date.
Testing resistance to standard medicines
In addition, the molecular analysis makes it possible for the first time to also detect the nontuberculous mycobacteria directly from the patient sample within just a few hours. This means that suitable therapeutic measures can be initiated much more quickly. By contrast, if the patient has a tuberculosis infection, a further molecular assay is carried out to test susceptibility to the main tuberculosis drugs "rifampicin" and "isoniazid." "This also showed that the molecular-based method reliably predicts the culture-based resistance results. Certainty about whether the chosen therapy with standard medicines is likely to be successful can thus be obtained much sooner," explains Keller.
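The triage logic described above can be summarized in a short sketch. The data structures and stubbed test results below are hypothetical placeholders, not the actual assays or software used in the study:

from dataclasses import dataclass

@dataclass
class MolecularResult:
    # Hypothetical container for a direct-from-sample molecular test result.
    mycobacteria_detected: bool
    is_tuberculosis_complex: bool = False
    species: str = ""

def triage(result, resistance_test=None):
    """Route a sample according to the decision flow described in the article."""
    if not result.mycobacteria_detected:
        return "no mycobacteria detected"
    if result.is_tuberculosis_complex:
        # Follow-up molecular assay for the two main tuberculosis drugs.
        profile = resistance_test(["rifampicin", "isoniazid"]) if resistance_test else "pending"
        return f"M. tuberculosis complex detected; resistance profile: {profile}"
    # Nontuberculous mycobacteria are identified directly, within hours.
    return f"nontuberculous mycobacteria detected: {result.species}"

print(triage(MolecularResult(True, is_tuberculosis_complex=True),
             resistance_test=lambda drugs: {d: "susceptible" for d in drugs}))
print(triage(MolecularResult(True, species="M. avium")))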

Story Source:
The above post is reprinted from materials provided by University of Zurich. Note: Materials may be edited for content and length.

Friday, June 24, 2016

Fix for 3-billion-year-old genetic error could dramatically improve genetic sequencing


For 3 billion years, one of the major carriers of information needed for life, RNA, has had a glitch that creates errors when making copies of genetic information. Researchers at The University of Texas at Austin have developed a fix that allows RNA to accurately proofread for the first time. The new discovery, published June 23 in the journal Science, will increase precision in genetic research and could dramatically improve medicine based on a person's genetic makeup.
Certain viruses called retroviruses copy their RNA into DNA, a process called reverse transcription. This process is notoriously prone to errors because an evolutionary ancestor of all viruses never had the ability to accurately copy genetic material.
The new innovation engineered at UT Austin is an enzyme that performs reverse transcription but can also "proofread," or check its work while copying genetic code. The enzyme allows, for the first time, for large amounts of RNA information to be copied with near perfect accuracy.
"We created a new group of enzymes that can read the genetic information inside living cells with unprecedented accuracy," says Jared Ellefson, a postdoctoral fellow in UT Austin's Center for Systems and Synthetic Biology. "Overlooked by evolution, our enzyme can correct errors while copying RNA."

Reverse transcription is mainly associated with retroviruses such as HIV. In nature, these viruses' inability to copy DNA accurately may have helped create variety in species over time, contributing to the complexity of life as we know it.
Since discovering reverse transcription, scientists have used it to better understand genetic information related to inheritable diseases and other aspects of human health. Still, the error-prone nature of existing RNA sequencing is a problem for scientists.
"With proofreading, our new enzyme increases precision and fidelity of RNA sequencing," says Ellefson. "Without the ability to faithfully read RNA, we cannot accurately determine the inner workings of cells. These errors can lead to misleading data in the research lab and potential misdiagnosis in the clinical lab."
Ellefson and the team of researchers engineered the new enzyme using directed evolution to train a high-fidelity (proofreading) DNA polymerase to use RNA templates. The new enzyme, called RTX, retains the highly accurate and efficient proofreading function, while copying RNA. Accuracy is improved at least threefold, and it may be up to 10 times as accurate. This new enzyme could enhance the methods used to read RNA from cells.
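As a back-of-the-envelope illustration of what a 3- to 10-fold gain in accuracy means for a single transcript, consider the sketch below; the baseline error rate and transcript length are hypothetical values chosen only to make the arithmetic concrete:

baseline_error_rate = 1e-4   # hypothetical errors per base for a standard reverse transcriptase
transcript_length = 2_000    # hypothetical transcript length in bases

for improvement in (3, 10):
    improved_rate = baseline_error_rate / improvement
    print(f"{improvement}x more accurate: "
          f"{improved_rate * transcript_length:.3f} expected errors per transcript "
          f"(baseline: {baseline_error_rate * transcript_length:.2f})")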
"As we move towards an age of personalized medicine where everyone's transcripts will be read out almost as easily as taking a pulse, the accuracy of the sequence information will become increasingly important," said Andy Ellington, a professor of molecular biosciences. "The significance of this is that we can now also copy large amounts of RNA information found in modern genomes, in the form of the RNA transcripts that encode almost every aspect of our physiology. This means that diagnoses made based on genomic information are far more likely to be accurate. "
In addition to Ellefson and Ellington, authors include Jimmy Gollihar, Raghav Shroff, Haridha Shivram and Vishwanath Iyer. All are affiliated with the Department of Molecular Biosciences at The University of Texas at Austin.
This research was supported by grants from the Defense Advanced Research Projects Agency, National Security Science and Engineering Faculty Fellows, NASA and the Welch Foundation. A provisional patent was filed on the new sequence of the enzyme.


Story Source:
The above post is reprinted from materials provided by University of Texas at Austin. Note: Materials may be edited for content and length.

Probing giant planets' dark hydrogen


Hydrogen is the most-abundant element in the universe. It's also the simplest--sporting only a single electron in each atom. But that simplicity is deceptive, because there is still so much we have to learn about hydrogen.
One of the biggest unknowns is its transformation under the extreme pressures and temperatures found in the interiors of giant planets, where it is squeezed until it becomes liquid metal, capable of conducting electricity. New work published in Physical Review Letters by Carnegie's Alexander Goncharov and University of Edinburgh's Stewart McWilliams measures the conditions under which hydrogen undergoes this transition in the lab and finds an intermediate state between gas and metal, which they're calling "dark hydrogen."
On the surface of giant planets like Jupiter, hydrogen is a gas. But between this gaseous surface and the liquid metal hydrogen in the planet's core lies a layer of dark hydrogen, according to findings gleaned from the team's lab mimicry.
Using a laser-heated diamond anvil cell to create the conditions likely to be found in gas giant planetary interiors, the team probed the physics of hydrogen under a range of pressures from 10,000 to 1.5 million times normal atmospheric pressure and up to 10,000 degrees Fahrenheit.
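For readers more used to SI units, those conditions correspond to roughly 1 to 150 gigapascals and about 5,800 kelvin; a quick check of the arithmetic:

ATM_IN_PA = 101_325.0

def atm_to_gpa(atm):
    # 1 GPa = 1e9 Pa
    return atm * ATM_IN_PA / 1e9

def fahrenheit_to_kelvin(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

print(f"{atm_to_gpa(1e4):.1f} GPa to {atm_to_gpa(1.5e6):.0f} GPa")  # ~1.0 GPa to ~152 GPa
print(f"{fahrenheit_to_kelvin(1e4):.0f} K")                         # ~5811 K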
They discovered this unexpected intermediate phase, which does not reflect or transmit visible light, but does transmit infrared radiation, or heat.
"This observation would explain how heat can easily escape from gas giant planets like Saturn," explained Goncharov.
They also found that this intermediate dark hydrogen is somewhat metallic, meaning it can conduct an electric current, albeit poorly. This means that it could play a role in the process by which churning metallic hydrogen in gas giant planetary cores produces a magnetic field around these bodies, in the same way that the motion of liquid iron in Earth's core created and sustains our own magnetic field.
"This dark hydrogen layer was unexpected and inconsistent with what modeling research had led us to believe about the change from hydrogen gas to metallic hydrogen inside of celestial objects," Goncharov added.

Story Source:
The above post is reprinted from materials provided by Carnegie Institution for Science. Note: Materials may be edited for content and length.

Nanoscientists develop the 'ultimate discovery tool'


The discovery power of the gene chip is coming to nanotechnology. A Northwestern University research team is developing a tool to rapidly test millions and perhaps even billions or more different nanoparticles at one time to zero in on the best particle for a specific use.
When materials are miniaturized, their properties -- optical, structural, electrical, mechanical and chemical -- change, offering new possibilities. But determining what nanoparticle size and composition are best for a given application, such as catalysts, biodiagnostic labels, pharmaceuticals and electronic devices, is a daunting task.
"As scientists, we've only just begun to investigate what materials can be made on the nanoscale," said Northwestern's Chad A. Mirkin, a world leader in nanotechnology research and its application, who led the study. "Screening a million potentially useful nanoparticles, for example, could take several lifetimes. Once optimized, our tool will enable researchers to pick the winner much faster than conventional methods. We have the ultimate discovery tool."

Using a Northwestern technique that deposits materials on a surface, Mirkin and his team figured out how to make combinatorial libraries of nanoparticles in a very controlled way. (A combinatorial library is a collection of systematically varied structures encoded at specific sites on a surface.) Their study will be published June 24 by the journal Science.
The nanoparticle libraries are much like a gene chip, Mirkin says, where thousands of different spots of DNA are used to identify the presence of a disease or toxin. Thousands of reactions can be done simultaneously, providing results in just a few hours. Similarly, Mirkin and his team's libraries will enable scientists to rapidly make and screen millions to billions of nanoparticles of different compositions and sizes for desirable physical and chemical properties.
"The ability to make libraries of nanoparticles will open a new field of nanocombinatorics, where size -- on a scale that matters -- and composition become tunable parameters," Mirkin said. "This is a powerful approach to discovery science."
Mirkin is the George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences and founding director of Northwestern's International Institute for Nanotechnology.
"I liken our combinatorial nanopatterning approach to providing a broad palette of bold colors to an artist who previously had been working with a handful of dull and pale black, white and grey pastels," said co-author Vinayak P. Dravid, the Abraham Harris Professor of Materials Science and Engineering in the McCormick School of Engineering.

Using five metallic elements -- gold, silver, cobalt, copper and nickel -- Mirkin and his team developed an array of unique structures by varying every elemental combination. In previous work, the researchers had shown that particle diameter also can be varied deliberately on the 1- to 100-nanometer length scale.
Some of the compositions can be found in nature, but more than half of them have never existed before on Earth. And when pictured using high-powered imaging techniques, the nanoparticles appear like an array of colorful Easter eggs, each compositional element contributing to the palette.
To build the combinatorial libraries, Mirkin and his team used Dip-Pen Nanolithography, a technique developed at Northwestern in 1999, to deposit onto a surface individual polymer "dots," each loaded with different metal salts of interest. The researchers then heated the polymer dots, reducing the salts to metal atoms and forming a single nanoparticle. The size of the polymer dot can be varied to change the size of the final nanoparticle.
This control of both size and composition of nanoparticles is very important, Mirkin stressed. Having demonstrated control, the researchers used the tool to systematically generate a library of 31 nanostructures using the five different metals.
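The figure of 31 follows directly from the combinatorics: five metals admit 2^5 - 1 = 31 non-empty elemental combinations. A quick enumeration, assuming the library spans every such combination as the text describes:

from itertools import combinations

metals = ["Au", "Ag", "Co", "Cu", "Ni"]
library = [combo for r in range(1, len(metals) + 1)
           for combo in combinations(metals, r)]

print(len(library))   # 31: 5 singles + 10 pairs + 10 triples + 5 quadruples + 1 five-metal particle
print(library[:5])    # the five single-metal compositions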
To help analyze the complex elemental compositions and size/shape of the nanoparticles down to the sub-nanometer scale, the team turned to Dravid, Mirkin's longtime friend and collaborator. Dravid, founding director of Northwestern's NUANCE Center, contributed his expertise and the advanced electron microscopes of NUANCE to spatially map the compositional trajectories of the combinatorial nanoparticles.
Now, scientists can begin to study these nanoparticles as well as build other useful combinatorial libraries consisting of billions of structures that subtly differ in size and composition. These structures may become the next materials that power fuel cells, efficiently harvest solar energy and convert it into useful fuels, and catalyze reactions that take low-value feedstocks from the petroleum industry and turn them into high-value products useful in the chemical and pharmaceutical industries.

Story Source:
The above post is reprinted from materials provided by Northwestern University. The original item was written by Megan Fellman. Note: Materials may be edited for content and length.

Tuesday, June 21, 2016

SHOCKING: A child traumatized by an educator's violence (VIDEO)

The shock suffered by the 3-year-old boy abused at the "Diren" kindergarten appears to have been even greater after what we all saw in the video, in which the kindergarten's administrator, Agnes Shabani, grabs him by the hair and ear and flings him about with her hands. Since the day of the abuse, the 3-year-old has been in a severe psychological state. The footage that "Stop" will air tonight is distressing.

Monday, June 20, 2016

Rare, blind catfish never before found in US discovered in national park cave in Texas


An extremely rare eyeless catfish species previously known to exist only in Mexico has been discovered in Texas.
Dean Hendrickson, curator of ichthyology at The University of Texas at Austin, identified the live fish, discovered in a deep limestone cave at Amistad National Recreation Area near Del Rio, Texas, as the endangered Mexican blindcat (Prietella phreatophila). The pair of small catfish, collected by a team in May, have been relocated to the San Antonio Zoo.
The Mexican blindcat, a species that grows to no more than 3 inches in length, is known to dwell only in areas supported by the Edwards-Trinity Aquifer that underlies the Rio Grande basin in Texas and Coahuila. The new blindcat finding lends additional weight to a theory that water-filled caves below the Rio Grande may connect the Texas and Mexico portions of the aquifer.
"Since the 1960s there have been rumors of sightings of blind, white catfishes in that area, but this is the first confirmation," Hendrickson said. "I've seen more of these things than anybody, and these specimens look just like the ones from Mexico."
Jack Johnson, a caver and National Park Service resource manager at Amistad, first spotted some of the slow-moving, pinkish-white fish with no eyes in April 2015. After several attempts to relocate the species, Johnson and biologist Peter Sprouse of Zara Environmental LLC led the team that found the fish again last month. Mexican blindcats are a pale pink color because their blood can be seen through the translucent skin, and they dwell exclusively in groundwater.
"Cave-dwelling animals are fascinating in that they have lost many of the characteristics we are familiar with in surface animals, such as eyes, pigmentation for camouflage, and speed," Sprouse said. "They have found an ecological niche where none of those things are needed, and in there they have evolved extra-sensory abilities to succeed in total darkness."
The Mexican blindcat was originally described in 1954 when found in wells and springs near Melchor Múzquiz in the northern Mexican state of Coahuila. It was subsequently listed as an endangered species by the Mexican government, and as a foreign endangered species by the U.S. Fish and Wildlife Service. Hendrickson led efforts to locate additional blindcat sites in Mexico and Texas for years but only located them in Mexico on previous expeditions.
"Aquifer systems like the one that supports this rare fish are also the lifeblood of human populations and face threats from contamination and over-pumping of groundwater," Johnson said. "The health of rare and endangered species like this fish at Amistad can help indicate the overall health of the aquifer and water resources upon which many people depend."
The fish are not yet on public display. They will be maintained alive in a special facility designed to accommodate cave and aquifer species at the San Antonio Zoo's Department of Conservation and Research.
"The San Antonio Zoo has a series of labs specially designed to keep subterranean wildlife safe and healthy," said Danté Fenolio, vice president of conservation and research at the San Antonio Zoo. "The fact that the zoo can participate now and house these very special catfish demonstrates the zoo's commitment to the conservation of creatures that live in groundwater."
Others involved in the discovery were Andy Gluesenkamp and Ben Hutchins of Texas Parks and Wildlife, Gary Garrett and Adam Cohen of UT Austin and Jean Krejca of Zara Environmental.
The finding brings the number of blind catfish species within the U.S. to three, all found only in Texas. The two other species of blind catfish in Texas, the toothless blindcat (Trogloglanis pattersoni) and the widemouth blindcat (Satan eurystomus), live in part of the Edwards Aquifer complex, the deep Edwards pool below the city of San Antonio.

Story Source:
The above post is reprinted from materials provided by University of Texas at Austin. The original item was written by Christine S Sinatra. Note: Materials may be edited for content and length.

Bright spots shine light on the future of coral reefs


Researchers have discovered a handful of 'bright spots' among the world's embattled coral reefs, offering the promise of a radical new approach to conservation.
In one of the largest global studies of its kind, researchers conducted over 6,000 reef surveys in 46 countries across the globe, and discovered 15 'bright spots' -- places where, against all the odds, there were a lot more fish on coral reefs than expected.
"Given the widespread depletion of coral reef fisheries globally, we were really excited to find these bright spots that were doing much better than we anticipated," says lead author Professor Josh Cinner from the ARC Centre of Excellence for Coral Reef Studies at James Cook University.
"These 'bright spots' are reefs with more fish than expected based on their exposure to pressures like human population, poverty, and unfavourable environmental conditions.
"To be clear, bright spots are not necessarily pristine reefs, but rather reefs that have more fish than they should, given the pressures they face.
"We wanted to know why these reefs could 'punch above their weight' so-to-speak, and whether there are lessons we can learn about how to avoid the degradation often associated with overfishing."
Co-author, Professor Nick Graham of Lancaster University says globally, coral reefs are in decline and current strategies for preserving them are insufficient.
"Our bright spots approach has identified places we did not previously know were so successful, and the really interesting thing is that they are not necessarily untouched by man," he says.
"We believe their discovery offers the potential to develop exciting new solutions for coral reef conservation."
"Importantly, the bright spots had a few things in common, which, if applied to other places, might help promote better reef conditions."
"Many bright spots had strong local involvement in how the reefs were managed, local ownership rights, and traditional management practices," says co-author Dr. Christina Hicks of Lancaster and Stanford Universities.
The scientists also identified 35 'dark spots' -- these were reefs with fish stocks in worse shape than expected.
"Dark spots also had a few defining characteristics; they were subject to intensive netting activities and there was easy access to freezers so people could stockpile fish to send to the market," says Dr. Hicks.
This type of bright spots analysis has been used in fields such as human health to improve the wellbeing of millions of people. It is the first time it has been rigorously developed for conservation.
"We believe that the bright spots offer hope and some solutions that can be applied more broadly across the world's coral reefs," says Prof. Cinner.
"Specifically, investments that foster local involvement and provide people with ownership rights can allow people to develop creative solutions that help defy expectations of reef fisheries depletion.
"Conversely, dark spots may highlight development or management pathways to avoid."
Bright spots were typically found in the Pacific Ocean in places like the Solomon Islands, parts of Indonesia, Papua New Guinea and Kiribati. Dark spots were more globally distributed and found in every major ocean basin.
The study has been published in the journal Nature. Thirty-nine scientists from 34 different universities and conservation groups conducted the research.

Story Source:
The above post is reprinted from materials provided by ARC Centre of Excellence in Coral Reef Studies. Note: Materials may be edited for content and length.

Laser manipulates electronic properties


A new semiconducting material that is only three atomic layers thick has emerged with more exotic, malleable electronic properties than those of traditional semiconductors. These properties come from electrons that, like a ball rolling down a hill into a valley, prefer the lower energy levels at the bottom of electronic energy "valleys." Now the valley depth can be shifted optically, and with extreme speed, using sculpted laser pulses.
Layered materials where electrons are constrained to two dimensions can be engineered into novel electronic structures with unique electronic and optical properties. Optical manipulation of electrons can lead to new modes of energy conversion and computational devices, such as electronics based on energy valley states instead of conventional electronics based on charge flows and accumulation.
A new ultra-thin semiconducting material (tungsten disulfide or WS2) consists of three atomic layers in an "atomic sandwich" configuration with a heavy metal tungsten atom layer between lighter element sulfur atom layers. The material is quasi-two dimensional, just three atoms thick, and exhibits a unique electronic structure with subtle differences depending on the direction relative to the repeating atomic bonds in the layer plane. The differences create two distinct but similar valleys in the electronic structure, where electrons can accumulate with different directions, but with associated energy levels and gaps that are equal.
Adding a specific type of laser beam, where the polarization direction (the direction of interacting electric forces) rotates as the light propagates, creates a new type of electron energy structure that can be probed and manipulated. The electron structural landscape, which began with two similar valleys, can also be made to have different distinct valleys with different energy depths.
The valley depth can now be manipulated with light to encode information like zeroes and ones for conventional data. Additionally, changing the energy of the valleys with light allows for fast manipulation without the need for contact with complex electrodes as in conventional electronics.
This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences (optical laser experiments and data analysis); the National Science Foundation (material growth, initial characterization, and theoretical interpretation); and the Ministry of Science and Technology of the Republic of China.

Story Source:
The above post is reprinted from materials provided by Department of Energy, Office of Science. Note: Materials may be edited for content and length.

Black holes and the prospects for measuring gravitational waves


The supermassive black holes found at the centre of every galaxy, including our own Milky Way, may, on average, be smaller than we thought, according to work led by University of Southampton astronomer Dr Francesco Shankar. If he and his colleagues are right, then the gravitational waves produced when they merge will be harder to detect than previously assumed. The international team of scientists publish their result in Monthly Notices of the Royal Astronomical Society.
Black holes play a fundamental role in astronomy, gravitation, and particle physics. They are enormously concentrated masses, sometimes millions to billions of times more massive than the Sun, and have gravitational fields that are so powerful that not even light travels fast enough to escape their grasp, hence the name ‘black hole’.
Supermassive black holes have been found lurking in the cores of all galaxies observed with high enough sensitivity. Despite this, little is known about how they formed. What is known is that the mass of a supermassive black hole at the centre of a galaxy is related to the total mass and the typical speeds (the "velocity dispersion") of the stars in its host.
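The article does not give the relation's functional form, but it is commonly expressed as a power law in velocity dispersion. The normalization and slope below are stand-in values broadly in line with published calibrations, for illustration only; they are not the paper's own fit:

def mbh_from_sigma(sigma_km_s, m0_solar=3e8, beta=4.4, sigma0=200.0):
    """Illustrative M-sigma power law: black hole mass (solar masses) from velocity dispersion."""
    # m0_solar and beta are stand-in values, not the paper's calibration.
    return m0_solar * (sigma_km_s / sigma0) ** beta

for sigma in (100, 200, 300):
    print(f"sigma = {sigma} km/s  ->  M_BH ~ {mbh_from_sigma(sigma):.1e} solar masses")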
The very existence of this relationship suggests a close co-evolution between black holes and their host galaxies, and understanding their origin is vital for a proper model of how galaxies and black holes form and evolve. This is because many galaxy evolution models invoke powerful winds and/or jets from the central supermassive black hole to control or even stop star formation in the host galaxy (so-called "quasar feedback"). Alternatively, multiple mergers of galaxies - and their central black holes - are also often suggested as the primary drivers behind the evolution of massive galaxies.
Despite major theoretical and observational efforts in recent decades, it remains unclear whether quasar feedback actually ever occurred in galaxies, and to what extent mergers have truly shaped galaxies and their black holes. Some of this is because modellers have had a tough time reproducing the observed black hole-galaxy scaling relations, and in reconciling the properties of nearby black holes with those of more distant populations.
The new work shows that selection effects – where what is observed is not representative - have significantly biased the view of the local black hole population. This bias has led to significantly overestimated black hole masses. It suggests that modellers should look to velocity dispersion rather than stellar mass as the key to unlocking the decades-old puzzles of both quasar feedback and the history of galaxies.
With less mass than previously thought, supermassive black holes have on average weaker gravitational fields. Despite this, they were still able to power quasars, making them bright enough to be observed over distances of billions of light years.
Unfortunately, it also implies a substantial reduction in the gravitational wave signal expected to be detectable by pulsar timing array experiments. Gravitational waves, ripples in spacetime first predicted by Albert Einstein in his general theory of relativity in 1915, were finally detected last year and announced by the LIGO team this February. The hope is that upcoming observatories will observe many more gravitational wave events, providing astronomers with a new technique for observing the universe.
Dr Shankar comments: “Gravitational wave astronomy is opening up an entirely new way of observing the universe. Our results though illustrate how challenging a complete census of the gravitational background could be, with the signals from the largest black holes being paradoxically among the most difficult to detect with present technology.”
Researchers expect pairs of supermassive black holes, found in merging galaxies, to be the strongest sources of gravitational waves in the universe. However, the more massive the pairs, the lower the frequencies of the emitted waves, which become inaccessible to ground based interferometers like LIGO. Gravitational waves from supermassive black holes can however be detected from space via dedicated gravitational telescopes (such as the present and future ESA missions LISA pathfinder and eLISA), or by a different method using ‘pulsar timing arrays’.
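A rough Keplerian estimate shows why such massive pairs radiate in the nanohertz band probed by pulsar timing arrays rather than the band LIGO observes; the masses and separations below are hypothetical examples:

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
PC = 3.086e16      # metres per parsec

def gw_frequency_hz(total_mass_solar, separation_m):
    """GW frequency of a circular binary: twice the Keplerian orbital frequency."""
    m = total_mass_solar * M_SUN
    f_orb = math.sqrt(G * m / separation_m**3) / (2 * math.pi)
    return 2 * f_orb

# A hypothetical 2-billion-solar-mass pair separated by 0.01 parsec:
print(f"{gw_frequency_hz(2e9, 0.01 * PC):.1e} Hz")   # ~3e-08 Hz, the nanohertz PTA band
# A hypothetical 60-solar-mass stellar pair separated by 350 km:
print(f"{gw_frequency_hz(60, 350e3):.0f} Hz")        # ~140 Hz, the band LIGO observes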
Pulsar timing arrays monitor the collapsed, rapidly rotating remnants of massive stars, which emit pulsating signals. Even this method, though, is still a few years away from making a detection, according to a follow-up study by the same team, expected to appear in another Monthly Notices paper later this year.

Story Source:
The above post is reprinted from materials provided by Royal Astronomical Society (RAS). Note: Materials may be edited for content and length.

Need to remember something? Exercise four hours later!


A new study suggests an intriguing strategy to boost memory for what you've just learned: hit the gym four hours later. The findings reported in the Cell Press journal Current Biology on June 16 show that physical exercise after learning improves memory and memory traces, but only if the exercise is done in a specific time window and not immediately after learning.
"It shows that we can improve memory consolidation by doing sports after learning," says Guillén Fernández of the Donders Institute at the Radboud University Medical Center in the Netherlands.
In the new study, Fernández, along with Eelco van Dongen and their colleagues, tested the effects of a single session of physical exercise after learning on memory consolidation and long-term memory. Seventy-two study participants learned 90 picture-location associations over a period of approximately 40 minutes before being randomly assigned to one of three groups: one group performed exercise immediately, the second performed exercise four hours later, and the third did not perform any exercise. The exercise consisted of 35 minutes of interval training on an exercise bike at an intensity of up to 80 percent of participants' maximum heart rates. Forty-eight hours later, participants returned for a test to show how much they remembered while their brains were imaged via magnetic resonance imaging (MRI).
The researchers found that those who exercised four hours after their learning session retained the information better two days later than those who exercised either immediately or not at all. The brain images also showed that exercise after a time delay was associated with more precise representations in the hippocampus, an area important to learning and memory, when an individual answered a question correctly.
"Our results suggest that appropriately timed physical exercise can improve long-term memory and highlight the potential of exercise as an intervention in educational and clinical settings," the researchers conclude.
It's not yet clear exactly how or why delayed exercise has this effect on memory. However, earlier studies of laboratory animals suggest that naturally occurring chemical compounds in the body known as catecholamines, including dopamine and norepinephrine, can improve memory consolidation, the researchers say. One way to boost catecholamines is through physical exercise.
Fernández says they will now use a similar experimental setup to study the timing and molecular underpinnings of exercise and its influence on learning and memory in more detail.

Story Source:
The above post is reprinted from materials provided by Cell Press. Note: Materials may be edited for content and length.

Sunday, June 19, 2016

Changed gut bug mix linked to C-section, antibiotics and formula lasts through baby's first year


Birth by C-section, exposure to antibiotics and formula feeding slow the development and decrease the diversity of a baby's microbes through the first year of life. That is the finding of a study led by researchers from NYU Langone Medical Center and published June 15 in the journal Science Translational Medicine.
The study results center on the microbiome, the mix of bacterial species that live on human skin and in our guts, and that co-evolved with humans to play roles in digestion, metabolism and immunity. As rates of children's exposure to C-sections, antibiotic use, and formula feeding have increased in recent decades, the incidence of asthma, autoimmune diseases and obesity has more than doubled. Many studies have now linked these trends, but only a few experiments in mice have shown microbial differences to directly increase disease risk.
"Our results provide evidence that modern practices change a baby's microbial communities in ways that last through the first year," says Martin Blaser, MD, the Muriel G. and George W. Singer Professor of Translational Medicine at NYU School of Medicine, and the study's senior author. "The big, remaining question is whether or not changes in this timeframe, even if resolved later on, affect the founding of microbiomes with lifetime consequences for a child's immune function and metabolism."
"The reasons for links seen between microbial changes and many diseases remain unclear, but our study supports the concept of altered microbiota assembly as a plausible explanation," adds Blaser, also professor of Microbiology at NYU Langone.
In the new study, researchers assessed the effects of modern practices on intestinal microbiota development in 43 New York children, and found multiple changes in the gut microbiomes of the babies based on delivery mode, exposure to antibiotics, and feeding method. More than 1,000 stool samples were collected monthly from these infants, 24 of whom were born by vaginal delivery and 19 by C-section.
The team then used genomic and statistical techniques to analyze the millions of pieces of bacterial DNA in the samples. Past studies had already matched key DNA sequences to known bacterial species, enabling the team to define each baby's microbiome, and to watch the effect of each practice at intervals over the first two years of life.
The study results reflect concepts in ecology such as diversity and dominance of species. The different ways in which babies acquired their original microbiomes, combined with the factors that changed species balance, were found to determine not only bacterial profile, but also which species came to dominate and when.
Unexpectedly, the study authors found that, compared to vaginally born infants, those delivered by C-section showed significantly greater species diversity in the weeks after birth. However, these measures declined in cesarean-born infants during their first month, after which they displayed lower diversity up to two years of age. The change in birth mode interrupted the natural interplay between diversity and dominance, says Blaser.
The research team also found that antibiotic treatment significantly diminished diversity of bacterial species immediately following birth. Diversity then recovered during the first year of life to resemble that of infants not exposed to antibiotics. Species diversity was decreased in formula-fed children as well during the second year of life.
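The article does not specify which diversity metrics the team used; the Shannon index is one standard way such species-diversity comparisons are made, sketched here on made-up abundance counts:

import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

even_community = [500, 300, 150, 40, 10]    # made-up counts spread across several species
dominated_community = [900, 80, 15, 4, 1]   # made-up counts dominated by one species

print(f"{shannon_diversity(even_community):.2f}")       # higher value = more diverse
print(f"{shannon_diversity(dominated_community):.2f}")  # lower value = less diverse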
Beyond the analysis of microbial diversity, the research team also compared each child's set of microbes to an index of species shown to track with normal microbiome development over time. Studies in recent years found a standard pattern of turnover (succession) in children's microbiomes as they age, with species better adapted for the womb giving way to those associated with a diet of solid food.
Using this model, the research team found that microbiota maturation stagnated in C-section delivered infants between six months and two years of age. Similarly, children exposed to antibiotics showed delayed microbiota maturation compared to those not exposed, as did children fed by formula instead of breast milk, but during the second year of life. In total, each of these practices affected the maturation and succession of the microbiota.
As for potential solutions, a pilot study led by co-author Maria Gloria Dominguez Bello, PhD, published in Nature Medicine in February found that swabbing babies born by C-section with their mother's birth fluid partially restored the mix of bacteria that coat a newborn's body when delivered vaginally. Larger studies are planned moving forward.

Story Source:
The above post is reprinted from materials provided by NYU Langone Medical Center / New York University School of Medicine. Note: Materials may be edited for content and length.

New imaging method reveals nanoscale details about DNA


Researchers have developed a new enhanced DNA imaging technique that can probe the structure of individual DNA strands at the nanoscale. Since DNA is at the root of many disease processes, the technique could help scientists gain important insights into what goes wrong when DNA becomes damaged or when other cellular processes affect gene expression.
The new imaging method builds on a technique called single-molecule microscopy by adding information about the orientation and movement of fluorescent dyes attached to the DNA strand.
W. E. Moerner of Stanford University, USA, is the founder of single-molecule spectroscopy, a breakthrough method from 1989 that allowed scientists to visualize single molecules with optical microscopy for the first time. Of the 2014 Nobel Laureates honored for optical microscopy beyond the diffraction limit (Moerner, Hell and Betzig), Moerner and Betzig used single molecules, imaged at different times, to build up pictures of dense arrays of molecules.
In The Optical Society's journal for high impact research, Optica, the research team led by Moerner describes their new technique and demonstrates it by obtaining super-resolution images and orientation measurements for thousands of single fluorescent dye molecules attached to DNA strands.
"You can think of these new measurements as providing little double-headed arrows that show the orientation of the molecules attached along the DNA strand," said Moerner. "This orientation information reports on the local structure of the DNA bases because they constrain the molecule. If we didn't have this orientation information the image would just be a spot."
Adding more nanoscale information
A strand of DNA is a very long, but narrow string, just a few nanometers across. Single-molecule microscopy, together with fluorescent dyes that attach to DNA, can be used to better visualize this tiny string. Until now, it was difficult to understand how those dyes were oriented and impossible to know if the fluorescent dye was attached to the DNA in a rigid or somewhat loose way.
Adam S. Backer, first author of the paper, developed a fairly simple way to obtain orientation and rotational dynamics from thousands of single molecules in parallel. "Our new imaging technique examines how each individual dye molecule labeling the DNA is aligned relative to the much larger structure of DNA," said Backer. "We are also measuring how wobbly each of these molecules is, which can tell us whether this molecule is stuck in one particular alignment or whether it flops around over the course of our measurement sequence."
The new technique offers more detailed information than today's so-called "ensemble" methods, which average the orientations for a group of molecules, and it is much faster than confocal microscopy techniques, which analyze one molecule at a time. The new method can even be used for molecules that are relatively dim.
Because the technique provides nanoscale information about the DNA itself, it could be useful for monitoring DNA conformation changes or damage to a particular region of the DNA, which would show up as changes in the orientation of dye molecules. It could also be used to monitor interactions between DNA and proteins, which drive many cellular processes.
30,000 single-molecule orientations
The researchers tested the enhanced DNA imaging technique by using it to analyze an intercalating dye, a type of fluorescent dye that slides into the areas between DNA bases. In a typical imaging experiment, they acquire up to 300,000 single-molecule locations and 30,000 single-molecule orientation measurements in just over 13 minutes. The analysis showed that the individual dye molecules were oriented perpendicular to the DNA strand's axis and that, while the molecules tended to orient in this perpendicular direction, they also moved around within a constrained cone.
The investigators next performed a similar analysis using a different type of fluorescent dye that consists of two parts: one part that attaches to the side of the DNA and a fluorescent part that is connected via a floppy tether. The enhanced DNA imaging technique detected this floppiness, showing that the method could be useful in helping scientists understand, on a molecule by molecule basis, whether different labels attach to DNA in a mobile or fixed way.
In the paper, the researchers demonstrated a spatial resolution of around 25 nanometers and single-molecule orientation measurements with an accuracy of around 5 degrees. They also measured the rotational dynamics, or floppiness, of single-molecules with an accuracy of about 20 degrees.
How it works
To acquire single-molecule orientation information, the researchers used a well-studied technique that adds an optical element called an electro-optic modulator to the single-molecule microscope. For each camera frame, this device changed the polarization of the laser light used to illuminate all the fluorescent dyes.
Since fluorescent dye molecules with orientations most closely aligned with the laser light's polarization will appear brightest, measuring the brightness of each molecule in each camera frame allowed the researchers to quantify orientation and rotational dynamics on a molecule-by-molecule basis. Molecules that switched between bright and dark in sequential frames were rigidly constrained at a particular orientation while those that appeared bright for sequential frames were not rigidly holding their orientation.
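A toy version of that readout logic: a molecule's excitation efficiency scales roughly with the squared cosine of the angle between its dipole and the laser polarization, so its frame-to-frame brightness under a rotating polarization encodes its in-plane orientation. The simulation below ignores the real experiment's three-dimensional dipole geometry, noise model and calibration:

import numpy as np

rng = np.random.default_rng(1)
pol_angles = np.deg2rad(np.arange(0, 180, 15))   # laser polarization set per camera frame

def simulate_brightness(theta, wobble=0.0):
    """Frame-by-frame brightness of a dipole at angle theta; wobble washes out the modulation."""
    modulation = (np.cos(theta - pol_angles) ** 2) * (1 - wobble) + wobble / 2
    return modulation + rng.normal(0, 0.02, pol_angles.size)

def estimate_orientation(brightness):
    """Recover the in-plane dipole angle from the phase of the cos^2 modulation."""
    # cos^2(x) = (1 + cos 2x) / 2, so the orientation sits in the second harmonic.
    c = np.sum(brightness * np.cos(2 * pol_angles))
    s = np.sum(brightness * np.sin(2 * pol_angles))
    return 0.5 * np.arctan2(s, c) % np.pi

observed = simulate_brightness(np.deg2rad(70), wobble=0.2)
print(f"estimated orientation: {np.rad2deg(estimate_orientation(observed)):.1f} degrees")  # ~70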
"If someone has a single-molecule microscope, they can perform our technique pretty easily by adding the electro-optic modulator," said Backer. "We've used fairly standard tools in a slightly different way and analyzed the data in a new way to gain additional biological and physical insight."

Story Source:
The above post is reprinted from materials provided by The Optical Society. Note: Materials may be edited for content and length.
