The word intuition is derived from the Latin intueor – to see; intuition is thus often invoked to explain how the mind can “see” answers to problems or decisions in the absence of explicit reasoning – a “gut reaction”.
Several recent popular psychology books have emphasised this “power of intuition” and our ability to “think without thinking”, sometimes suggesting we should rely more heavily on intuition than on deliberative (slow) or “rational” thought processes. Such books also argue that most of the time we act intuitively – that is, without knowing why we do the things we do. But what is the evidence for these claims? And what is intuition anyway?
Albert Einstein once noted “intuition is nothing but the outcome of earlier intellectual experience”. In a similar vein, the American psychologist Herbert A. Simon (a fellow Nobel Laureate) stated that intuition was “nothing more and nothing less than recognition”.
These definitions are very useful because they remind us that intuition need not refer to some magical process by which answers pop into our minds from thin air or from deep within the unconscious. On the contrary: intuitive decisions are often a product of previous intense and/or extensive explicit thinking. Such decisions may appear subjectively fast and effortless because they are made on the basis of recognition.
As a simple example, consider the decision to take an umbrella when you leave for work in the morning. A quick glance at the sky can provide a cue (such as portentous clouds); the cue gives us access to information stored in memory (rain is likely); and this information provides an answer (take an umbrella). When such cues are not so readily apparent, or information in memory is either absent or more difficult to access, our decisions shift to become more deliberative.
Intuitive thought lacks awareness of intermediate cognitive steps (because there aren’t any) and does not feel effortful (because the cues trigger the response). Intuition is characterised by feelings of familiarity and fluency.
But intuition can also be misleading.
In a contrasting body of work, decision psychologist Daniel Kahneman (yet another Nobel Laureate) illustrated the flaws inherent in an over-reliance on intuition. To illustrate such an error, he considered this simple problem: If a bat and a ball cost $1.10 in total and the bat costs $1 more than the ball, how much does the ball cost?
If you are like many people, your immediate – intuitive? – answer would be “10 cents”. The total readily separates into a $1 and 10 cents, and 10 cents seems like a plausible amount. But a little more thinking reveals that this intuitive answer is wrong. If the ball cost 10 cents the bat would have to be $1.10 and the total would be $1.20! So the ball must cost 5 cents.
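The substitution behind the correct answer can be sketched in a couple of lines of Python (working in cents to avoid floating-point rounding):

```python
# bat + ball = 110 cents, and bat = ball + 100 cents
# Substituting: (ball + 100) + ball = 110  ->  2 * ball = 10  ->  ball = 5
total = 110        # total price in cents
difference = 100   # the bat costs this much more than the ball
ball = (total - difference) // 2
bat = ball + difference
print(ball, bat)   # 5 105
```

The intuitive “10 cents” fails the check instantly: 10 + 110 = 120, not 110.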
So why does intuition lead us astray in this example? Because here intuition is not based on skilled recognition, but rather on simple associations that come to mind readily (i.e., the association between the $1 and the 10 cents).
Kahneman and Tversky famously argued these simple associations are relied upon because we often like to use heuristics, or shortcuts, that make thinking easier. The take-home message from the psychological study of intuition is that we need to exercise caution and attempt to use intuition adaptively.
When we are in situations we have experienced lots of times (such as making judgements about the weather), intuition – or rapid recognition of relevant “cues” – can be a good guide. But if we find ourselves in novel territory or in situations in which valid cues are hard to come by (such as stock market predictions), relying on our “gut” may not be wise. Our inherent tendency to get away with the minimum amount of thinking could lead to slip-ups in our reasoning.
A record-setting blast of gamma rays from a dying star in a distant galaxy has wowed astronomers around the world. The eruption, which is classified as a gamma-ray burst, or GRB, and designated GRB 130427A, produced the highest-energy light ever detected from such an event.
“We have waited a long time for a gamma-ray burst this shockingly, eye-wateringly bright,” said Julie McEnery, project scientist for the Fermi Gamma-ray Space Telescope at NASA’s Goddard Space Flight Center in Greenbelt, Md. “The GRB lasted so long that a record number of telescopes on the ground were able to catch it while space-based observations were still ongoing.”
The burst was subsequently detected at optical, infrared and radio wavelengths by ground-based observatories, based on the rapid, accurate position provided by NASA’s Swift satellite. Astronomers quickly learned that the GRB was located about 3.6 billion light-years away, which for these events is relatively close.
Gamma-ray bursts are the universe’s most luminous explosions. Astronomers think most occur when massive stars run out of nuclear fuel and collapse under their own weight. As the core collapses into a black hole, jets of material shoot outward at nearly the speed of light.
The jets bore all the way through the collapsing star and continue into space, where they interact with gas previously shed by the star and generate bright afterglows that fade with time.
If the GRB is near enough, astronomers usually discover a supernova at the site a week or so after the outburst.
“This GRB is in the closest 5 percent of bursts, so the big push now is to find an emerging supernova, which accompanies nearly all long GRBs at this distance,” said Goddard’s Neil Gehrels, principal investigator for Swift.
Ground-based observatories are monitoring the location of GRB 130427A and expect to find an underlying supernova by midmonth.
The 1st animation: The maps in the animation show how the sky looks at gamma-ray energies above 100 million electron volts (MeV) with a view centered on the north galactic pole. The first frame shows the sky during a three-hour interval prior to GRB 130427A. The second frame shows a three-hour interval starting 2.5 hours before the burst, and ending 30 minutes into the event. The Fermi team chose this interval to demonstrate how bright the burst was relative to the rest of the gamma-ray sky. This burst was bright enough that Fermi autonomously left its normal surveying mode to give the LAT instrument a better view, so the three-hour exposure following the burst does not cover the whole sky in the usual way.
The 2nd animation: This animation shows a more detailed Fermi LAT view of GRB 130427A. The sequence shows high-energy (100 MeV to 100 GeV) gamma rays from a 20-degree-wide region of the sky starting three minutes before the burst to 14 hours after. Following an initial one-second spike, the LAT emission remained relatively quiet for the next 15 seconds while Fermi’s GBM instrument showed bright, variable lower-energy emission. Then the burst re-brightened in the LAT over the next few minutes and remained bright for nearly half a day.
Credit: NASA/Swift/Stefan Immler
[Image credit: NASA/JPL-Caltech/MSSS]
This image is a scaled-down version of a full-circle view which combined nearly 900 images taken by NASA’s Curiosity Mars rover. The Full-Res TIFF and Full-Res JPEG provided in the top right legend are smaller resolution versions of the 1.3 billion pixel version for easier browser viewing and downloading. Viewers can explore the full-circle image with pan and zoom controls at http://mars.nasa.gov/bp1/.
The view is centered toward the south, with north at both ends. It shows Curiosity at the “Rocknest” site where the rover scooped up samples of windblown dust and sand. Curiosity used three cameras to take the component images on several different days between Oct. 5 and Nov. 16, 2012.
This first NASA-produced gigapixel image from the surface of Mars is a mosaic using 850 frames from the telephoto camera of Curiosity’s Mast Camera instrument, supplemented with 21 frames from the Mastcam’s wider-angle camera and 25 black-and-white frames — mostly of the rover itself — from the Navigation Camera. It was produced by the Multi-Mission Image Processing Laboratory (MIPL) at NASA’s Jet Propulsion Laboratory, Pasadena, Calif.
This version of the panorama retains “raw” color, as seen by the camera on Mars under Mars lighting conditions. A white-balanced version is available at PIA16918. The view shows illumination effects from variations in the time of day for pieces of the mosaic. It also shows variations in the clarity of the atmosphere due to variable dustiness during the month while the images were acquired.
NASA’s Mars Science Laboratory project is using Curiosity and the rover’s 10 science instruments to investigate the environmental history within Gale Crater, a location where the project has found that conditions were long ago favorable for microbial life.
Malin Space Science Systems, San Diego, built and operates Curiosity’s Mastcam. JPL, a division of the California Institute of Technology, Pasadena, manages the Mars Science Laboratory project for NASA’s Science Mission Directorate in Washington and built the Navigation Camera and the rover. Via JPL/NASA
- Read the news release here via JPL/NASA
- More information about the mission is online at: http://www.nasa.gov/msl and http://mars.jpl.nasa.gov/msl/.
- You can follow the mission on Facebook and Twitter at: http://www.facebook.com/marscuriosity and http://www.twitter.com/marscuriosity.
- For more information about the Multi-Mission Image Processing Laboratory, see: http://www-mipl.jpl.nasa.gov/mipex.html.
The last remaining laboratory of scientist, visionary and inventor Nikola Tesla has been sold this week by the Agfa Corporation to Friends of Science East, Inc. dba Tesla Science Center at Wardenclyffe. Tesla Science Center at Wardenclyffe is a 501(c)(3) not-for-profit corporation dedicated to saving and restoring Wardenclyffe, with the aim of turning it into a science learning center and museum.
Wardenclyffe is a 15.69-acre site in Shoreham, New York, where Tesla planned to build his wireless communications and energy transmission tower in the early 1900s. The tower was completed, but only one test was made in July 1903. Shortly after, Tesla suffered some financial reversals, and in 1917, the tower was taken down and sold for scrap metal.
Tesla was one of the most influential scientists of the late 19th and early 20th century. His contributions to commercial electricity, radio, magnetism and the invention of the AC (alternating current) motor helped to usher in the Second Industrial Revolution. He also made contributions to the fields of robotics, remote control, radar, computer science, ballistics, nuclear physics and theoretical physics. Nikola Tesla was one of the most famous scientists of his time in the United States, “but because of his eccentric personality and somewhat unbelievable and bizarre claims about scientific and technological developments, Tesla became disliked and was regarded as a mad scientist.”
Tesla is perhaps best known today for the controversy over the invention of the radio. A debate still rages between Tesla supporters and those who favor Guglielmo Marconi over who truly invented the first radio. According to the US Supreme Court in 1943, it was Tesla.
Newsday reports Friends of Science East, Inc. partnered with web cartoonist Matthew Inman of TheOatmeal.com in August 2012 to host an online crowdfunding project on Indiegogo.com. They raised $1.37 million towards the purchase price of the Wardenclyffe site. The campaign reached the $1 million mark in just over a week, with the help of 33,000 contributors from 108 countries.
“This is a major milestone in our almost two-decade effort to save this historically and scientifically significant site. We have been pursuing this dream with confidence that we would eventually succeed,” said Gene Genova, Vice President of the organization, in a recent statement. “We are very excited to be able to finally set foot on the grounds where Tesla walked and worked.”
Friends of Science East, Inc. isn’t done yet, though.
“Now begin the next important steps in raising the money needed to restore the historic laboratory,” said Mary Daum, treasurer. “We estimate that we will need to raise about $10 million to create a science learning center and museum worthy of Tesla and his legacy. We invite everyone who believes in science education and in recognizing Tesla for his many contributions to society to join in helping to make this dream a reality.”
The organization is planning many fundraising events in the future to raise the capital to restore and run the site as a museum. You can find more information on these events on their website, at the Facebook page, and via Twitter.
Originally, the word “nebula” referred to almost any extended astronomical object (other than planets and comets). The etymological root of “nebula” means “cloud”. As is usual in astronomy, the old terminology survives in modern usage in sometimes confusing ways. We sometimes use the word “nebula” to refer to galaxies, various types of star clusters and various kinds of interstellar dust/gas clouds. More strictly speaking, the word “nebula” should be reserved for gas and dust clouds and not for groups of stars.
In the order in which they appear from top to bottom, left to right, here are the main types with some examples for visual reference:
Planetary Nebulae: Sh2-188
Planetary nebulae are shells of gas thrown out by some stars near the end of their lives. Our Sun will probably produce a planetary nebula in about 5 billion years. They have nothing at all to do with planets; the terminology was invented because they often look a little like planets in small telescopes. A typical planetary nebula is less than one light-year across.
Dark Nebulae: LDN 1622
Dark nebulae are clouds of dust which are simply blocking the light from whatever is behind. They are physically very similar to reflection nebulae; they look different only because of the geometry of the light source, the cloud and the Earth. Dark nebulae are also often seen in conjunction with reflection and emission nebulae. A typical dark nebula is a few hundred light-years across.
Emission Nebulae: NGC 896
Emission nebulae are clouds of high temperature gas. The atoms in the cloud are energized by ultraviolet light from a nearby star and emit radiation as they fall back into lower energy states (in much the same way as a neon light). These nebulae are usually red because the predominant emission line of hydrogen happens to be red (other colors are produced by other atoms, but hydrogen is by far the most abundant). Emission nebulae are usually the sites of recent and ongoing star formation.
Reflection Nebulae: NGC 1333
Reflection nebulae are clouds of dust which are simply reflecting the light of a nearby star or stars. Reflection nebulae are also usually sites of star formation. They are usually blue because the scattering is more efficient for blue light. Reflection nebulae and emission nebulae are often seen together and are sometimes both referred to as diffuse nebulae.
Wave at Saturn
Who wants to be in the world’s biggest class picture?
On July 19, NASA’s Cassini spacecraft will take a picture of Earth from nearly 900 million miles away.
Cassini will start obtaining the Earth part of the mosaic at 2:27 p.m. PDT (5:27 p.m. EDT or 21:27 UTC) and end about 15 minutes later, all while Saturn is eclipsing the sun from Cassini’s point of view.
A simulated view from the Cassini spacecraft at the time it will take the photo
A “RoboBee” and a synthetic insect eye reported in the same week? Sounds like a full-fledged man-made insect is just around the corner!
Researchers at the University of Illinois at Urbana-Champaign built a synthetic compound eye that, instead of focusing on the central field of view like our eyes, can discern depth and shape along its full scope. The resolution is only about that of a rather small ant, but there’s hope it could one day include as many facets as a bee or dragonfly eye. That research is reported in Nature.
And in this week’s Science, Harvard roboticists report the first controlled flight of a coin-size miniature aerial vehicle (MAV) based on the flight physics of insect wings. The construction is based on techniques used to make pop-up books, an odd advance in micro-building that gave them the precision needed to get it off the ground. The wings aren’t as flexible or functional as real insect wings, but it’s the smallest piloted vehicle ever made.
Now we just need to extend that compound eye camera’s sensitivity into the UV range, attach it to the RoboBee, and we’ll finally be able to see flowers like we imagined in this YouTube episode of It’s Okay To Be Smart (and maybe synthetically pollinate them!!)
I, for one, welcome our tiny, buzzing underlings.
Gene survival and death on the human Y chromosome
By M. Wilson Sayres
In humans, genetic females have two X chromosomes and genetic males have one X chromosome and one Y chromosome:
You might have noticed from the cartoon above that the human Y is much smaller than the human X. But, it wasn’t always this way. Ancestrally, the human X and Y were the same size, and had the same genes. Over time, however, the Y has shrunk, but both the X and Y have also gained some genes. To better understand how the X and Y became so different, and how the evolution of the two sex chromosomes is correlated, we asked three main questions:
What has been lost from the Y?
To know which genes were lost, we first had to identify which genes were on the ancestral sex chromosome pair. By comparing the genes on the human X with the genes on the X in other species, we identified a set of genes that were likely on the ancestral X chromosome: 600 in total. Then, by searching the Y chromosome for the relics of all of these genes, we identified three classes of sex-linked genes. We should think of each of the 600 ancestral genes as a pair (with one copy on the X, and one on the Y). All of these pairs have a working copy on the human X. Some pairs have a working (functional) copy on the Y, some have a broken copy on the Y (degraded), and some are missing their Y-copy.
Many genes have been lost from the ancestral Y, but a few persist. So, while some Y-linked genes have survived (I have another paper discussing this), and there have been some unique additions to the Y chromosome, we can see that the Y has lost functional capabilities for 96.83% of the genes that it once shared with the X. Wow!
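The headline figure can be turned around to count survivors. Taking the 600 ancestral gene pairs and the 96.83% loss figure from the text, the arithmetic goes like this (the survivor count here is my back-of-the-envelope calculation, not a number quoted from the paper):

```python
ancestral_pairs = 600    # ancestral X/Y gene pairs identified in the study
lost_fraction = 0.9683   # fraction with no functional Y-linked copy, from the text
survivors = round(ancestral_pairs * (1 - lost_fraction))
print(survivors)  # about 19 genes would still have a working Y copy
```

In other words, of roughly 600 original pairs, only on the order of twenty retain a functional partner on the Y.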
Are there indicators of whether a Y-linked gene will be retained?
We can learn about the evolution of the sex chromosomes by studying differences between the classes of sex-linked genes defined above. Specifically we asked, do features of X-linked genes suggest whether their Y-linked partners are retained or lost? In some cases, yes, they do.
First, we found that human X-linked genes with very few changes across mammals were more likely to have a working Y copy. So, if a gene is important enough to survive over long evolutionary time in roughly the same condition across very different species, then it might be very useful to the organism, so it would be important to have that gene in a working form in both males and females in the same species (human).
Second, we looked at expression. Genes can sometimes be “on” (which we would call expressed) or “off” (not expressed), but more often they fall within a range. It’s like a light with a dimmer switch. The light can be turned on very brightly, but can also be dimmed to a very low level without being “off”. We found that X-linked genes that were highly expressed (bright) were more likely to have a working Y copy. This might mean that, for these genes, the level of “brightness” or expression is important, so that it is highly beneficial for these genes to be working very hard in both females and in males.
Does gene loss on the Y affect the evolution of the X?
Okay, so some features of the X-linked partner might predict whether its Y-linked partner will survive, but is there any feedback from the Y back to the X chromosome? Yes!
Let’s think back to that first picture: females have two “big” X chromosomes, while males have one “big” X and one “little” Y. And, I’ve shown you that the Y chromosome has lost (either because of broken copies, or completely lost) almost 97% of the genes that it once shared with the X. This might lead you to believe that there are more genes expressed in females than in males. But, in many mammals, females silence most of the genes on one of their X chromosomes (X-inactivation), to equalize the dosage of genes expressed between males and females.
This had been hypothesized before, but we showed that the pattern of genes subject to silencing in females among the three classes above is consistent with a process whereby silencing evolves in response to gene loss on the Y chromosome. Moreover, this pattern suggests that some amount of time must pass to allow the signal (that the Y-linked partner is no longer working) to reach the X chromosome before silencing can occur.
The paper is open access, so if you are curious, you can read it on Molecular Biology and Evolution.
[Image credit: NASA/JPL-Caltech]
“NASA’s Cassini spacecraft, now exploring Saturn, will take a picture of our home planet from a distance of hundreds of millions of miles on July 19. NASA is inviting the public to help acknowledge the historic interplanetary portrait as it is being taken.
Earth will appear as a small, pale blue dot between the rings of Saturn in the image, which will be part of a mosaic, or multi-image portrait, of the Saturn system Cassini is composing.
“While Earth will be only about a pixel in size from Cassini’s vantage point 898 million miles [1.44 billion kilometers] away, the team is looking forward to giving the world a chance to see what their home looks like from Saturn,” said Linda Spilker, Cassini project scientist at NASA’s Jet Propulsion Laboratory in Pasadena, Calif. “We hope you’ll join us in waving at Saturn from Earth, so we can commemorate this special opportunity.”
Cassini will start obtaining the Earth part of the mosaic at 2:27 p.m. PDT (5:27 p.m. EDT or 21:27 UTC) and end about 15 minutes later, all while Saturn is eclipsing the sun from Cassini’s point of view. The spacecraft’s unique vantage point in Saturn’s shadow will provide a special scientific opportunity to look at the planet’s rings. At the time of the photo, North America and part of the Atlantic Ocean will be in sunlight.
Unlike two previous Cassini eclipse mosaics of the Saturn system in 2006, which captured Earth, and another in 2012, the July 19 image will be the first to capture the Saturn system with Earth in natural color, as human eyes would see it. It also will be the first to capture Earth and its moon with Cassini’s highest-resolution camera. The probe’s position will allow it to turn its cameras in the direction of the sun, where Earth will be, without damaging the spacecraft’s sensitive detectors.”
To learn more about the public outreach activities associated with the taking of the image, visit: http://saturn.jpl.nasa.gov/waveatsaturn .
For more information about Cassini, visit http://www.nasa.gov/cassini and http://saturn.jpl.nasa.gov .
Feynman’s double-slit experiment brought to life
The precise methodology of Richard Feynman’s famous double-slit thought-experiment – a cornerstone of quantum mechanics that showed how electrons behave as both a particle and a wave – has been followed in full for the very first time.
Although the particle-wave duality of electrons has been demonstrated in a number of different ways since Feynman popularised the idea in 1965, none of the experiments have managed to fully replicate the methodology set out in Volume 3 of Feynman’s famous Lectures on Physics.
“The technology to do this experiment has been around for about two decades; however, to do a nice data recording of electrons takes some serious effort and has taken us three years,” said lead author of the study Professor Herman Batelaan from the University of Nebraska-Lincoln.
“Previous double-slit experiments have successfully demonstrated the mysterious properties of electrons, but none have done so using Feynman’s methodology, specifically the opening and closing of both slits at will and the ability to detect electrons one at a time.
“Akira Tonomura’s brilliant experiment used a thin, charged wire to split electrons and bring them back together again, instead of the two slits in a wall proposed by Feynman. To the best of my knowledge, the experiments by Giulio Pozzi were the first to use nano-fabricated slits in a wall; however, the slits were covered up by stuffing them with material, so they could not be opened and closed automatically.”
In their experiments, which have been published today, Thursday 14 March, in the Institute of Physics and German Physical Society’s New Journal of Physics, Batelaan and his team, along with colleagues at the Perimeter Institute for Theoretical Physics, created a modern representation of Feynman’s experiment by directing an electron beam, capable of firing individual electrons, at a wall made of a gold-coated silicon membrane.
The wall had two 62-nm-wide slits in it with a centre-to-centre separation of 272 nm. A 4.5 µm wide and 10 µm tall moveable mask, controlled by a piezoelectric actuator, was placed behind the wall and slid back and forth to cover the slits.
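With the slit geometry above, the scale of the interference pattern follows from the electron’s de Broglie wavelength. The beam energy is not stated in this article, so the 600 eV used below is an assumption for illustration; the slit separation is the 272 nm from the text:

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron rest mass, kg
q_e = 1.602176634e-19   # elementary charge, C

def electron_wavelength(energy_eV):
    """Non-relativistic de Broglie wavelength of an electron with the given kinetic energy."""
    momentum = math.sqrt(2.0 * m_e * energy_eV * q_e)  # p = sqrt(2 m E)
    return h / momentum                                # lambda = h / p

beam_energy_eV = 600.0                 # ASSUMED beam energy, not from the article
lam = electron_wavelength(beam_energy_eV)   # ~5.0e-11 m, i.e. about 50 picometres
slit_separation = 272e-9               # centre-to-centre separation from the text, m
fringe_angle = lam / slit_separation   # small-angle fringe spacing, radians
print(f"wavelength ~ {lam*1e12:.1f} pm, fringe spacing ~ {fringe_angle*1e6:.0f} microradians")
```

A wavelength thousands of times smaller than the slit separation is why the fringes are only fractions of a milliradian apart, and why detecting them one electron at a time is such delicate work.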
“We’ve created an experiment where both slits can be mechanically opened and closed at will and, most importantly, combined this with the capability of detecting one electron at a time.
“It is our task to turn every stone when it comes to the most fundamental experiments that one can do. We have done exactly that with Feynman’s famous thought-experiment and have been able to illustrate the key feature of quantum mechanics,” continued Batelaan.
Feynman’s double-slit experiment
In Feynman’s double-slit thought-experiment, some material is directed at a wall that has two small slits that can be opened and closed at will – some of the material gets blocked and some passes through the slits, depending on which ones are open.
Based on the pattern that is detected beyond the wall on a backstop – which is fitted with a detector – one can discern whether the material coming through behaves as either a wave or particle.
When particles are fired at the wall with both slits open, they are more likely to hit the backstop in one particular area, whereas waves interfere with each other and hit the backstop at a number of different points with differing strength, creating what is known as an interference pattern.
In 1965, Feynman popularised the prediction that electrons – historically thought to be particles – would actually produce the pattern of a wave in the double-slit experiment.
Feynman highlighted that, unlike sound waves and water waves, electrons fired at the wall one at a time still produce an interference pattern. He went on to say that this phenomenon “has in it the heart of quantum physics [but] in reality, it contains the only mystery.”
Louisiana’s third largest industry is tourism, and the state generates millions of dollars each year from conventions. After the Louisiana Science Education Act was passed, the Society for Integrative and Comparative Biology cancelled a scheduled convention in New Orleans in 2011, costing the city an estimated $2.9m. The society launched a boycott of Louisiana, and the state has become less competitive at attracting certain conventions because of its anti-science stance.
Thankfully, the boycott of New Orleans has ended, because the New Orleans city council has endorsed a repeal of the Louisiana Science Education Act and the Orleans Parish School Board banned the teaching of creationism in its schools. The boycott on the rest of the state still remains, however.
Thousands of metres below the sea, trapped in the fossilised remains of ancient bacteria, exist the iron remnants of a supernova explosion that happened millions of years ago. An imprint, here on Earth, of a dying star.
Iron-60, an isotope of iron created only in supernovae, has been found in fossilised seabed bacteria. The preliminary findings, announced by Shawn Bishop of the Technical University of Munich at a 14 April meeting of the American Physical Society in Colorado, may be the first time that a specific star’s debris has been found in our fossil record.
Iron-60’s half-life is relatively short compared to the age of our solar system, so traces of the isotope on Earth suggest a direct interaction with a supernova in the planet’s history. The researchers searched for the isotope in fossils from seabed samples between 1.7 million and 3.3 million years old. They likely found traces of the isotope in fossils around 2.2 million years old.
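How much of the original iron-60 could still be around after 2.2 million years? The article doesn’t give the half-life, so the value below (~2.6 million years, close to modern measurements) is an assumption for illustration; only the 2.2-million-year age comes from the text:

```python
half_life_myr = 2.6   # ASSUMED Fe-60 half-life in Myr; not stated in the article
age_myr = 2.2         # age of the iron-bearing fossil layer, from the text
remaining = 0.5 ** (age_myr / half_life_myr)  # standard radioactive-decay law
print(f"~{remaining:.0%} of the original iron-60 would still be undecayed")
```

Roughly half of the deposited isotope should remain, which is what makes a 2.2-million-year-old supernova signal detectable at all; over the solar system’s 4.6-billion-year lifetime, by contrast, any primordial iron-60 would be long gone.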
The bacteria containing the iron-60 are magnetotactic: strange organisms that live in the seabed and align themselves with the Earth’s magnetic field. They extract iron from the water and sediment around them and create iron oxide crystals that are then preserved in the fossil record.
“For me, philosophically, the charm is that this is sitting in the fossil record of our planet,” said Bishop in a Nature.com report. The isotope had previously been discovered in seabed samples, but not in the fossil record.
“We are all, as Carl Sagan put it, stardust,” Bishop told Wired.co.uk. “[We have now] likely discovered, within crystal nano-fossils left behind by primitive bacteria, […] still-live radioactive atoms that can only have been synthesized within the same kind of nuclear furnace — an exploding star — that forged the elements from which all life on Earth is made. The cycle comes full-circle.”
It has been estimated that the supernova happened around 2.2 million years ago, and that the stream of cosmic rays would have had an effect on the Earth’s atmosphere by increasing cloud cover. The supernova responsible for depositing the iron-60 has not yet been found, but possible suspects have been identified in the nearby Scorpius-Centaurus association.
This isn’t the first time that distant astronomical events have made an impact on Earth. In 2012, researchers found a surplus of radioactive atoms in Japanese trees, hinting at a violent cosmic event around 1,200 years ago.
Image Credit: Casey Reed/NASA.
“Talk about a celestial mood swing: Scientists using the SCORPIO camera of the Byurakan Astrophysical Observatory recently watched as a low luminosity star suddenly burst to life in an extraordinarily short amount of time — becoming 15 times brighter in less than three minutes.
Star WX UMa, which is relatively close by in the Ursa Major constellation, is about 15.6 light-years from Earth and is part of a binary system. It’s a flare star — a normally subdued low luminosity object that occasionally and unpredictably boosts its brightness and heat in a matter of seconds. But it’s an effect that doesn’t last long. The stars return to their normal state in about 10 minutes.
Fascinatingly, the effect is so dramatic that the classification of the star literally changes within a few seconds. In this case, WX UMa temporarily transformed from spectral type M to B. Its temperature went from about 2,800 kelvin (K) to six or seven times that — somewhere between 10,000 and 33,000 K.
These flares happen when instability within the plasma of the star causes turbulence in its magnetic field.
“A magnetic reconnection then occurs, a conversion of energy from the magnetic field into kinetic energy, in order to recover the stability of the flow, much like what happens in an electric discharge,” said a researcher when speaking to SINC. This kinetic energy transforms into thermal energy in the upper layers of the atmosphere and the star’s corona, driving up its temperature and brightness.”
Read the entire study at Astrophysics: Spectral observations of flare stars in the neighborhood of the sun.
He Helped Discover Evolution, And Then Became Extinct
Ask most folks who came up with the theory of evolution, and they’ll tell you it was Charles Darwin.
In fact, Alfred Russel Wallace, another British naturalist, was a co-discoverer of the theory — though Darwin has gotten most of the credit. Wallace died 100 years ago today.
Where do we come from? This is the sort of big question that keeps people up at night (and keeps NASA funded). If you are a star, however, the answer is easy: you come from a big cloud of gas. As astronomers, if we want to understand what controls the properties of stars — what makes them big, small, clustered, or isolated — we can start by looking at the gas that will make them.
This paper presents a detailed study of the gas in M51, the Whirlpool galaxy. This system is actually two galaxies, but this paper focuses on the larger, main spiral (NGC 5194) in this interacting pair. This galaxy is relatively close by (20 million light years away), massive (~150 billion solar masses), and quite well-studied: astronomers have looked at it in wavelengths from radio to near-infrared, optical and ultraviolet. The combined resolution and sensitivity of these new millimeter observations (the J=1-0 rotational transition of the carbon monoxide molecule) allow the authors to detect for the first time individual molecular clouds in this galaxy, the objects from which stars and star clusters are born. Below is an image of M51 from this study showing the gas surface density (the amount of gas along our line of sight) from small amounts (dark blue) to large amounts (bright pink), all representing the fuel required to make the next generation of stars in this galaxy.
So what does it take to make an image like this? ALMA? Not quite. M51, with a declination of +47 degrees, is a galaxy that ALMA (the Atacama Large Millimeter Array, located in Chile at a latitude of 23 degrees South) will find very difficult to observe. Instead, the authors used the Plateau de Bure Interferometer (PdBI) and the IRAM 30m radio telescope to detect gas clouds as small as 40 parsecs across. The image above is a mosaic combining 60 pointings of PdBI with IRAM observations over the same region. But isn’t one telescope enough for the job of observing M51? Why take the time to observe it twice?
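A quick back-of-the-envelope check shows why detecting 40-parsec clouds at M51's distance demands an interferometer. The numbers come from the article; the small-angle arithmetic below is an illustrative sketch:

```python
import math

# Rough sketch: angular size of a 40 pc cloud at M51's distance
# (~20 million light years, per the article), via the small-angle
# approximation theta = size / distance.

PC_PER_LY = 1 / 3.2616            # parsecs per light year

distance_pc = 20e6 * PC_PER_LY    # ~6 million parsecs
size_pc = 40.0
theta_rad = size_pc / distance_pc
theta_arcsec = math.degrees(theta_rad) * 3600

print(f"{theta_arcsec:.1f} arcsec")  # roughly an arcsecond
```

An angular scale of about an arcsecond is far below what a single millimeter dish can resolve, which is why the authors turned to the PdBI.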
The answer is that interferometers (arrays of two or more telescopes which work together to act like a telescope with a diameter equal to the separation between antennas) by themselves have a big problem with big objects like M51. Although interferometers give us the advantage of higher resolution, that is not the whole story: not only does the antenna separation determine the resolution, it also sets the size scales you are sensitive to, acting like a high-pass filter for spatial frequencies. As shown in the figure below, a pair of antennas in an interferometer resolves ‘fringes’ on the sky representing the resolution of that antenna pair (a function of the frequency of the observations and the spacing of the antennas). Different spacings and orientations from the combinations of many antenna-pair fringes contribute to making your beam – the tiny white dot in the bottom left corner of the above image, and the interferometric equivalent of the point-spread function (PSF). The problem is that flux from structures larger than the largest fringe that goes into making this beam will be lost. Since the shortest antenna spacing yields the largest fringe, and the antenna spacing cannot be smaller than the size of the telescope (get too close and the antennas will start bumping into and blocking each other), there is a maximum size scale from which you can detect flux.
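The maximum recoverable scale set by the shortest baseline can be sketched numerically. CO(1-0) really is emitted near 115 GHz, but the baseline value below is an illustrative assumption, not the actual PdBI configuration:

```python
import math

# Sketch of the 'high-pass filter' point: an interferometer's shortest
# antenna spacing (baseline) sets the largest angular scale it can recover,
# roughly theta_max ~ lambda / B_min.

def largest_scale_arcsec(wavelength_m, min_baseline_m):
    """Approximate largest recoverable angular scale, in arcseconds."""
    theta_rad = wavelength_m / min_baseline_m
    return math.degrees(theta_rad) * 3600

# CO(1-0) is emitted at ~115 GHz, i.e. a wavelength of ~2.6 mm.
wavelength = 3e8 / 115e9

# 24 m is an assumed shortest spacing for illustration only.
print(f"{largest_scale_arcsec(wavelength, 24.0):.0f} arcsec")
```

Structures larger than a few tens of arcseconds — and M51 spans several arcminutes — are simply invisible to such an array on its own.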
How can we get that flux back? Use a single-dish telescope! These telescopes are sensitive to flux on all size scales larger than the resolution of their dish. By combining data from an interferometer with single-dish data, you can recover all of the flux from an object and still observe it at high resolution. This synergy is why the most effective radio and millimeter interferometers all have a single-dish buddy: the Very Large Array (VLA) has the Green Bank Telescope (GBT), the PdBI (which took these images) has the IRAM 30m, and ALMA will have both a compact array and several ‘total power’ single dishes.
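A toy version of this combination (often called ‘feathering’) can be sketched in Fourier space: take large spatial scales from the single dish and small scales from the interferometer. This is an illustrative toy, not the actual algorithm used by any observatory's software, and the crossover frequency is an arbitrary assumption:

```python
import numpy as np

# Toy 'feathering' sketch: blend an interferometer image (missing large
# scales) with a single-dish image (missing small scales) in Fourier space.

def feather(interferometer_img, single_dish_img, crossover=0.2):
    """Low spatial frequencies from the single dish, high ones from the
    interferometer; `crossover` (cycles/pixel) is an assumed blend point."""
    ft_int = np.fft.fft2(interferometer_img)
    ft_sd = np.fft.fft2(single_dish_img)
    # Radial spatial-frequency grid for each Fourier pixel
    fy = np.fft.fftfreq(interferometer_img.shape[0])[:, None]
    fx = np.fft.fftfreq(interferometer_img.shape[1])[None, :]
    r = np.hypot(fx, fy)
    weight = np.clip(r / crossover, 0.0, 1.0)  # 0 at large scales, 1 at small
    combined = weight * ft_int + (1.0 - weight) * ft_sd
    return np.fft.ifft2(combined).real
```

Real pipelines must also match the two datasets' calibration and beam shapes before blending, which this toy skips entirely.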
So now that you have a high-resolution picture of almost all of the gas clouds in M51, what do you do with it? This paper focuses on comparing (correlating) the location and amount of this gas with other tracers of galaxy properties. This includes tracers of different phases of the interstellar medium (the ISM, or gas in a galaxy at all temperatures, from plasma to neutral to molecular), tracers of star formation, and tracers of the existing stellar populations.