Thursday, 27 August 2015
I've been on holiday in Spain for the last week and internet access has been patchy, so I'm taking the rest of the week off. I'm afraid you'll have to find your own science news for two days. Happy hunting!
Friday, 21 August 2015
Einstein Does It Again
It is often said that scientists are closed-minded, that they will do anything to uphold the status quo. Anyone who says this has clearly never met a scientist. I promise you, every researcher out there wants to be the one to upset the apple cart and come up with the kind of paradigm shift that leads to the immediate recall of all the textbooks. Whilst doing an experiment that repeats or confirms a previous finding is an essential part of the scientific method and needs to be done, it won't set the world afire.
This is the stage many physicists are at when it comes to the Standard Model. The Standard Model of physics is our best description so far of how the components of matter all come together. It deals with all the sub-atomic particles we know about, their corresponding anti-particles and the four fundamental forces of nature. It works supremely well and has been verified by multiple, converging lines of evidence from different fields of physics. Much of the testing at the Large Hadron Collider has reaffirmed the Standard Model, and many there are genuinely disappointed not to have yet discovered any 'new physics' with the most complicated machine ever built by man. The Standard Model has been broadly in place for several decades now, and ever since its inception physicists around the world have been desperately trying to break it. This week another group failed.
In an open-access letter to Nature, researchers from Japan and Germany report that they have once again shown that protons and anti-protons are completely identical to each other in every way except their opposite charge. They used a device known as a Penning trap to carefully 'weigh' the protons and anti-protons in the most accurate experiment of its kind to date. They were hoping to find a slight difference that would be a deviation from the Model and open up new avenues of inquiry. Alas, after thousands of measurement cycles they found the two to be identical to within 69 parts per trillion. They then repeated the analysis to see if gravity affected the matter and anti-matter in different ways. Again, no dice. Whilst it's lovely to, once again, show how much of a boffin Einstein was and how great relativity is, it would have been incredibly exciting to have shown a crack in the edifice and start hammering away at it.
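As a rough sketch of how a Penning trap 'weighs' a particle: the trap's magnetic field makes a charged particle circle at its cyclotron frequency, which depends only on the charge-to-mass ratio and the field strength, and the field cancels when you take the ratio of two species measured in the same trap. The numbers below are illustrative only (the field strength is my assumption, not the paper's, and the published measurement actually used negatively charged hydrogen ions as proton stand-ins), but they show the scale of precision involved:

```python
import math

E = 1.602176634e-19    # elementary charge, C (CODATA)
M_P = 1.67262192e-27   # proton mass, kg (CODATA, rounded)
B = 1.95               # illustrative trap field in tesla (assumed)

def cyclotron_frequency(q, m, b):
    """f_c = |q|B / (2*pi*m): how fast a charge circles in a magnetic field."""
    return abs(q) * b / (2 * math.pi * m)

f_p = cyclotron_frequency(E, M_P, B)
# A hypothetical anti-proton whose mass differs by the experiment's
# sensitivity limit of 69 parts per trillion:
f_pbar = cyclotron_frequency(E, M_P * (1 + 69e-12), B)

print(f"proton cyclotron frequency: {f_p:.6e} Hz")
print(f"shift from a 69-ppt mass difference: {f_p - f_pbar:.3e} Hz")
# B cancels in the ratio f_pbar / f_p, which is why comparing two species
# in the same trap gives such a clean charge-to-mass comparison.
```

Run it and the detectable frequency shift comes out at a few millihertz on a signal of tens of megahertz, which gives a feel for why this is the most accurate experiment of its kind.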
[Image: The Standard Model of physics]
Thursday, 20 August 2015
P-Hacking
Today we're going to get a bit meta. Just as important as science itself, if not more so, is the science of science. How do we know whether the papers being published every day of the week are of a high enough standard to be trusted and to take human knowledge forward? What proportion of them will turn out to be wrong in time? How much of that is just the natural progression of knowledge, and how much is due to shoddy work that should have been rooted out pre-publication?
I think science has a problem; not a fatal one, but one that it needs to address. It is increasingly the case that journals are only interested in publishing the papers that will get the most headlines and/or the most citations, thereby increasing that journal's Impact Factor. Researchers will naturally want to publish in the journals with the highest Impact Factor and may, on occasion, massage things to help ensure they do so. There is increasingly little space for papers with a negative result or for replications of previous experiments, both of which are absolutely vital to science but are neither sexy nor headline-grabbing.
The easiest way to get published is to have a statistically significant P-value. Broadly speaking, the P-value is the probability that a result at least as extreme as the one observed could have arisen by random chance alone, rather than from whatever effect you might be testing for. The smaller the P-value, the harder the result is to dismiss as a fluke (which is not quite the same thing as the hypothesis being likely to be correct, a distinction that gets mangled surprisingly often). But the problem is that there are lots of different ways to generate a P-value. Different data sets suit different types of statistical analysis, and within each analysis there will be certain parameters and limits to set. How and where these limits are set can give very different results, perhaps pushing a negative data set just over the margin into significance.
I should say that this can all be done completely innocently. Researchers won't be malevolently scheming to con the world into thinking they have a real effect when they don't. All of the little decisions that go into designing a research study, of any kind, can be referred to as Researcher Degrees of Freedom (RDFs). Multiple studies have now shown that the more RDFs you have, the more likely significance is to be found in the analysis. Decisions about when to stop collecting data, which observations to exclude, which comparisons to make, which data sets to combine: these all have an impact on the final results. The phenomenon has come to be known as P-Hacking.
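To make that concrete, here is a minimal simulation sketch in Python (my own illustration, not from any paper; the 'significance' check is a rough normal approximation rather than a proper t-test). Both groups are pure noise with no real effect, yet allowing just two extra analysis choices and keeping whichever one 'works' visibly inflates the false-positive rate:

```python
import random
import statistics

random.seed(42)

def significant(a, b, crit=2.0):
    """Rough two-sample test: |Welch t| > ~2.0 approximates p < 0.05
    for samples of this size (a normal approximation, good enough here)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    t = (statistics.mean(a) - statistics.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)
    return abs(t) > crit

def trim(xs):
    """'Exclude outliers': drop points more than 2 standard deviations out."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [x for x in xs if abs(x - m) <= 2 * s]

N_EXPERIMENTS, N = 10_000, 30
honest = hacked = 0
for _ in range(N_EXPERIMENTS):
    # The null is true by construction: both groups are the same pure noise.
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    honest += significant(a, b)                         # one pre-registered test
    hacked += (significant(a, b)                        # ...or keep trying:
               or significant(trim(a), trim(b))         # drop 'outliers'
               or significant(a[:N // 2], b[:N // 2]))  # peek at interim data
print(f"pre-registered false-positive rate: {honest / N_EXPERIMENTS:.1%}")  # ~5%
print(f"p-hacked false-positive rate:       {hacked / N_EXPERIMENTS:.1%}")  # well above 5%
```

Note that nothing in the 'hacked' arm is fraudulent; each individual choice is defensible on its own, which is exactly what makes the problem so insidious.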
The reason I'm explaining this today is an open-access article published last week in PLOS ONE, in which researchers from the US Department of Health and Human Services detail an interesting but slightly worrying observation. They looked at every large study of cardiovascular disease conducted at the National Heart, Lung and Blood Institute between 1970 and 2012, defining 'large' as costing more than $500,000 per year to run. There were 55 such trials. These were then split into those that took place before (30) and after (25) the date in 2000 when it became compulsory to register your clinical trial, and specify exactly what it was going to do, before publication.
17 out of 30 studies (57%) published before 2000 had a positive result, but only 2 out of 25 (8%) had a positive result after 2000. No other factor examined, such as corporate co-sponsorship of the work or whether the trial compared against a placebo or an active comparator, made any difference to the figures. This one simple measure, forcing scientists to register exactly what the parameters of their study would be before publication, seems to have led to a sevenfold decrease in the proportion of positive trials.
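For the record, here is the arithmetic behind that 'sevenfold' figure, re-run from the post's own numbers:

```python
# Positive-result rates before and after mandatory trial registration (2000),
# using the counts quoted in the paragraph above.
before_pos, before_total = 17, 30
after_pos, after_total = 2, 25

rate_before = before_pos / before_total   # 0.567
rate_after = after_pos / after_total      # 0.080

print(f"positive before 2000: {rate_before:.0%}")              # 57%
print(f"positive after 2000:  {rate_after:.0%}")               # 8%
print(f"fold decrease:        {rate_before / rate_after:.1f}")  # ~7.1
```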
This is, of course, just one study, and one study is never proof of anything. This needs to be replicated in multiple data sets by different groups to see if the effect is real. If it is, it could have profound implications for randomised controlled studies the world over. To be clear, if a bad scientific article is published it will get found out eventually. The scientific method and the peer review process are not fundamentally broken, but a lot of people might waste a lot of time and research money on a dead end, and in these straitened times, when the science budget in the UK is under threat of a 40% cut, we cannot afford as a community, as a people, to be led a merry dance by effects that were never there in the first place.
Wednesday, 19 August 2015
Return of the Cougars
An abstract presented at the 100th Ecological Science at the Frontier conference analyses the case for reintroducing cougars to the eastern United States. Puma concolor still roams the western United States but has been extinct in the east for some 70 years; this analysis specifically looked at the financial pros and cons of reintroducing the species to its former range.
The cougars' main food source would be white-tailed deer, and the financial benefit of their reintroduction comes largely from there being fewer deer. Given that white-tailed deer are involved in hundreds of thousands of collisions with vehicles every year, causing 29,000 human injuries and 211 human deaths at a cost of $1.1 billion annually, there seems to be a clear area for potential benefit. Unchecked deer populations have also led to ever greater damage to crops.
The researchers, from the University of Alaska, projected that over 50 years a successful reintroduction would prevent 53,000 injuries and 384 fatalities and save $4.4 billion, as a result of lowering deer densities by about 22%. The deer themselves could also benefit: cougars are more likely to kill older and/or weaker animals, so there would be fewer deer but the population as a whole would be stronger.
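As a quick sanity check on those projections, here is my own back-of-envelope arithmetic using only the figures quoted above (not the study's actual population model):

```python
# Figures from the post: current annual deer-vehicle toll, and the study's
# projected savings over a 50-year horizon.
injuries_per_year, deaths_per_year = 29_000, 211
saved_injuries, saved_deaths = 53_000, 384
horizon = 50  # years

# Average fraction of collisions avoided that the projections imply.
print(f"implied injury reduction: {saved_injuries / (injuries_per_year * horizon):.1%}")  # ~3.7%
print(f"implied death reduction:  {saved_deaths / (deaths_per_year * horizon):.1%}")      # ~3.6%
# Both come out well under the eventual 22% drop in deer density, which makes
# sense: a cougar population would take decades to build up, so the savings
# are heavily back-loaded across the 50 years.
```

The two implied figures agree with each other almost exactly, which is a reassuring sign that the numbers are internally consistent.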
Financially, then, the benefits seem clear and we should start buying cougars train tickets to Boston forthwith. But will the public go for the idea? The people of the eastern US are very used to being at the top of the food chain; they might not care for the idea of deliberately putting something near their house which could eat their dog, or their child, or themselves. But as with so many things (war, disease, Tory Governments) the fear of something tends to be far greater than the danger posed by the thing itself. Below I have composed a list of things that kill more people in a year in the US than cougars do, in no particular order:
- Honeydew melons
- Being in a hot car
- Family pets
- High school shootings
- Falling out of bed
- Autoerotic asphyxiation
- Ants
- Vending machines
So, in summary, I say go for it.
[Image used with permission]
Tuesday, 18 August 2015
El Condor Pasa
A novel method is being used to help sustain the wild population of Californian condors: electroshock therapy. No, that isn't a typo. Twice per year all of the 150 or so remaining birds are captured and given electric shocks. It isn't just for kicks, however; over the past decade the biggest single killer of North America's largest bird was electrocution after flying into power lines. With a 3-metre wingspan they are more than capable of touching more than one wire at once and being killed; touching just one at a time is relatively safe, as the electricity has nowhere to go. Since the introduction of the training, death by electrocution has dropped from 66% to just 18%.
The next biggest killer is lead poisoning, thought to be a result of eating carcasses containing lead shot left behind by human hunters. The condors seem to be particularly susceptible to lead, and so on their twice-yearly grounding they are checked over and, if necessary, operated on to remove shot. Apparently a ban on lead shot in the area has not led to a reduction in mortality.
This all sounds like quite an arduous experience for the birds themselves, but it's probably not as bad as it seems. In the 1980s there were only a couple of dozen birds left in the wild, at which point they were all captured and brought into captivity. Since then there has been a series of reintroductions back into the wild, and those releases are the genesis of today's 150. As these birds were all born and raised in captivity, being handled is more like visiting home than being abducted by aliens.
The program, as reported in Biological Conservation, is working. In the past 15 years the annual mortality rate has fallen from 38% to just 5.4%, a remarkable achievement. Will this be a large enough population to sustain a genetically diverse species into the future? Only time will tell. I recently read an analysis by a statistical geneticist which concluded that, even given perfect conditions and full control over who breeds with whom, it is very tough indeed for a population smaller than 160 individuals to survive. But we'll never know if we don't try.
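To see why that mortality drop matters so much, here's a toy calculation of my own (illustrative only; it ignores breeding entirely and treats the rates as constant):

```python
# Compound a flock of 150 birds forward 15 years under the old and new
# annual mortality rates, with no chicks added, to compare trajectories.
population = 150
for label, mortality in [("old rate (38%)", 0.38), ("new rate (5.4%)", 0.054)]:
    survivors = population
    for _ in range(15):
        survivors *= (1 - mortality)
    print(f"{label}: {survivors:.0f} of {population} birds left after 15 years")
```

At the old rate the flock is effectively extinct within 15 years even before you account for anything else; at the new rate around 65 birds would still be flying with zero breeding, which is why the programme's managers can now talk about sustainability at all.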
[Image: A Californian condor complete with tracking tags in both wings]