Politicians and “experts” scream and shout that testing and isolation are the way to bring the COVID-19 pandemic to a halt, but everything else they have told us about the disease has been absolute bollocks, so why should this be any different?
Well surprise, surprise, it isn’t any different. The idea that testing everybody ten times a day (OK, I might be exaggerating for effect,) will do any good is just another diversionary tactic designed to distract us from the sure and certain knowledge that the establishment, i.e. the politicians, the academic community and the medical professions, haven’t a clue how to deal with this disease. However, in saying that we are allowing that the COVID-19 coronavirus actually exists, though that is not proven. For a pathogen to be recognised as the cause of a disease it must meet all of a set of criteria known as the Koch postulates. COVID-19, or The Wuhan Virus, actually meets none.
And to top that off, the tests being used to identify who is infected have been shown by independent researchers (i.e. not funded by governments, Big Pharma corporations or The United Nations,) to be not fit for purpose.
At the media briefing on COVID-19 on March 16, 2020, World Health Organisation (WHO) Director General Dr Tedros Adhanom Ghebreyesus said:
“We have a simple message for all countries: test, test, test.”
The message was spread through headlines around the world, for instance by CNN, Reuters and the BBC. Germany’s heute journal — one of the most important news magazines on German television — was still repeating the mantra of the corona dogma to its audience with the admonishing words:
“Test, test, test — that is the credo at the moment, and it is the only way to really understand how much the coronavirus is spreading.”
This indicates that belief in the validity of the PCR tests is so strong that it equals a religious dogma that tolerates virtually no contradiction. But religions are about faith and not demonstrable facts. Were we still under the rule of The Holy Roman Empire, heretics who questioned this narrative would be tortured and burned.
It is certainly significant that Kary Mullis, inventor of the Polymerase Chain Reaction (PCR) technology, was one of the voices dissenting from that dogma before his recent death (which was not connected to COVID-19, we understand.) His invention earned him the Nobel Prize in Chemistry in 1993.
But while the WHO and other health bureaucracies are hailing PCR as the saviour of humankind, the eminent biochemist himself regarded PCR as an inappropriate tool for detecting a viral infection. The intended use of PCR was, and still is, as a manufacturing technique, able to replicate DNA sequences millions and billions of times over, not as a diagnostic tool to detect viruses.
Gina Kolata, in a 2007 New York Times article, “Faith in Quick Test Leads to Epidemic That Wasn’t”, describes declaring pandemics on the basis of PCR tests as bad science.
It is also worth mentioning that the PCR tests used to identify so-called COVID-19 patients, presumably infected by what is called SARS-CoV-2, are unreliable because the results show the infection does not meet any of the Koch postulates (sic).
This is a fundamental point. Tests need to be evaluated to determine their accuracy — strictly speaking their “sensitivity” and “specificity” — by comparison with an established benchmark, meaning the most accurate method available.
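For readers unfamiliar with the jargon, the two measures are easy to compute once a benchmark classification exists. The counts below are invented purely for illustration; they are not real COVID-19 test data:

```python
# Sensitivity and specificity of a diagnostic test, computed from a
# confusion matrix judged against a benchmark ("gold standard").
# All counts here are hypothetical, purely for illustration.

def sensitivity(true_pos, false_neg):
    """Proportion of genuinely infected people the test catches."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of genuinely uninfected people the test clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical evaluation: 1,000 people, 100 truly infected per the benchmark.
tp, fn = 90, 10      # of the 100 infected: 90 test positive, 10 test negative
tn, fp = 855, 45     # of the 900 uninfected: 855 test negative, 45 test positive

print(f"sensitivity = {sensitivity(tp, fn):.2%}")   # 90.00%
print(f"specificity = {specificity(tn, fp):.2%}")   # 95.00%
```

The point is that both numbers can only be tallied against the benchmark column: without a reliable gold standard there is nothing trustworthy to count the true and false results against.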
Australian infectious diseases specialist Sanjaya Senanayake, for example, stated in an ABC TV interview in an answer to the question “How accurate is the [COVID-19] testing?”:
“If we had a new test for picking up [the bacterium] golden staph in blood, we’ve already got blood cultures, that’s our gold standard we’ve been using for decades, and we could match this new test against that. But for COVID-19 we don’t have a gold standard test.”
Jessica C. Watson of Bristol University UK confirms this in her paper “Interpreting a COVID-19 test result”, published recently in The British Medical Journal. Dr Watson writes that there is a “lack of a clear-cut ‘gold-standard’ for COVID-19 testing.”
But instead of classifying the tests as unsuitable for SARS-CoV-2 detection and COVID-19 diagnosis, or instead of pointing out that only a virus, proven through isolation and purification, can be a solid gold standard, Watson claims in all seriousness that, “pragmatically” COVID-19 diagnosis itself “may be the best available ‘gold standard’.” But this is not scientifically sound.
Apart from the absurdity of taking the test itself as part of the benchmark for evaluating the PCR test, there are no distinctive specific symptoms for COVID-19, as even people such as Thomas Löscher, former head of the Department of Infection and Tropical Medicine at the University of Munich, have acknowledged. Recently I have read that COVID-19 is a respiratory disease that is far worse than pneumonia, that it is a disease of the blood vessels, that it causes brain damage, affects the liver, kidneys and other vital organs, and that it damages the digestive tract. Maybe the obvious confusion among medical professionals arises because people with a range of pre-existing conditions that take in all these symptoms are particularly vulnerable to COVID-19.
And if there are no distinctive specific symptoms for COVID-19, COVID-19 diagnosis cannot be suitable for serving as a valid gold standard.
This is a post from a U.S. economic research website criticizing the ludicrously over-the-top actions taken to limit the effects of the COVID-19 coronavirus pandemic. The responses were of course based on the warnings of “scientists” who, rather than using empirical evidence, relied on output from mathematical models to predict how the pandemic would unfold.
Needless to say the predictions were hopelessly wrong …
As a site focused on economics, AIER would rather have stayed away from commentary on diseases and their mitigation. In normal times, we would have.
The archives of AIER dating back to 1933 show that we had no comments on the polio epidemic (1948-1951), the Asian Flu (1957-59), the Hong Kong flu (1968-69), the Avian bird flu (2006), or the Swine flu pandemic of 2009, which was a strain most like 1918 and therefore, one might suppose, would have caused panic but did not.
We had nothing to say because disease mitigation is a job for medical professionals, not economists and certainly not politicians.
The problem is that this time, the disease mitigators (some of them, the ones in power and with the ear of politicians) didn’t stay out of economics. Indeed, their plans for mitigation trampled all over commerce, life, and the freedoms that are necessary to make it function. For a few months in 2020, the presumptuous model-building disease mitigators became central planners, overriding the wisdom of not only medical professionals but also economists, philosophers, political scientists, historians, and everyone else including legislatures and voters.
Our first piece on the topic ran January 27. The focus was on the quarantine power and the argument was simple: because people are not ridiculous and know how to deal with disease in consultation with medical professionals, this state power should not be deployed. At the time, people said we were being alarmist even for saying this. Nothing like this could ever happen in the U.S. because we have a Constitution and courts and a tradition of trusting the people … Continue reading >>>
Coronavirus fear and panic
from Zero Hedge
A secretly recorded meeting between the editors-in-chief of The Lancet and the New England Journal of Medicine reveals both men bemoaning the “criminal” influence big pharma has on scientific research.
The leaked 2020 Chatham House closed-door discussion between the two editors-in-chief — whose publications have both retracted papers favourable to big pharma over fraudulent data — was made public by Philippe Douste-Blazy, France’s former Health Minister and 2017 candidate for WHO Director.
“Now we are not going to be able to, basically, if this continues, publish any more clinical research data because the pharmaceutical companies are so financially powerful today, and are able to use such methodologies, as to have us accept papers which are apparently methodologically perfect, but which, in reality, manage to conclude what they want them to conclude,” said Lancet EIC Richard Horton.
According to Douste-Blazy, the editors-in-chief described the influence big pharma wields over publications as “criminal.”
The scientist whose mathematical models of how the coronavirus would spread in the UK, and whose wildly exaggerated estimates of how many deaths might result from the epidemic, reportedly led to the decision to implement a countrywide lockdown and trash the economy, has been criticised in the past for flawed research.
That scientist is Professor Neil Ferguson, of the MRC Centre for Global Infectious Disease Analysis at Imperial College in London, who authored and published a research paper predicting that the UK was likely to see 250,000 premature deaths during a coronavirus epidemic unless measures to effectively shut down the country were taken. It is this research which convinced Prime Minister Boris Johnson, his cabinet and advisors to introduce the lockdown.
Neil Ferguson: would you buy a used mathematical model from this man? (Picture: Daily Telegraph)
It is, however, always unsafe to accept the word of one scientist or one research project, and it seems Prof. Ferguson is such an incorrigible publicity junkie that he has a track record for making exaggerated and sensationalised claims about the probable outcomes of various crises which is longer than the Tour de France course. It is now being discussed publicly that Ferguson has a long-established reputation for making predictions based on allegedly faulty assumptions and the results of mathematical models, predictions which have nonetheless shaped government strategies and impacted the UK economy. We have to cease this deification of scientists now. They are not impartial and objective seekers after truth, but every bit as self-interested as the rest of us. And when we look at Ferguson’s career and the disastrous policy decisions his mathematical models have led to, the best we can say is “He’s not The Messiah, he’s a very naughty boy” (h/t Monty Python’s Life of Brian.)
The 2001 model used by Professor Ferguson and his team at Imperial College London concluded that the culling of animals should include not only those found to be infected with the virus but also those on adjacent farms, even if there was no physical evidence of infection.
“Extensive culling is sadly the only option for controlling the current British epidemic, and it is essential that the control measures now in place be maintained as case numbers decline to ensure eradication,” said their report, which was presented to government but published only after the cull began.
This mass slaughter – technically known as contiguous culling – triggered disgust in the British public as news video night after night showed the corpses of healthy animals being stacked, soaked with fuel oil and burned, and also prompted analyses of the methodology which led to such an appalling and, as it turned out, unjustified conclusion.
An analysis of Ferguson’s research published in the 2011 paper, Destructive Tension: mathematics versus experience – the progress and control of the 2001 foot and mouth epidemic in Great Britain, found that the government had ordered the destruction of millions of animals on evidence from “severely flawed” modelling.
According to one of its authors – the former head of the Pirbright Laboratory at the Institute for Animal Health, Dr Alex Donaldson – Ferguson’s models made a “serious error” by “ignoring the species composition of farms” (a fairly basic piece of information; it must require a special kind of stupidity to be unable to distinguish between cattle, sheep and pigs,) and the fact that the disease spread more easily between some species than others.
The report stated: “The mathematical models were, at best, crude estimations that could not differentiate risk between farms and, at worst, inaccurate representations of the epidemiology of FMD.”
It also described a febrile atmosphere – reminiscent of the fear and panic whipped up by attention-seeking “experts,” celebrities and the mainstream media in recent weeks – and suggested that this hysteria allowed mathematical modellers to shape government policy.
“The general impatience that met the wait for the full extent of infections to become apparent, accompanied by an ever increasing number of outbreaks and piles of carcasses awaiting disposal, was perceived as a lack of success of the traditional control measures and provided the opportunity for self-styled ‘experts’, including some veterinarians, biologists and mathematicians, to publicise unproven novel options,” the researchers said.
The lead scientist behind the disputed advice that led to Tony Blair’s government ordering the mass culling of farm livestock during the 2001 epidemic of foot and mouth disease, a crisis which cost the country billions of pounds, was none other than Ferguson, who based his conclusions on the output from – you guessed it – mathematical models. It is absolutely unacceptable that this man’s advice is still being allowed to influence government.
And before that it was he who predicted that up to 150,000 people could die from bovine spongiform encephalopathy (BSE, or ‘mad cow disease’) and its equivalent in sheep if it made the leap to humans. The BSE panic is long forgotten but BSE is still around and still no cure has been developed, yet to date there have been fewer than 200 deaths caused by the human form of BSE and none resulting from sheep to human transmission.
Ferguson’s foot and mouth disease research has attracted strong criticism in scientific journals and therefore cannot be said to have passed the acid test of scientific research, peer review. It has also been the subject of critical academic papers which identified allegedly unsupportable assumptions being made by Ferguson in creating the algorithms and defining the data for his mathematical modelling.
When challenged, he robustly defended his work, saying that he had worked with limited data and limited time so the models weren’t 100 per cent right – but that the conclusions they reached were valid. But as every old computer pro like myself knows, conclusions based on incomplete data may seem valid in the circumstances but are meaningless. Mathematical models can only be relied on if they are fed all the relevant data; if guesses are made to fill in the gaps then the law of GIGO kicks in: “Garbage In, Garbage Out”.
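GIGO is easy to demonstrate with a toy projection. The sketch below is deliberately crude and is emphatically not the Imperial College model; the growth rates are arbitrary guesses, and the point is only how violently any forecast swings when one unmeasured input has to be guessed:

```python
# Toy illustration of GIGO: a naive compound-growth projection of case numbers.
# This is NOT Ferguson's model; the growth rates are invented guesses.

def project_cases(initial_cases, daily_growth_rate, days):
    """Crude exponential projection: cases grow by a fixed rate each day."""
    return initial_cases * (1 + daily_growth_rate) ** days

initial = 1000
for guessed_rate in (0.10, 0.15, 0.20):  # three plausible-looking guesses
    forecast = project_cases(initial, guessed_rate, days=60)
    print(f"growth {guessed_rate:.0%}/day -> {forecast:,.0f} cases after 60 days")
```

Guessing a 10 per cent daily growth rate gives a 60-day forecast of roughly three hundred thousand cases; guessing 20 per cent gives over fifty million. Same formula, same starting point, wildly different "inevitable" disasters.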
Professor Michael Thrusfield, of the veterinary epidemiology faculty at Edinburgh University, who co-authored both of the critical reports, said the papers were intended to serve as a “cautionary tale” about the dangers of using mathematical models to predict the spread of disease when there are unknown factors that can probably never be known.
He spoke of experiencing a sense of “déjà vu” when he read Mr Ferguson’s Imperial College paper on coronavirus, which was published earlier this month.
That paper – Impact of non-pharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand – warned that if no action were taken to control the coronavirus, around 510,000 people in Britain would lose their lives.
Naive belief in the infallibility of mathematics has not only led to disastrous responses to outbreaks of disease, of course, which is why many people of a naturally sceptical mindset have questioned the way modern academics conflate science and mathematics. Science and mathematics are not the same thing; in fact mathematics is not even a science, although we were all taught at school that it is. Mathematics is, in the truest sense of the word, an art: that is, a contraction of artifice, which is a statement guaranteed to have some maths and science fanboys screaming in outrage. The true meaning of artifice, however, is not something false or dodgy, but something created by humans, something not of nature. And no matter what fanboys (and girls,) might try to tell you, nature does not do equations.
As I have said many times, computers are not infallible; they are only as good as the people who program them. And there is no such thing as Artificial Intelligence, as Professor Ferguson’s wild adventures in mathematical modelling seem to show very clearly.
The Black-Scholes equation was the mathematical justification for the irresponsible trading in financial markets that plunged the world’s banks into meltdown a few years ago. The brainchild of economists Fischer Black and Myron Scholes, the equation provided a rational way (they believed,) to price a financial contract when it still had time to run. It was like buying or selling a bet on a horse, halfway through the race. It opened up a new world of ever more complex investments, blossoming into a gigantic global industry. But when the sub-prime mortgage market turned sour, the darling of the financial markets became the Black Hole equation, sucking money out of the universe in an unending stream. It was the Black-Scholes equation that opened up the world of derivatives.
The equation itself wasn’t the real problem. It was useful, it was precise, and its limitations were clearly stated. Derivatives could be traded before they matured. The formula was fine if you used it sensibly and abandoned it when market conditions weren’t appropriate. The trouble was its potential for abuse. Unfortunately a fatal flaw was that it allowed derivatives to become commodities that could be traded in their own right. The financial sector called it the Midas Formula and saw it as a recipe for making everything turn to gold. But the markets forgot how the story of King Midas ended.
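For the curious, the closed-form price of a European call under Black-Scholes fits in a dozen lines. This is the standard textbook formula; the sample numbers at the bottom are arbitrary:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility (both annualised)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Arbitrary example: at-the-money call, one year to expiry, 5% rate, 20% volatility.
print(f"call price = {black_scholes_call(100, 100, 1.0, 0.05, 0.2):.2f}")  # 10.45
```

The arithmetic itself is exact; as the text says, the trouble lay in the assumptions fed into it, above all the volatility figure, which nobody actually knows in advance.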
The world’s banks lost hundreds of billions when the sub-prime mortgage bubble burst, leaving those who had bought collateralised debt obligations in the belief that property prices would keep going up forever holding near-worthless paper. In the ensuing panic, taxpayers were forced to pick up the bill, but that was politics, not mathematical economics.
Likewise with Neil Ferguson’s mathematical models of diseases: in order to prevent a disaster which the professor’s mathematical models say is inevitable, politicians are being persuaded into courses of action that really will destroy national economies, on the basis of a largely fictional (if not fantastic,) course of events. We must return to sanity now. Far more people are likely to die in a global recession than are ever going to be killed by coronavirus.
Hopes of a vaccine for COVID-19 becoming available in the near future rose when scientists at the Chinese Academy of Medical Sciences found that monkeys infected with the coronavirus variant developed immunity to the disease after recovering from it.
Unfortunately those hopes were quickly dashed when it emerged that, after completing their tests, the researchers ate the four monkeys for lunch.
A young rhesus monkey wonders why that Chinese research scientist is wearing a chef’s hat and sharpening a butcher’s cleaver (picture: change.org)
A psychological study in the USA has concluded that Caucasians are expressing declining support for diversity.
In their study, which even a cursory examination will reveal is deeply flawed, the team of UCLA psychologists conclude that white Americans have come to view diversity and multiculturalism more negatively as the U.S. moves toward becoming a minority-majority nation.
The researchers split a sample of 98 white Americans (half male, half female, with an average age of 37, drawn from a range of regions, socio-economic classes and religious backgrounds) randomly into two groups. One group was told that whites will no longer be the majority in the U.S. by 2050; in fact, this is likely to be true as soon as 2043, according to some projections. The second group was told that whites would retain their majority status in the U.S. through at least 2050. All participants were then asked a series of questions about their views on diversity.
“Whites feel lukewarm about diversity when they are told that they are about to lose their majority status in the United States for the first time,” said Yuen Huo, UCLA professor of psychology and the study’s instigator.
Using a seven-point scale—where 1 meant “strongly disagree” and 7 meant “strongly agree”—subjects were asked how much they agreed or disagreed with statements like “One of the goals of our country should be to teach people from different racial, ethnic and cultural backgrounds how to live and work together” and “Americans should understand that differences in backgrounds and experiences can lead to different values and ways of thinking.” Those who believed whites would continue to be the majority gave an average response of 5.67, while those who believed that whites would no longer be the majority gave an average response of just 5.15.
“We see a significant reduction in the endorsement of diversity when white Americans are exposed to current projections of future demographics,” said Felix Danbold, a UCLA psychology doctoral student and the paper’s lead author. “Most Americans view diversity in positive terms, but many white Americans who see the actual demographic projections, and the loss of their majority status, end up being less enthusiastic about it.”
Those in the study who identified themselves as Republicans gave average responses of 4.5, compared with 5.8 for Democrats and 5.7 for independents. Thirty-six percent of the participants were Democrats, 21 percent were Republicans and 31 percent were independents.
Support for diversity was also higher among women, with an average response of 5.7; men’s average response was 5.1.
The main problem with this study is that the sample size is far too small to be meaningful: 98 people from a population of 330 million is nowhere near a thousandth of one per cent. How can such a small number be considered representative?
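As a rough check on that intuition, the uncertainty in the averages quoted above can be estimated with a standard back-of-envelope formula. The standard deviation of about 1.4 (typical for a seven-point scale) and the even split of 49 per group are my assumptions; the published summary states neither:

```python
from math import sqrt

def ci_halfwidth_diff(sd, n1, n2, z=1.96):
    """Approximate 95% confidence half-width for the difference between
    two group means, assuming a common standard deviation sd."""
    return z * sd * sqrt(1.0 / n1 + 1.0 / n2)

# Assumed values: sd of ~1.4 on the 7-point scale, 49 subjects in each group.
halfwidth = ci_halfwidth_diff(sd=1.4, n1=49, n2=49)
observed_difference = 5.67 - 5.15  # the two group means reported above
print(f"observed difference: {observed_difference:.2f}")
print(f"95% margin of error: +/- {halfwidth:.2f}")
```

Under those assumptions the margin of error comes out at about ±0.55, larger than the 0.52-point gap the study reports, which is exactly why small samples invite scepticism about headline conclusions.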
Secondly there is the question of how we define diversity for such a study.
The liberal/progressive movement is so blinkered by its self-righteousness that it has lost sight of what constitutes diversity. They scream “DON’T LABEL US,” then proceed to label themselves with their identity politics, talking about gay rights, trans rights, black rights, Islamophobia, homophobia and such.
I have some gay friends who, if you met them casually, you would never think are gay, yet if lefties find out they will tell my friends, “You should be proud to be gay.”
“Why?” my friends would ask. It’s their business alone; all their friends know; it’s nothing to do with anybody else.
On the other hand I had a friend, Charlie (we lost touch some years ago, no falling-out, life just took us in different directions,) whom anybody would reasonably have assumed, from his body language, speech mannerisms and general behaviour, to be gay. He wasn’t; he didn’t question his maleness, he was just who he was (and was married to a lovely woman.)
One of the care providers who visited my late wife in her final illness, A…. is a Muslim. Hijab? Baggy clothes that hide her figure? No way, short skirts, skin tight leggings, revealing tops, false eyelashes and her black hair beautifully styled, that’s the person the world sees.
This is real diversity. I find it ironic that for the left wing activists who scream about equality, diversity is only skin deep and beneath the colour of our skin these social justice activists expect us all to conform to the stereotypes defined by the politically correct left. In my view it is the constant lecturing and haranguing from the far left about how we must accept this, do that, tolerate the other that is turning both Europeans and caucasian Americans against this manufactured diversity that is being imposed on us by the Politically Correct Thought Police.
Dr. Robert Epstein, senior research psychologist at the American Institute for Behavioral Research and Technology and a well-known critic of Google’s use of psychological techniques to manipulate users’ decision-making by heavily censoring the information search users are fed, appeared on SiriusXM’s Breitbart News Daily with host Alex Marlow to discuss Google’s latest tactics in election manipulation ahead of the US 2020 presidential election, and how voters and political campaign managers can combat them.
Robert Epstein – warning us about Google’s evil ambitions
Epstein and Breitbart News editor-in-chief Marlow discussed the current state of Google’s business and political activities and how the company could use its technology to influence voters.
Host Alex Marlow examined Epstein’s research saying: “I think you put out some pretty hard data on how many votes you think were moved in the 2016 election and I think you estimated it was over two million or so, is that not the case?”
Epstein responded: “Well it was at least 2.6 million and it could have been as many as 10.4 million depending on how aggressive Google was in using the various tools they have available to them to shift votes. I can’t pin it down exactly but I know it’s in that range.”
Discussing the need for a system capable of analysing Google search results and suggestions to detect political and commercial bias, Epstein stated: “We need big monitoring systems in place, I’m so far the only person that’s created monitoring systems, I did one in 2016 and one in 2018. I’m trying now to raise funds to build a very big monitoring system for 2020 and to monitor a lot more than Google search results, to monitor newsfeeds, answers that people are getting from their personal assistants.”
Epstein explained why monitoring search results and auto-suggest terms is so important when tracking election interference, stating: “If you don’t monitor, you can’t go back in time and figure out what these companies were showing people because what they’re showing people is ephemeral. That’s the term that Google’s own employees use internally, they’re showing ephemeral experiences, those really short-lived experiences that kind of appear before your eyes and then disappear, like search results for example.”
Google have openly acknowledged that their algorithms are set up to skew search results against content or sites favouring conservative or libertarian politics, while raising the visibility of liberal or progressive supporting content.
Epstein continued: “They’re using ephemeral experiences to manipulate people on a massive scale, people don’t know they’re being manipulated, and there’s no record kept of those experiences, they’re just generated for you on the fly and then they disappear.”
To us the only question here is why people are still using Google as a search engine. Any pretence of neutrality in ranking search results was abandoned long ago; now results are ranked according to whether they serve Google’s political or financial interests.
Report reveals Google’s manipulation of search results to influence outcomes.
After years of being called ‘conspiracy theorists’, the wise people who noticed how Google was manipulating search result listings to serve the corporation’s business and political ambitions have been proved right …
A team of Finnish climate researchers led by J. Kauppinen and P. Malmi of the Department of Physics and Astronomy at the University of Turku have published results of a project which found that the likely human contribution to a rise of 0.1°C in global temperatures over the past century is just 0.01°C. This is in direct contradiction of the catastrophic global warming narrative built by research-grant-fishing Warmageddonist scientists and the bureaucrats of the UN Intergovernmental Panel on Climate Change (IPCC). The UN has of course been making political use of Warmageddonist doom prophecies to advance its Agenda 21 and Agenda 2030, both of which look like plans for global governance when subjected to critical scrutiny.
Kauppinen and Malmi write in their research analysis paper, dated June 29, 2019, that the GCM models used in IPCC report AR5 fail to calculate the influence of changes in low cloud cover on the global temperature. They claim this is why the models used to make the case for catastrophic warming give a minimal natural temperature change, assigning a substantial part of the observed temperature change to the contribution of greenhouse gases.
This is the reason why IPCC has to use a considerable adjustment of data to compensate for the too small natural component. Further, scientists drafting reports for the IPCC have to leave out the strong negative feedback due to the clouds. Also, this paper proves that the changes in the low cloud cover fraction practically control the global temperature.
Kauppinen and Malmi explain:
“The climate sensitivity has an extremely large uncertainty in the scientific literature. The smallest values estimated are very close to zero while the highest ones are even 9 degrees Celsius for a doubling of CO2. The majority of the papers are using theoretical general circulation models (GCM) for the estimation. These models give very big sensitivities with a very large uncertainty range. Typically sensitivity values are between 2-5 degrees.
IPCC uses these papers to estimate global temperature anomalies and climate sensitivity.
However, there are a lot of papers, where sensitivities lower than one degree are estimated without using GCM.
The basic problem is still missing experimental evidence of the climate sensitivity.
Low cloud cover controls practically the global temperature. It turns out that the changes in the relative humidity and in the low cloud cover depend on each other. So, instead of low cloud cover, we can use the changes in the relative humidity in order to derive the natural temperature anomaly. According to the observations, a 1% increase of the relative humidity decreases the temperature by 0.15°C.
The IPCC climate sensitivity is about one order of magnitude too high because a strong negative feedback of the clouds is missing in climate models. If we pay attention to the fact that only a small part of the increased CO2 concentration is anthropogenic, we have to recognize that the anthropogenic climate change does not exist in practice. The major part of the extra CO2 is emitted from oceans, according to Henry’s law. The low clouds practically control the global average temperature.
Over the last hundred years the temperature has increased by about 0.1°C because of CO2, therefore:
The human contribution was about 0.01°C.”
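Their headline arithmetic is simple enough to reproduce. The figures below are Kauppinen and Malmi’s own claims as quoted above, not established science, and the anthropogenic fraction of 0.1 is inferred from their 0.01/0.1 ratio rather than stated explicitly:

```python
# Back-of-envelope reproduction of Kauppinen & Malmi's claimed figures.
# All numbers are their claims, not mainstream climate science.

DEG_PER_RH_PERCENT = -0.15  # their claimed cooling per 1% rise in relative humidity

def natural_anomaly(delta_rh_percent):
    """Their claimed 'natural' temperature anomaly from a humidity change."""
    return DEG_PER_RH_PERCENT * delta_rh_percent

co2_warming_century = 0.1       # deg C they attribute to CO2 over the last century
anthropogenic_fraction = 0.1    # share of the CO2 rise they treat as man-made

human_contribution = co2_warming_century * anthropogenic_fraction
print(f"claimed human contribution: {human_contribution:.2f} deg C")  # 0.01
```

Whether those inputs are right is another matter entirely; the multiplication is the easy part.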
The J. Kauppinen and P. Malmi report is the latest of many reports from research projects which provide evidence to challenge the scare tactics pushed by the purveyors of the climate change hypothesis and politicians seeking to use the climate change scare to erode our rights and liberties.
The facts show that the climate models have been proven wrong at every step; the Earth’s climate is much more complicated than the algorithms that can be programmed into a computer application can deal with, simply because there are so many unquantifiable effects acting on the climate. The CAGWARTs (Carbon-driven Anthropogenic Global Warming Alternative Reality Trolls) still haven’t raised their heads from their computer screens long enough to work out why the satellite temperature data says Earth hasn’t warmed in twenty years, why there is still snow in winter and cool spells in summer, why Polar Bear numbers are increasing, and where the 50 million people who were due to be displaced by rising sea levels by 2015 have disappeared to (clue: they’re not hiding under Harry Potter’s cloak of invisibility.)
Over 5,000 research scientists working in German universities and other higher education institutes have published their research findings in journals run by quasi-scientific publishers, according to a media report released on Thursday.
When researchers publish their results in a scientific journal, it is anticipated that their research theory, scope, assumptions, exclusions, method and data have been subjected to rigorous scrutiny by other scientists in the field in a process known as peer review. Though in recent years the peer review process has been discredited because of the operation of an old boy network in academic research, the system, if properly executed, acts as a form of quality control, ensuring that studies are scientifically sound before being released to the public.
Quasi-scientific publishers, however, carry out little to no review of the articles (a system known as “pal review”: you get your pal to say nice things about your research, and in return you say nice things about the results of his next project) and often publish the articles soon after receiving them, according to research carried out by German public broadcasters NDR and WDR as well as German news magazine Süddeutsche Zeitung Magazin.
The publishers approach scientists and companies around the world, encouraging them to publish their work in one of their journals. The researchers then pay to have their article or study published in one of these journals where it appears within a few days.
The report found that some 400,000 researchers worldwide have used these scientifically dubious journals — knowingly or inadvertently — to publish their work.
A climate change researcher has echoed the claim, made by many well-informed but not academically qualified commentators, that climate scientists are confusing their role as impartial observers with green activism, after his paper challenging predictions about the speed of global warming was rejected because it was seen as “less than helpful”.
Professor Lennart Bengtsson, a former director of the Max Planck Institute for Meteorology in Hamburg, says recent pressure from fellow academics, resembling a medieval witch hunt, forced him to resign from his post on a climate sceptic think-tank.
Bengtsson, a research fellow at the University of Reading, claims a paper he co-authored was deliberately suppressed from publication in the scientific research journal Environmental Research Letters by scientists who peer-reviewed the work, because of an intolerance of conclusions that dissent from the United Nations Intergovernmental Panel on Climate Change view that catastrophic climate change can only be averted by shovelling shitloads of money into research projects that will confirm more research is needed, thus enabling useless research scientists to keep their piggy snouts in the trough of taxpayers’ money and never face the horrific prospect of having to get proper jobs.
“The problem we have now in the scientific community is that some scientists are mixing up their scientific role with that of climate activist,” Bengtsson told The Times newspaper (which is behind a paywall).
Professor Bengtsson claims a scientist advised him that the paper, which challenged findings that global temperature would increase by 4.5°C if greenhouse gases were to double, should not be published in a respected journal because it was “less than helpful”.
Helpful to what? If any of that blether about scientific integrity and the scientific method means anything at all, then the only thing that is helpful is people publishing the findings of their research, not ignoring anything that is a tad inconvenient. Such “science” is similar to the European Union’s neo-Nazi approach to democracy.
The Daily Stirrer and our friends Little Nicky Machiavelli and Boggart Blog are sick of saying “we told you so”, and the number of times we have knocked gaping holes in the “expert knowledge” of scientists and academics simply by stating the absoeffinglutely obvious has become too many to count. So here’s veteran television critic and purveyor of common sense spiced with caustic wit, Clive James, to say it for us:
Because of its many beautiful images of my homeland, I couldn’t help watching the repeat of Australia with Simon Reeve (BBC Two). I thought I was being idle, but suddenly a big idea occurred to me.
It wasn’t my usual idea of ordering my secret squad of ninjas (Agents of CLIVE) to waylay the unacceptably confident Simon and inject him with a suitable narcotic to take the edge off his deplorable enthusiasm. Besides, there must be a lot of viewers, along with his employers, who find his enthusiasm to be the opposite of deplorable: i.e. they think him an interesting bloke, and even take that terrible little moustache to be a sign of keenness.
No, this was a bigger idea: an idea relevant to countless BBC programmes about the environment over the course of the past decade and a half. Let me try to evoke the moment in which the idea occurred. Simon was talking to a man in charge of a South Australian wine factory which covered thousands of acres with its enormous shining silver vats and bins. The factory produces a zillion bottles of wine per year, and uses, in the process, a gazillion gallons of water.
The water is drawn from the Murray-Darling river system. If it occurred to you to wonder what would happen to the output of wine if the input of water were to be restricted, it occurred to Reeve too. So did he ask the professionally knowledgeable bloke in charge of the wine whether he anticipated any restrictions in the water supply?
No, he asked a climate change expert. In Australia, climate change experts are not hard to find. Indeed it is very hard to keep them out of your car: unless you wind the window all the way up, one of them will climb in. This climate change expert was called Tim. Armed with his ability to read the future, Tim predicted that any dry area of the Murray-Darling system was an indication of what’s coming, and that what Australia is experiencing here now would eventually be experienced by hundreds of millions of people around the world.
Simon nodded his moustache sagely but didn’t once ask whether the flourishing wine industry was not part of what Australia is experiencing here now. Nor did he ask whether, in view of climate change, the wine industry was doomed. It was then that the big idea hit me. Why hadn’t he asked the wine grower? It would have been easy to frame the question, perhaps along the lines of: “In view of what is happening to the planet, have you any plans for selling all this colossal acreage of silver metal for scrap?”
It would have been worth asking the wine grower because his whole way of life depends on what he thinks about the water supply, whereas, with Tim, nothing depends on what he thinks about the water supply except his next research grant and his prospects of getting on screen with the visiting TV presenter so that they can shoot off their mouths together. And at that point I started thinking about all those BBC environment and nature programmes from the immediate past that might just turn out, in retrospect, to have been souping up their science with science fiction.
So there you have it. Genuine environmental research challenges the official line that climate change is man-made and that only punitive taxes on everything, leading to the enslavement of entire populations, can avert it. Droughts are caused by climate change (they have nothing to do with draining natural aquifers and other water resources for industrial and domestic water supplies), hurricanes are caused by climate change, and climate change is happening faster than even the most dramatic computer models predicted, in spite of the fact that we have had fewer hurricanes in the past few years than before climate change scaremongering became profitable.
Sea ice is melting faster than ever, as is “proved” by the northern sea ice extent increasing more last year than in any previous year. And Antarctic ice is melting so fast that a scientific research ship sent to measure how much ice had disappeared in the Southern Ocean got stuck in ice which, but for global warming, would not have been there at the height of an Antarctic summer.
Anyone who previously thought science was about studying things carefully to gain knowledge can now be disabused of such a ridiculous notion. Science is about corporate profits and political power. QED.