Read some articles from The Audio Critic Magazine.
Hi-End Flummery
I have been a fan of The Audio Critic for years now. Its writers are crusaders who aim simply to demystify audio technology into measurable facts and numbers that can show real differences between the various pieces of the audio chain. The truth is sometimes harder to accept than the version usually presented by the media.

Another individual who works to accomplish the same thing, with regard to paranormal and pseudoscientific claims, is James Randi. Randi has an international reputation as a magician and escape artist, but today he is best known as the world's most tireless investigator and demystifier of paranormal and pseudoscientific claims. He has pursued "psychic" spoonbenders, exposed the dirty tricks of faith healers, investigated homeopathic water "with a memory," and generally been a thorn in the sides of those who try to pull the wool over the public's eyes in the name of the supernatural.

So I was surprised to find the following blurb in James Randi's weekly Commentary of July 23, 2004. Randi's website is a must-read, and you can visit it by simply clicking on the image below.

Update: Round two started on October 21st! Click here to jump to that section.

This is the section of his Commentary that caught my attention:

HI-END FLUMMERY

Reader 'Andrew' writes, re what he calls "Bad science in Stereophile Magazine": While this is not, strictly speaking, a claim of paranormal powers, you will recognize many familiar elements. Stereophile Magazine and similar publications promote various audio equipment, fancy power cords, gold speaker wire, that sort of thing. They run tests which "prove" that these are better.
James Randi responds with the following (this excerpt is from randi.org):

You may be interested in my own personal experience with a phenomenon similar to the ideomotor effect. As a youngster, I was a hi-fi nut, building all my own equipment. On one occasion I had built what I considered to be the ultimate preamp and decided to give it an A/B test [alternating between the two modes being examined]. It was absolutely amazing: as I switched back and forth between my old and new preamps, I was astounded at the beauty and clarity of the new unit's sound. Imagine my chagrin and embarrassment when I discovered that I had incorrectly wired the A/B switch. It was doing absolutely nothing!
You will need to set some time aside to read these links: The Double-Blind Debate.

Experiments are made valid (i.e., measure what they claim to measure) by good design, not by statistical analysis. The perfect experiment would be completely free of bias, perfectly sensitive to the variable under test, and would require only one trial. However, the experimenter, after conducting such an experiment, might be uncertain that his method was perfect, so he repeats it just to be sure. Then, through statistical analysis, the probability of chance results (Type 1 error) or of insensitivity (Type 2 error) can be determined.

Note that even with one trial the results are valid; 1,000 or 1,000,000 more trials do not increase the validity of the work. However, the reliability increases with more trials, as does confidence that the results are true. The significance of statistics can be seen with an experiment that is just one hair short of perfect. Suppose there is a one-in-a-million chance that the experiment is not perfect: in a million trials, a 'false' will turn up as a 'true' one time. The experiment is conducted and a 'false' occurs. The experimenter is then killed in a freak accident before he can conduct any more trials. Here we have a valid experiment (1/1,000,000 probability of Type 1 error) with untrue results. Statistical verification through repetition is thus really necessary, a prerequisite for trusting the results, but it is not the cause of their validity.

Statistics can also verify biased results: a million trials of a biased test are just as invalid as one trial, but more reliable. The moral is that validity can only be determined by examining the test and its inherent characteristics. Leventhal is right in concluding that aggregation of "unfair" results is unfair, but he fails to examine the test itself for fairness. Statistics are just numbers. They are neither fair nor unfair. Numbers just don't care.

Fairness and high sensitivity are just what make the ABX method so appealing: it contains the validity elements that constitute a fair test. Listener and administrator bias are controlled by concealing the identity of the device under test. The listener gets direct, level-controlled access to the device under test, the control device, and X, with multidirectional switching and user-controlled duration. Contrast this with the open evaluation: usually no more than one or two switch trials, no controls over listener or administrator bias or level, often with references that aren't even present during the test, and no recorded numerical results or statistical analysis. Which is the more fair?

How about sensitivity? Les Leventhal makes his entire fairness case around the idea that subtle differences may only be present 60-80% of the time during the tests. When p approaches 0.9 (differences present 90% of the time), the fairness coefficient evens up, and even a 16-trial test meets all criteria for both Type 1 and Type 2 error. Notice that probability of error is not the same as actual error; even a perfect one-trial experiment would have an unacceptably high risk of Type 1 and 2 error.

So what makes for a sensitive listening test? What actual values can we expect for p? A casual survey of any of the underground magazines shows that audiophiles typically find it fairly easy to perceive differences. Leventhal implies that p may be a low value when there is nothing in the audiophile position to support such a notion.
Read any decent "audiophile" review and draw your own conclusion as to the value of p inherent in their position. An examination of the 16-trial tests referenced by Dr. Leventhal reveals conditions indicative of high sensitivity. Clark and Greenhill auditioned the devices under test prior to the test to identify sonic characteristics. The ABX blind tests were performed using their personal reference systems, with familiar program material and at their leisure. I find it difficult to believe that this procedure might have a sensitivity of under 0.9.

A low sensitivity value of, say, 0.6 for p suggests that for every 10 trials only 6 real trials occur; thus one must increase the sample size to add enough real trials to avoid Type 2 error. A low-sensitivity test of 16 trials is only a 10-trial test under these conditions. If the differences are only present on 60% of all the program material available, and if your material is chosen from a random sample, then the sensitivity issue might apply. However, the identification of material where differences are present is imperative for sensitive testing. It also enables us to test for differences that may only be present 10%, or even 1%, of the time: we can make these tests by selecting programs in which differences are present 100% of the time during the test. It seems to me that this is what audiophiles do, and precisely what Clark and Greenhill, Shanefield, Lipshitz and Vanderkooy, et al., do also.

For tests using listener groups it may be difficult to give all listeners completely sensitive programs. However, because the sample is now much larger, only 100 total trials are needed to reduce the risk of Type 2 error to less than 1% with a listener sensitivity of 0.7. Using 10 listeners in a 16-trial test would mean 160 total trials.

I find it interesting that no one has difficulty discovering differences during subjective evaluations. During the open sessions I've participated in, the general sensitivity level of the listeners often seems to be greater than one (p equal to or greater than 1.0). Differences abound. However, sometimes these differences mystically disappear under blind conditions. Why? It seems to me that many of them are a part of the relationship, or interface, between the listener and the gear. The things the listener hears are as much a part of the listener as they are a part of the equipment. Withholding the identity of the equipment breaks the bond with the listener, and the differences disappear.

As an audiophile, it is important to me to know which differences are attributable to the equipment alone; those which are part of the listener interface may not apply to me. The ABX method is the only test I am aware of that makes this important distinction. It is the only one that has both scientific validity and statistical reliability. I don't doubt that listeners with golden ears hear what they hear, but there is scant evidence that others would hear it. While the debate rages on, I will devote my energy to areas where there is no argument about the existence of major differences. Loudspeakers, anyone?—Thomas A. Nousaine, Chicago, IL
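Nousaine's argument is easy to check with a little binomial arithmetic. The sketch below is my own minimal illustration, not anything from the letter itself: it assumes a common 12-correct-out-of-16 pass criterion (the letter doesn't state one) and Leventhal's model of a listener who genuinely hears the difference on a fraction p of trials and guesses 50/50 on the rest. It then computes the Type 1 and Type 2 error figures being argued over, including the 100-trial group test at p = 0.7.

```python
from math import comb

def tail_prob(n: int, k: int, q: float) -> float:
    """P(X >= k) for X ~ Binomial(n, q): chance of k or more correct answers."""
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

def per_trial_success(p: float) -> float:
    """Leventhal's model: the listener truly hears the difference on a
    fraction p of trials and guesses (50/50 chance) on the rest."""
    return p + (1 - p) / 2

n, k = 16, 12  # assumed pass criterion: 12 or more correct out of 16

# Type 1 error: a pure guesser (q = 0.5) passes the test by luck.
type1 = tail_prob(n, k, 0.5)
print(f"Type 1 error (guesser passes): {type1:.4f}")  # ~0.0384

# Type 2 error at the sensitivities the letter debates.
for p in (0.6, 0.9):
    power = tail_prob(n, k, per_trial_success(p))
    print(f"sensitivity p={p}: power={power:.3f}, Type 2 error={1 - power:.3f}")

# Group test: 100 total trials at listener sensitivity p = 0.7.
n2 = 100
# Smallest pass criterion keeping the Type 1 error at or below 5%.
k2 = min(j for j in range(n2 + 1) if tail_prob(n2, j, 0.5) <= 0.05)  # 59
beta = 1 - tail_prob(n2, k2, per_trial_success(0.7))
print(f"100 trials, p=0.7: need >= {k2} correct, Type 2 error = {beta:.2e}")
```

On these assumptions the numbers line up with the letter: a pure guesser passes fewer than 4 times in 100, a p = 0.9 listener fails only about 0.1% of the time, and at 100 trials even a p = 0.7 listener is essentially certain to pass. The one genuinely weak case is p = 0.6, where the Type 2 risk is roughly 20%.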
THEY'RE STILL ON THE RUN

Do you remember the silly claims of Stereophile Magazine that prompted me to offer them a million dollars if they could prove any of the trash they were offering their readers? Well, they're still hiding under the bed, or under that huge rock with Sylvia Browne, to avoid meeting the challenge. Just do a search on the main Swift page for "Stereophile" to refresh your memory on that brouhaha.

Now reader John McKillop sends us to www.stereophile.com/asweseeit/110/index.html to find an article written back in 1987 by J. Gordon Holt, the man who founded Stereophile Magazine in 1962. Holt apparently had the present management beat for brains. The article is titled "L'Affaire Belt," and refers to the ridiculous claims made back then by one Peter Belt, "inventor" of magical devices that improve everything from harmonics to hysterics. I have news: Mr. Belt is still making those silly claims, and is still getting rich by selling garbage to naïve audiophiles.

We must wonder, as reader McKillop does, whether Art Dudley, a willingly flummoxed reviewer for Stereophile, and/or John Atkinson, present editor of the magazine, ever read this discussion by their founder of the hilarious Peter Belt pretensions. Go there and see a thoughtful, well-reasoned article that handles honestly what the present Stereophile management has chosen to ignore: blatant fakery, fraud, and swindling in the audio business.

I'll quote a pertinent section from the 18-year-old article here that should, but won't, seriously embarrass Atkinson and Dudley. Holt recognized reality and wasn't reluctant to share it with his readers. Unfortunately, he sold the magazine in 1982, and the woo-woos immediately took over. Here's the 1987 excerpt:
Nor do we, Mr. Holt, but the suckers still buy the garbage. I am seldom presented with such a succinct, powerful, and to-the-point summary of what we at the JREF battle every day. Our very own Kramer, who handles the claims for the JREF prize, has sterling expertise and experience in the audio field as well; regarding the Stereophile matter, he offers this comment: