Common sense versus reality

From Modern Mechanix:

Improbable gliders

Yes, these things are exactly what they look like. And when the design was tested, no, it didn't work. You can't power an aeroplane with a sail.

"Common sense", whatever that is, says it's impossible to make a sail-powered aeroplane. And common sense is right.

But if your vehicle has a connection of some sort to the ground, or water, it is eminently possible to sail faster than the wind. Tacking sailing ships do this routinely. Common sense doesn't say that's impossible, unless it's the common sense of someone who's never seen a boat race.

But common sense most definitely says that sailing dead downwind, with the wind exactly at your back, cannot be done faster than that wind is blowing. Obviously, whether you're in a boat or in a land yacht (meaning a wheeled vehicle propelled by the wind, not a '71 Impala), when your speed and heading relative to the ground or water are the same as the wind speed and heading relative to the ground or water, there's no more energy to be harvested and you can't go any faster.

In this, common sense is absolutely wrong. A land yacht certainly can sail downwind faster than the wind.

The fastest one to do so thus far is called Blackbird, but there are others:

What all of these yachts have in common is a large propeller instead of a sail, and the prop has a drive connection to the wheels. Common sense says this won't make a blind bit of difference to anything, but it does.
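The trick is that the wheels and the propeller push against two different media moving at two different speeds. A minimal toy force balance makes the point; all the losses below are lumped into one made-up efficiency number and the propeller is treated as an ideal thrust-equals-power-over-airspeed device, so the figures are illustrative only:

```python
# Toy force balance for a downwind-faster-than-the-wind cart.
# Assumptions (mine, not from any real vehicle): a single lumped
# drivetrain-plus-propeller efficiency "eta", and an ideal propeller
# whose thrust is simply power divided by airspeed.

def net_thrust(ground_speed, wind_speed, wheel_drag, eta):
    """Net forward force on the cart, in newtons.

    The wheels extract power at full ground speed; the propeller spends
    that power (times eta) pushing against air the cart is overtaking at
    only (ground_speed - wind_speed). That speed mismatch is the trick.
    """
    airspeed = ground_speed - wind_speed      # headwind the cart feels
    wheel_power = wheel_drag * ground_speed   # power tapped at the wheels
    thrust = eta * wheel_power / airspeed     # ideal-propeller thrust
    return thrust - wheel_drag                # > 0 means it keeps accelerating

# Cart doing twice the wind speed: 10 m/s over the ground in a 5 m/s
# tailwind, 100 N of wheel load, 70% overall efficiency:
print(net_thrust(10.0, 5.0, 100.0, 0.70))  # positive (0.7*1000/5 - 100 = 40 N)
```

Run the same numbers at higher and higher ground speeds and the net force eventually goes negative, which is why these yachts have a top speed of some multiple of the wind speed rather than accelerating forever.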

There have been some rather nasty arguments between people who know that this cannot be done and people who, as per the old saying, should not be interrupted because they're busy doing it. Enjoy the comments here, for instance, if you'd like to consume rather more than the recommended daily intake of flame-war.

At this stage, anyone who still objects is in the position of a person in 1910 who still insists that aeroplanes are impossible on the grounds that he, personally, hasn't yet seen one flying.

(Although, to be fair, some of the land-yacht runs are alleged to have been made on the dry bed of Ivanpah Lake. I've been there in Fallout: New Vegas and it's clearly not nearly big enough for any such activities.)

Common sense is, in general, immensely useful. It's what tells you that, when you want to cross the street and see a car coming, you shouldn't just step out in front of the car, even if you've never subjected this belief to empirical testing by walking out and seeing what happens.

But common sense, like memory and even perception itself, is unreliable. Common sense only works on things that it's worked on before, and the only way to expand your common sense to deal with new concepts is by making those new concepts fit into some part of the existing framework. Expanding your common-sense framework to accept genuinely new ideas is possible, but it doesn't happen automatically.

If you're trying to figure out whether to step out in front of a type of oncoming car you've never seen before, the common-sense shortcut will work. But if you're trying to understand some new, counter-intuitive physical oddity like these land yachts, common sense will fail you miserably, just as it so often does when people try to think about tax brackets or daylight saving, and on the rather fewer occasions when people try to think about aeroplanes on conveyor belts.

I don't think all of the people who got into shouting matches over the downwind-faster-than-the-wind idea were just emotionally invested in a position they'd not thought about at all, as is so often the case in, say, political arguments. The physics involved is decidedly non-obvious; that, plus Sayre's Law, could account for the whole kerfuffle. And this new development doesn't seem likely to revolutionise land transportation, or anything else.

The next time you're inclined to take a common-sense view of some new idea that actually matters, though, try to bear in mind that common sense also says that the world is flat and the sun goes around it.

They didn't do it, nobody saw them do it, you can't prove anything

Remember when the Sydney Morning Herald published that article saying how awesome the Moletech (or possibly MTECH) Fuel Saver was, when that device was of course actually just another useless magic talisman?

And then the online version of the article was erased, in a rather weird way?

And then the paper favoured me with a ten-word non-explanation about what had happened?

(I'm still waiting for Asher Moses, the author of the Moletech article, to reply to my e-mail about it. It's been almost three years now.)

Well, that's how newspaper Web sites work these days, apparently. 'Cos, a couple of days ago, the Daily Telegraph (another Australian paper) published that paean to the all-round gosh-darned fabulousness of the "Q-Link Mini" self-adhesive radiation-absorbing tiger-repelling antigravity eternal-life cure for the common cold.

And now they've... unpublished it again.

Ze page, she is not found.

It was foolish of me to think that a major publication wouldn't be so shameless as to do this, after I'd already seen a different major publication do it. Next time, I'm keeping a backup of the page. (Google still indexes umpteen traces of the article on other dailytelegraph.com.au pages, but the text of the article itself is lost.)

This is the normal way in which defamatory or otherwise objectionable material is dealt with on the Web. We all know about the Streisand Effect vastly increasing the readership of any material that someone unlikable wants kept secret. But in situations when someone has valid grounds for objection to something on the Web, the outraged party usually just shouts at the offender a bit, whereupon the offender takes down the page full of lies about the sexual habits of Joe Bloggs, or the review that was copied wholesale from someone else's site, or whatever. There often isn't even a legal nastygram involved.

But this is not how it should work for major publishers. Even if the Q-Link Mini piece was never published on paper (I don't read the Telegraph - anybody see it on the actual fishwrap?), the greater public respect that "proper" publishers are meant to have (I'll wait for the laughter to die down...) means that, at the very least, they should do one of those one-square-inch-on-page-19 retraction/apologies. Not just silently delete the Web page.

I wonder, as a commenter on the last post pointed out, whether attention from the Mirror Universe evil twin of Media Watch had anything to do with this unannounced retraction.

[Update: As pointed out in the comments, Media Watch has covered the story now as well!]

As that Crikey piece points out at the end and as this Crikey piece explains in detail, it turns out that Stephen Fenech's footballer brother Mario is paid to promote Q-Link products. Which, to be fair, Mario probably sincerely believes are effective. This continues the great tradition of incisive critical thinking we've come to expect from sports stars.

(The second Crikey article also links to this page, where someone wades through the alleged scientific support for Q-Link claims, so you don't have to.)

Entertainingly, a search for the names of the two brothers currently turns up rather a lot of people talking about this Q-Link nonsense. You could probably piece the whole article back together from the sections of it quoted on blogs and Twitter.

While I waited for an apology from Stephen Fenech and/or the Daily Telegraph (or Queen Beatrix of the Netherlands, for that matter, because that seems about as likely), I was wondering what the heck Stephen was thinking when he wrote that piece. Did he, I wondered, imagine that the preposterousness of the product would distract people from the giant conflict of interest? Perhaps Mario's the smart one in that family?

But no, that wasn't it. Stephen actually thought he'd get away with this because he's done it twice before.

Here and here, courtesy of the Australian Q-Link site's "In The Media" page, are Mr Fenech's two previous proud declarations of belief in the incredible powers of Sympathetic Resonance Technology. Both published in the Telegraph.

How often do you have to do this to be eligible for a Lifetime Achievement Bent Spoon Award?

Self-adhesive super-science!

A round of applause, gentle readers, for Stephen Fenech, "Technology Writer" for the Daily Telegraph here in Australia, for his unflinchingly courageous presentation of the "Q-Link Mini".

The Mini is a tiny self-adhesive object which, Mr Fenech assures us, is "powerful enough to shield us from the potentially harmful electromagnetic radiation generated by mobile phones and other electronic devices". (Q-Link themselves delightfully refer to the Mini as a "Wellness Button".)

Not for Mr Fenech the mealy-mouthed objections of hide-bound so-called "scientists", who've observed that there's no good reason to suppose that low-level exposure to non-ionising electromagnetic radiation has any deleterious effects, and that there's also no good reason to suppose that there is even a theoretical basis for low-energy EMR to harm us, and that if you block the radiation coming out of a mobile phone, the phone won't work any more.

Mr Fenech is similarly wisely unconcerned that Q-Link's most famous product, the "SRT-2 Pendant", contains a copper coil that isn't connected to anything, and a surface-mount zero-ohm resistor, which is also not connected to anything.

I'm sure Mr Fenech disregards doubts raised by this discovery because, of course, Q-Link's products are unconstrained by the foolish fantasies of orthodox "science", which has somehow come by the idiotic idea that the existence of microwave ovens, GPS satellites and personal computers might indicate a more accurate understanding of the principles by which the universe operates than that possessed by the manufacturers of mystic talismans supported by testimonial evidence, uncontrolled user tests and the sorts of studies that cause spikes in the blood pressure of "scientists" who work so hard to get their own papers published because, of course, their papers are mere tissues of lies that never mention "biomeridians" or "Applied Kinesiology"...

...which is here discussed in a way clearly calculated to underhandedly attack Q-Link's products!

If you buy something that's meant to operate by "Sympathetic Resonance Technology™" or "non-Hertzian frequencies", you should of course take it back for a refund if it turns out not to contain seemingly-meaningless components that aren't connected to anything. Those components are where the magic happens, people!

Now, I know that some of you are the sort of raving "science"-worshippers that won't take Mr Fenech's word by itself as proof that the Q-Link Mini is worth $US24.95 - or even $AU48, which for some reason is what it costs here.

Rest assured, all you Moon-landing conspirators and Nazi doctors, that Mr Fenech has diligently secured supportive quotes from the entirely unbiased CEO of Q-Link Australia, and also a naturopath called Daniel Taylor, who appears to be a practitioner of the "Dorn Method", which regrettably does not seem to have anything to do with being knocked out to demonstrate how dangerous the latest threat to the Enterprise D is.

I don't believe a study's yet been done to determine what happens if you use one of those antenna-enhancing stickers at the same time as a Q-Link Mini. Be warned that adding a battery-enhancing sticker and a Guardian Angel battery may result in headache, irritable bowels or time travel.

Psychoacoustics again, again, and again

Today's addition to my ongoing Psychoacoustics Archive comes courtesy of Ben Goldacre.

When listening to the exact same recording, apparently being played by similar-looking but differently-attired female violinists, evaluators consistently thought the music was better when the performers were more "professionally" attired.

This turns out to be an entirely uncontroversial finding. Until I read this Bad Science post, I didn't know that orchestra auditions are now usually blinded (the auditioner plays behind an opaque screen). This is because unblinded auditions have repeatedly been demonstrated to create unfair discrimination, even when frank racism is not involved. Even listeners who apparently honestly don't consciously believe that, for instance, women are worse musicians than men, will often rate female performers lower. And that's before you even start to consider attire and physical attractiveness. (Witness the recent global astonishment when an unattractive woman, apparently against all that science and art has ever told us, turned out to have a decent singing voice.)

The evaluators in this latest study were just music students and professional orchestral musicians, though, not audiophiles. I'm sure audiophiles would have done much better.

From the "any publicity..." file

Imagine my delight at receiving the following:

From: "Clink Admin" <admin@clink.com.au>

To: dan@dansdata.com
Subject: A review?
Date: Sat, 21 Aug 2010 15:21:37 +1000

Hi Dan,

I was wondering if you would do a review of something on my website, address in signature.
Not sure if anything on there is along the lines of stuff you would normally but think there may be a couple of items that fit in.

Would love if you would do a review of my Vortex Analogue Interconnects, these have proven very popular cable.
http://clink.com.au/audio/stereo.htm (bottom of the page)
So would be great to get an independent and unbiased view of these.
Would only ask you to do a cable review though if you feel it is something that has an impact on audio quality.
If your of the school of thought that they have no impact then prefer not to have a review done as it would be very short, probably in the under 10 words variety of short.

Gregory
Cinema Link, Sales
675 Elizabeth St
Waterloo NSW 2017
Ph: (02) 9698 4959
www.clink.com.au

[There was a bit more to this e-mail; I've corresponded with Gregory previously. He asked if I'd like to check out one of his HDMI switches, which I don't actually have the equipment to test but which seem quite handy; by linking to them and other pages of his without so much as a nofollow, I hereby repay Greg for what's going to happen to him in the rest of this post!]

My answer:

Yeeeahhh... you haven't read much of my site, have you :-)?

(Or this blog, for that matter.)

It's the "school of thought" part that I think is the problem. There's no need to separate people into pseudo-religious "schools of thought" over a question that can be settled by scientific means.

We know, with the same certainty that we know that the GPS system and personal computers work and for many of the same reasons, that none of the conventionally-measurable electrical characteristics of analogue cables have any effect on the sound. Well, except in particularly pathological cases where some truly bizarre cable architecture adds substantial reactance or something, in which case it only makes a system sound better if there was something wrong with the system in the first place. Like, your speakers have 14 drivers wired in parallel and thus have far too little impedance for your amp to happily drive, so hooking them up via carbon spark-plug leads or something that add a lot of resistance un-ruins the sound.

(See also those occasional fringe-audiophile products that are actually quantifiably bad, like this amplifier, plus a veritable cavalcade of dreadful valve amplifiers. All of which have users who insist that they sound GREAT.)

[Oh - in case you're wondering, yes, Cinema Link have fancy digital cables, too...]

The analogue-cables-sound-different response to the electrical-engineering argument is to say that DC-to-daylight frequency and phase analysis just doesn't measure some special something that they know when they hear it, science doesn't know everything, et cetera.

But a vanishingly small percentage of the people who say this ever bother to do even a simple single-blind test to see if they, themselves, can actually hear any difference between their special cables and lamp cord. Such tests really are not difficult to do at all - all you need is a trustworthy friend to flip coins, swap cables and make notes, some very elementary experimental design, and a spare afternoon - but they're amazingly unpopular. Un-blinded tests remain immensely popular, but it's trivially demonstrable that those don't work.
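Scoring such a test is just coin-flip statistics, too. A quick sketch; the trial counts below are invented, but the binomial arithmetic isn't:

```python
from math import comb

def chance_probability(correct, trials):
    """Probability of getting at least `correct` answers right out of
    `trials` two-choice blind trials by pure coin-flip guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Twenty "which cable is playing?" trials.  12/20 is nothing special;
# 17/20 would be very hard to explain as guessing.
print(chance_probability(12, 20))   # ~0.25: entirely consistent with guessing
print(chance_probability(17, 20))   # ~0.0013: you can probably hear something
```

That's the whole statistical apparatus a spare-afternoon cable test needs.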

This is my favourite recent example, but there are countless others, covering the entire breadth of live and recorded sound. Vision and hearing are subject to an immense amount of processing by the brain before consciousness gets to perceive them.

(Another favourite of mine: Famous concert violinists are often certain that they can tell the difference between a priceless antique violin - especially if it's their Stradivarius or whatever - and a high-quality modern instrument. But when you do a blinded test, the results, once again, drop to chance levels! They can probably pick the Strad blindfolded if they're actually holding it in their hands, but that's all.)

Some audiophiles go so far as to say that no matter how perfect the experiment design, with no possibly-sound-colouring ABX switchboxes or skull-resonance-changing blindfolds involved, these sorts of differences just can't be detected by science, in the same way that God will never permit Himself to be detected by scientific investigation. Exactly how these people figured out that the new cables sounded better is, in these cases, something of a mystery.

(The people who insist that cables need "burn-in time" have a particularly neat way out of blinded tests; they can just assert that the... phlogiston, or whatever... leaks out of burned-in cables when you disconnect them. But I'd be willing to bet quite a lot of money that swapping out their expensive burned-in wires for hidden $2 interconnects and bell-wire speaker cables would pass entirely unnoticed.)

I'm inclined to go easy on people who buy fancy cables and reckon they sound good. We all fool ourselves frequently, which is why science is so important, but a fooling of oneself that leads to essentially harmless happiness is not a major crime.

But I really must insist that people who're in the business of making and selling fancy cables have no right to make any claims about the "sound" of their products, if they haven't at least hired a few first-year electrical-engineering students to spend a day doing an independent test.

If, when blinded tests were done, they at least reasonably frequently showed that fancy cables sounded better, then it'd be no big deal to sell such products without doing the tests yourself. But what we instead keep seeing is that in a blinded test people can't tell the difference between Monster Cables and (literal) coat-hanger wire. (Monster products may be overpriced and often sold in a blatantly dishonest way, but surely they ought to beat coat-hangers!)

Given this, I cannot help but consider the basic rationale for products such as your cables as being as unproven as the notion that a chiropractor can cure diabetes, or that all poor people are poor because they do not adequately desire wealth.

It's not the Middle Ages any more. We know where lightning comes from, we have machines that routinely fly hundreds of people thousands of miles in (relative) comfort, and our doctors have figured out that it's a good idea to wash your hands before operating. Every day, people in First World nations are surrounded by proof of the effectiveness of scientific inquiry that's so bright, loud and ubiquitous that we, apparently, have developed the ability to tune it out when it suits us. But that doesn't make it a good idea to do so.

You're not a quack, and I don't think you're a scam artist, either. Your cables aren't outrageously expensive relative to the price of the components and assembly - they might as well be free, when compared with the truly out-there cable vendors. And you don't sell $1000 power cables, either (...do you? Tell me you don't!). But this doesn't mean that sending samples of new cables to your existing customers and using their testimonials in advertising is an acceptable way of proving your claims.

If testimonials were a good way of proving the scientifically dubious, I'd be torn between devoting all my time and money to Transcendental Meditation in order to develop the ability to fly and walk through walls, or devoting just as much time and probably even more money to Scientology in order to develop the ability to control space and time.

At the end of the day, I suppose you do end up with "schools of thought", but the members of those schools are not "people who reckon special cables sound better" and "people who don't" (or "people who reckon Uri Geller has paranormal powers" and "people who don't"; I'm sure you can provide many of your own examples). They're "people who believe this question is amenable to rational investigation" and "people who don't care".

You're allowed to not care. Everyone's entitled to his opinion. But nobody's entitled to be taken seriously.

Gregory replied:

Thanks for taking the time to reply in depth, and for the informative links.

I've taken a little more time this time to read some of the pieces on your site and understand a little more of your thoughts on audio cables.

So I'll take that as no, or at least I'll take it as something that would be detrimental to my business health.

To which I replied:

...and you are thus acknowledging that if you made an attempt to figure out if your fancy cables worked, you'd find that they didn't? :-)

[Greg's, regrettably, not yet found time to reply to that.]

As I said, for hi-fi this really doesn't make a whole lot of difference either way. Even the really wacky Shun Mook or Peter Belt (...or just about anything else that 6moons thinks is fantastic...) sort of hi-fi cultism doesn't really hurt anyone - certainly not by the standards of the usual kind of cult. Somewhere out there some nut has presumably bought speaker wire instead of nutritious food for his children, but such cases must be vanishingly rare.

That doesn't mean that the same patterns observable in truly harmful things like crazy cults and medical quackery aren't valid when you see them in other contexts, though. One I find particularly common, which is very much on show in the audiophile world, is the peculiar and inexplicable situation in which the better you investigate something - eliminating extra variables, reducing experimenter bias, reducing the ability of subjects to fool themselves - the less effect that something turns out to have.

When "lousy test" shows "huge effect" and "better test" shows "medium effect" and "further-improved test" shows "not much effect at all", it may be that the latter two tests were false negatives.

But it usually does actually mean that "perfect test" would show "zero effect".
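You can watch the pattern happen in a toy simulation. The numbers below are entirely invented: the "true" cable difference is set to exactly zero, and the unblinded listener's expectation is modelled as a simple bias term that better experimental controls shrink towards nothing:

```python
import random

def run_trial(n_listeners, bias, noise=1.0, seed=0):
    """Average 'fancy cable minus lamp cord' rating difference when the
    true difference is exactly zero.  `bias` is how far knowing which
    cable is playing nudges each rating; full blinding sets it to 0."""
    rng = random.Random(seed)  # fixed seed, so runs are reproducible
    diffs = [bias + rng.gauss(0, noise) for _ in range(n_listeners)]
    return sum(diffs) / n_listeners

print(run_trial(1000, bias=2.0))   # "lousy test": big apparent effect
print(run_trial(1000, bias=0.5))   # partly controlled: smaller effect
print(run_trial(1000, bias=0.0))   # fully blinded: effect near zero
```

The measured "effect" tracks the bias you failed to remove, not anything the cables are doing; that's the shrinking-effect signature in miniature.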

BANG! Art! BANG! Art!

Lichtenberg figure being made

In which Theo Gray makes some acrylic Lichtenberg figures rather bigger than the ones I can afford.

Lichtenberg figure being made

More detail in these excerpts from his book.

(Via.)

Seeking (Economic) Enlightenment

I think I'm about as susceptible as any other human to having my opinions formed by someone else. All I have to do for this to happen is read someone else's opinion about something, before seeing that something for myself.

So when I found out about Economic Enlightenment in Relation to College-going, Ideology, and Other Variables: A Zogby Survey of Americans, by Daniel B. Klein of George Mason University and Zeljka Buturovic of Zogby International, I decided to read it for myself, properly.

The paper, you see, concludes that politically left-wing people know a lot less about economics than do politically right-wing people.

This has caused something of a stir.

I deliberately avoided reading any other analysis of the paper before I read it myself, and then wrote most of this interminable post. (Had I more time, I would have made this shorter.)

Then I put a bit on the end that links to other discussions of the paper, and summarises the stuff that I missed.

I did all this instead of writing about Lego printers or something because I've been thinking, recently, about scientific papers and their interpretation and reporting. Mass-media science reporting has, I think, never been lousier. If you really pay attention to mass-media science reports, you'll hardly have time to worry about GMOs giving you CJD and UFOs landing at HAARP, because you'll be too busy clearing out your pantry, because whatever stuff cured cancer last week has now been conclusively shown to cause it.

You can usually still get reasonable interpretations of new findings from something like Scientific American, but normal everyday news sources are worse than useless, spraying anti-facts all over the place daily.

If you want to know what some particular piece of research really means, you therefore have to go to the source yourself. This is an important skill for modern humans.

It's tempting to not read the actual paper at all - or just scan the abstract - and then read what some blogger you like said about it. But you really should dig into papers properly, at least occasionally. Now that it's so often possible to have the whole thing in front of you, for free, in a matter of seconds, there's no excuse for just reading what some newspaper journalist mistakenly thinks he read.

I suspected that Economic Enlightenment in Relation to Blah Blah Blah was just a barrow-pushing junk survey, because that's what a lot of political polling is. Complete-garbage surveys are all that various interest groups need to move their barrows along, after all. It only took Sir Humphrey a minute to persuade Bernard that he simultaneously supported and opposed the reintroduction of conscription, so why try harder? Why cover your ideological nakedness with a real fig leaf, when a scrap of paper's cheaper?

Whether or not this particular study was junk, I knew I could easily find some journalist telling me that it was. Or that it wasn't. So I downloaded it myself (PDF), and read it. Feel free to do so yourself, before reading on to have my own opinions stamped on your brain.

The Buturovic/Klein Economic Enlightenment survey has a pretty clear conclusion. To the great delight of the Objectivist playpen in the Wall Street Journal's op-ed pages, this survey found that "the left" in the USA "flunks Econ 101".

The United States has, of course, no actual left wing that any of us foreigners can identify. When US "liberals" agree with policies that were too authoritarian for Richard Nixon, then they're only "liberal" in a relative sense. (That's right - US political blocs are defined by relativism! Or relativity, or something! My god - it'll be social justice next!)

The elephant-in-the-room problem with the Buturovic/Klein paper is that although it was conducted by Zogby International, a respected and above-board organisation, the actual respondents were from an "Online Panel", not a proper random sample. Zogby invited 64,000 people to take part in the survey, and those 64,000 were of course already biased in favour of people who can access the Internet and care to be involved in surveys.

(The Zogby Online Panel appears to be something you can sign up for. Surely you don't have to actively sign up to be surveyable in this way... but if you don't, how can they contact you without spamming? If the Online Panel is actually the same near-meaningless fluff as TV ratings, then the whole project is in dire danger at the outset.)

Anyway, only 4,835 of the people invited to participate responded. That's a 7.6% response rate, which is about par for the course for entertainment-value-only Internet polls. It's a serious, serious problem for any survey that's meant to have some scientific rigour.

You can try to balance things out by weighting responses so that the responders' demographics match those of the whole population; Zogby usually seem to do that with their own Internet surveys. But the authors of this paper didn't do it, and may not actually have been able to do it, if subsets-of-a-subset problems would have left them giving large weight multiples to very, very small slices of the respondent base, giving rise to error bars taller than the whole chart.

Here, though, is one of several points where this paper doesn't follow the standard Crap-Survey script. The authors didn't weight the data, but they make it all available online, so you can see what they were actually working from, and massage it yourself if you like.

(Here's the "survey instrument" in Word DOC format; here's the results in Excel XLS format.)

You may be wondering how the "Economic Enlightenment" survey defines "Economic Enlightenment". And, indeed, how it defines "the left" and "the right".

Well, Economic Enlightenment - "EE" from now on - is meant to be your ability to understand economic reality. Like, I suppose if you can understand that if you don't have much money then it's probably better to rent accommodation than to take out a large zero-deposit mortgage, then that's an economically-enlightened decision.

I think it's uncontroversial that most people in the modern Western world don't have a lot of economic sense. The credit-card companies wouldn't be sitting on such a wonderful green gusher of cash if people-in-general realised that holding your damn horses until you can actually afford something, rather than borrowing at 20%-plus to buy it, will let you own a lot more stuff. People keep borrowing big to buy a brand new car, too; I'd put that decision on my definitely-not-EE list.

(Actually, I think there's a bit of a no-real-left-wing sort of situation in the EE world, too. Look at all of the people in the affluent West who consider it completely normal to be deep, deep in entirely optional debt for your whole adult life. In comparison, anybody with the vaguest semblance of actual money-sense looks like some sort of Oracle of Infallible Wisdom. I dunno what Warren Buffett would count as on this scale; perhaps he'd be a strongly-superhuman Banksian economic Mind.)

The EE survey admits on the first page that their "designation of enlightened answers" may be a "controversial interpretive issue", and that they specifically went out huntin' for "leftist mentalities", without asking questions slanted the other way.

This is another big and significant problem.

Their page-3 example of a survey question, for instance, is "Restrictions on housing development make housing less affordable", with the usual multiple-choice answers from "Strongly Agree" to "Strongly Disagree" and "Not Sure".

The authors use this question as an example of how they "Gauge Economic Enlightenment", because a question apparently has to have at least this definite an "enlightened" answer to be worthy of contributing to an EE score.

But they admit that there are still confounding factors, because different people will have different opinions about what the question's really asking.

What sort of "restrictions", for instance, might we be talking about? Does "affordable" relate to initial purchase price alone, or purchase price plus maintenance and making-good of a shoddily-built house, treatment for the lung disease you got from un-"restricted" asbestos insulation batts shedding fibres into the HVAC ducts, et cetera? What does it "cost" if an electrical fault burns the house down, and you die? What if an un-"restricted" housing industry forms a cartel that builds houses out of damp cardboard and forces poor people to live in them - for a price that's exactly as un-"affordable" as makes the cartel the most money - or live in the park?

The paper doesn't really go into that much detail in its brief discussion of confounding effects, which given its respondents, all living in the troubled-but-not-total-chaos current mainstream US economy, is probably fair enough. There are infinite possible wiggy reasons why someone might mean something strange by their answer to what you thought was a clear question, and if your sample's big enough and random enough (which, once again, is a problem for this paper...) you can iron most of that out.

But I think there's one confounder that should have been mentioned specifically:

Deliberate lies.

Someone who's sympathetic to the current US "radical conservative" movement may personally believe that Sarah Palin is an idiot, but tell a pollster that she's a genius, just to Fight the Good Fight. Similarly, someone who wasn't paying attention during the most recent interminable US Presidential campaigns and so was under the impression that Obama had promised to immediately end both wars and nationalise Halliburton, may tell a pollster that he's 100% happy with the President even though he's actually very disappointed.

(For the same reason, I find it difficult to believe any survey about the sexual activities of teenagers. Religious loonies often seem to get very excited when a survey comes back saying that 90% of 14-year-old boys have had sex a thousand or more times. I don't know whether I'd rather those loonies are so upset because they actually believe the survey, or if they know that it's BS and are feigning belief to advance their own agenda.)

There's definitely some slanted question-selection going on in the EE survey; they admit as much. They had 16 multiple-choice questions in the actual survey, but they chose only eight of those questions to make up the final EE score for respondents. They say they eliminated the questions that were "too vague or too narrowly factual, or because the enlightened answer is too uncertain or arguable" - but I'd say the "narrowly factual" part shouldn't be a problem at all. Ask someone what an "interest rate" or "inflation" is; if they don't know, their EE score drops. An awful lot of people don't seem to understand income-tax brackets; there's another great question for a more factual EE test.

(Perhaps such questions would measure mere economic "literacy", not "enlightenment", though.)

Several of the dropped questions also seem to me to be more likely to get "leftist" answers, since they include, for instance, "Business contracts benefit all parties" and "In the USA, more often than not, rich people were born rich".

But here, again, is evidence that this isn't a pure obviously-fake barrow-pushing trash-poll. The paper tells you they dropped the questions, and why (rightly or not), and the authors also tell you what the dropped questions were. Leaving that last detail out is exactly the sort of thing that trash-pollsters do, because that way you can avoid disclosing that they, for instance, asked 50 questions and published only the ones whose aggregate answers happened to support their thesis.

(The downloadable full results also include responses to all of the dropped questions. I'd do some analysis of that data, if this post wasn't already the size of a holiday novel.)

On page 4 of the paper, there's a list of the eight questions they used, and the answers they deemed "Unenlightened":

1. Restrictions on housing development make housing less affordable.
Unenlightened: Disagree
2. Mandatory licensing of professional services increases the prices of those services.
Unenlightened: Disagree
3. Overall, the standard of living is higher today than it was 30 years ago.
Unenlightened: Disagree
4. Rent control leads to housing shortages.
Unenlightened: Disagree
5. A company with the largest market share is a monopoly.
Unenlightened: Agree
6. Third-world workers working for American companies overseas are being exploited.
Unenlightened: Agree
7. Free trade leads to unemployment.
Unenlightened: Agree
8. Minimum wage laws raise unemployment.
Unenlightened: Disagree

To keep this post under the 50,000-word mark, I leave determination of all possible well-reasoned but "unenlightened" answers to these questions as an exercise for the reader. Questions 1, 2 and 8 seem pretty straightforward, if simplistic, to me.

I also at first thought it was pretty hard to reason your way to the "wrong" answer for question 3, as well - but then I realised how many ways there are to measure "standard of living", besides "number of features in your car" and "number of televisions in your house". Is "standard of living" the same as "mean household income"? If not, how not? Answers on a postcard, please.

And the rest of the questions, not to put too fine a point on it, seem to me to be wide friggin' open. Not least because of the lack of any clear definition of terms.

I know, for instance, that there are people in the USA who seriously advance the idea that no Third-World workers for US companies are being exploited in any way, because apparently being paid a quarter of a living wage toward the large debt you incurred when you started work in the crap-for-fat-Westerners factory isn't exploitation. But if you take the completely crazy position that maybe some people in the Third World are being exploited by US companies, and therefore disagree with assertion 6, you're officially Un-Enlightened.

The very next paragraph of the paper, entertainingly, says that any objection such as this is "tendentious and churlish".

I may now find myself required to challenge the pinguid, sesquipedaliaphiliac diplozoon responsible for this paper to a duel.

Let us now move beyond the lack of imagination of the authors as regards valid objections to their definition of enlightenment, and their questionable... question... selection, and move on to the demographic differences between the people who scored high, and low, in EE (whatever, if anything, the EE score actually measures).

The paper's headline "discovery" is that going to college didn't give respondents a statistically-significant higher EE score.

I imagine that tertiary education specifically involving economics - or just a weekend personal-finance-management seminar, for that matter - would have an effect on the EE score - at least, for the four questions that really do seem to have pretty clear objectively-correct answers. But since most university students wisely avoid the dismal science with the same zeal with which sane people avoid the Continental postmodernists, it's hardly surprising that just passing through a university does not cause one to pick up knowledge of economics by osmosis, without studying it.

(Colleges won't make you study it, either. Later on, the paper points out that of "50 leading universities" surveyed by these people, exactly none had compulsory economics courses.)

It's practically a truism that knowing a great deal about one subject has little to no effect on your knowledge of other subjects. Actually, people who're very knowledgeable about one thing often incorrectly assume that they've got the right end of the stick about some completely different subject. Scam artists love Ph.Ds. (See also, "Engineers' Disease".)

Research that confirms the "obvious" is still valuable, even if it's routinely reported in News Of The Weird stories and derided by politicians who're trying to reduce "wasteful" government spending. (By, of course, taking funding away from anything that they reckon sounds a bit silly, and giving it to the people who've given them a rent-free flat.)

Still, though, EE and college education being uncorrelated doesn't look like a big discovery to me. (If the EE testing method itself is fatally flawed, of course, then no correlation, or lack thereof, means anything.)

What else you got, guys?

Well, there's the left/right thing.

One very simple way of pigeonholing people as "left" or "right" in political ideology is just to ask them, which this survey did. Once again, the lack of a real left wing in the US political dialogue means that a reborn Dwight D. Eisenhower would now be categorised as a State-trampling tax-and-spend socialist enemy-emboldener - but never mind that for now. The survey asked respondents to categorise themselves as "Progressive/very liberal", "Liberal", "Moderate", "Conservative", "Very conservative", "Libertarian", "Not sure" or "Refuse to answer".

And lo, those who admitted that they were infected with the terrifying disease of "progressivism" scored the very worst on the EE scale, with a neat diminishing-wrongness progression as you proceed toward "Very Conservative". And then a score a little better again - though not statistically-significantly so - for the brave and hardy "Libertarians"!

Once again, this was presented according to proper scientific standards, with a full breakdown and confidence interval listed. The error bars are wide enough to, as I said, mean the Very Conservatives might actually have beaten the Libertarians, but the overall order is clear. And the authors mention, again, that questions specifically aimed at "typical conservative or libertarian policy positions" might have changed the results. (Like, I dunno, maybe "Illegal immigrants are a major drain on the American taxpayer.")

But they, again, conclude, "Naaah." (I paraphrase.)
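That overlapping-error-bars caveat, by the way, is easy to picture with a few lines of Python. The means, spreads and group sizes below are made up for illustration, not taken from the paper:

```python
import math

def ci95(mean, sd, n):
    """95% confidence interval for a sample mean."""
    half = 1.96 * sd / math.sqrt(n)
    return (mean - half, mean + half)

# Illustrative numbers only - not the paper's actual figures.
# A small group gets a wide interval, a big group a narrow one.
libertarian = ci95(1.26, 1.3, 87)        # small group, wide error bar
very_conservative = ci95(1.30, 1.3, 900)  # big group, narrow error bar

# The intervals overlap, so this data can't settle which group
# "really" scored better.
overlap = (libertarian[0] < very_conservative[1]
           and very_conservative[0] < libertarian[1])
```

Two group averages can sit in a tidy-looking order while their confidence intervals overlap enough that the order could easily flip in a bigger survey.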

Next we get a bunch of little tables demonstrating that people who voted for Obama have miserable EE (but people who voted for Nader or the Green Party's Cynthia McKinney score even worse - though the error bars are of course really large for these unpopular "wasted vote" candidates).

Who else scored badly? Oh, just black people, Hispanics, citydwellers, Jews and Muslims, union members, and people with no direct or familial connection with the armed forces.

Who else scored well? "Atheist/realist/humanists", people who did not consider themselves to be "a born-again, evangelical, or fundamentalist Christian", and people who go to church "rarely" or "never". I'm not sure what this is supposed to mean, but it's amusing.

Oh, and "Married" people beat every kind of single person, and beat by a wider margin people in a "civil union/domestic partnership". "Asian/Pacific" respondents scored even better than white folk. NASCAR fans scored better than others, too. (Somewhere in America there must be a married Japanese-American NASCAR-loving atheist Republican who has a perfect model of the entire world economy turning and twinkling in his mind's eye.)

And Nader and McKinney voters may have scored miserably, but people who voted for the Libertarian candidate Bob Barr got an average score even better than those wily McCain voters, though again with a big enough error bar that they might not really have scored higher in a bigger survey.

Registered Libertarian voters scored better than people affiliated with different parties, and in response to "Do you consider yourself to be mostly a resident of: your city or town, America, or planet earth", Planet Earthers scored worst, followed by "not sure/refused", then "my town", then "America". (Presence or absence of a subsequent "Fuck Yeah!" was not recorded.)

These aren't quite the results that a modern US "radical conservative" would really want to see, but that's just because religious belief and a good EE score appear to be incompatible (though "Other/no affiliation" for the religion question scored even worse than those silly Muslims!). Apart from that, the results drive straight down the radical-conservative road. In brief, the authors' thesis that conservatives and Libertarian-ish people have higher Economic Enlightenment than members of the Pinko-Green Communist Alliance was solidly supported across the board of their questions.

There were several really nice line-fits, too. I mean, check this out:

Income versus 'economic enlightenment'

The more you make, the more economically enlightened you are! Makes sense, doesn't it, kids?

But wait a minute. Look at that tiny little error bar for those brilliant high-EE "$100K+" respondents. A smaller error bar means a larger sample. Did they really have more respondents making $100,000 or more than any of the other income brackets?

According to the raw data - yes, they did! The breakdown for the 4,835 respondents was:

No answer: 593 (12% of respondents)
Below $25K: 277 (6%)
$25-$35K: 337 (7%)
$35-$50K: 541 (11%)
$50-$75K: 941 (19%)
$75-$100K: 757 (16%)
$100K+: 1389 (29%)

Now, this was total household income, not the personal income of the person answering the survey. But the median and mean household incomes for the USA in 2004 were $44,389 and $60,528, respectively. I doubt that either figure has shot up past $100,000 in the last six years.

When 29% of respondents are making around twice as much as the average income - the number of $100K-plus responses is only marginally smaller than the $35-$50K and $50-$75K respondents put together - serious "skewed sample" alarm bells should start ringing.
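The arithmetic is easy to verify, and the error-bar observation falls out of the same few lines (the standard-error comparison assumes each bracket's scores have roughly similar spread):

```python
import math

# Counts are from the paper's raw data; the rest is plain arithmetic.
counts = {
    "No answer": 593, "Below $25K": 277, "$25-$35K": 337,
    "$35-$50K": 541, "$50-$75K": 941, "$75-$100K": 757, "$100K+": 1389,
}
total = sum(counts.values())           # 4835 respondents
share_100k = counts["$100K+"] / total  # about 29%

# Standard error shrinks as 1/sqrt(n), so the biggest bracket gets
# the smallest error bar - exactly what the chart shows.
relative_se = {k: 1 / math.sqrt(n) for k, n in counts.items()}
```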

The paper does have quite a bit of discussion of the problems with their testing technique, but of course concludes that none of them invalidate the study. Or mean that they should have applied weighting to try to un-skew their strange self-selected sample.

(There's also, once again, the possibility of deliberate deception. Respondents might seek to give their ideology-driven answers to other questions more weight by claiming household income much higher than what they actually make.)

On finishing reading the paper - which, gentle reader, means that the end of this epic saga of a blog post is also in sight - I figured that the main problems with it were this obviously unbalanced self-selected sample, the lack of any weighting to attempt to compensate for the sample bias, and the selection of the questions used to construct the EE score.

Apart from that, I reckoned this was a decent paper. It's rather sad that the best you can say about so many media-touted studies is that they conform to the minimum standards for an academic paper - presenting methodology and results, and not blatantly lying. But still, it's better than nothing.

("Yeah, that car he sold me WAS full of rust, but at least it really was a car, not just a couple of bikes covered with tape.")

Perhaps I'm so easily impressed by any paper that achieves the basic benchmarks for publication in a peer-reviewed journal because I'm so used to examining the rather different evidentiary paperwork of true out-there crackpots. Those guys often insist that their magic potion or antigravity machine has been tested by some prestigious institution or corporation - UCLA, Bristol-Myers Squibb, the US military, and of course poor old NASA. But when you ask who actually did the test, and when, and whether it was published anywhere... well, you may end up with a photocopy of a photocopy of a photocopy of something that might originally have been on university letterhead. Or test results from special secret scientists or car-gizmo testers who always seem to find things that nobody else can. But you'll probably only receive abuse.

Compared with that, this paper is a magnificent solid-gold triumph of the scientific method.

What, I now wondered, do people who do not bear the mental scars of numerous encounters with extremely independent thinkers make of the Buturovic/Klein study?

I returned to the page that, seemingly years ago, alerted me to the study's existence in the first place: This question on Ask MetaFilter.

Commenters there linked to this FiveThirtyEight piece by Nate Silver - who has an economics degree.

Silver has previously written that Zogby's "regular polls" were acceptably accurate in the last US Presidential election. But "Zogby Interactive", the "Internet Panel", has consistently been appallingly inaccurate. Because, yes, you really do get on the Internet Panel by just signing up at the Web site!

Knowing this, I feel I now have no option but to class any actual academic researcher who uses the Zogby Internet Panel, but doesn't weight the results and stretch the error bars accordingly, as being deliberately deceptive. There is no excuse for pretending that the Internet Panel is directly representative of anything but itself, even if you take care to ask an unbiased series of questions, which Buturovic and Klein clearly did not.

In this particular case, Silver once again points out the lousiness of the Zogby Internet Panel, and the questionability of the "Economic Enlightenment" questions. He also mentions that some of the questions do not have a clear answer even according to professional economists, "...as Klein should know, since he's commissioned several surveys of them."

This does further damage to the headline "college education doesn't teach economics" finding; actually, the more you learned at university about economics, the more likely you appear to be to give the "wrong" answer for one of the EE questions. This turns that finding into a tautology.

Nate Silver's conclusion from this is that the study is "junk science". If Silver's post had been the only thing I read about this study then I'd agree with him; having actually read the study, I still agree with him, because what he noticed lines up with what I noticed.

Another MetaFilter commenter pointed out that the questions asked will allow anybody who sticks to mainstream US "conservative" viewpoints to, regardless of their actual level of comprehension of what they're saying, get an excellent EE score.

Commenters also came up with a number of theories about why the paper is the way that it is, for instance perhaps because of a conscious or, just barely possibly, unconscious desire to contribute to the US radical-conservative echo chamber about universities being hotbeds of crazy left-wing brainwashing.

(It's true, you know. By and large, the more education someone's received, the more likely they are to hold "leftist" political views. Clearly, brainwashing is the only possible explanation for this.)

And then there's the issue of mining for correlations. If you measure a lot of things and then shuffle the data around until you find something that correlates with something else, you may have discovered a real relationship. But as a dataset increases in size, the chance of finding a statistically-significant but entirely spurious correlation in there somewhere approaches one. Hunt through the data until you find similar-looking graphs and you may indeed have discovered that G causes R, or that R causes G, or that both R and G have a common cause that you didn't measure. But G and R may also appear connected by a pure fluke.
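A toy simulation shows how reliably noise produces "discoveries". Forty variables of pure random numbers give 780 possible pairings; at the usual p < 0.05 standard, you'd expect noise alone to flag around 39 of them. (The |r| > 0.2 cutoff below is a rough rule of thumb for a sample of 100, not a proper significance test.)

```python
import random
import statistics

random.seed(1)
n, k = 100, 40  # 100 fake "respondents", 40 totally unrelated variables
data = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Count variable pairs that look "significantly" correlated,
# even though every variable here is pure noise.
spurious = sum(1 for i in range(k) for j in range(i + 1, k)
               if abs(pearson_r(data[i], data[j])) > 0.2)
```

Run it and you'll find a few dozen "relationships", every one of them a fluke.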

The Buturovic/Klein poll contains a sort of back-door correlation-mining; the question selection seems to have guaranteed the overall "conservatives smart, liberals dumb" conclusions.

(Another commenter was surprised that there wasn't any Laffer Curve BS in the survey. And yet another commenter cunningly attempted to lengthen this post by mentioning a two-question ideology-versus-science test involving "deadweight loss".)

Someone also said that Zogby "is a bit of a joke among other pollsters". But I find it hard to dislike John Zogby himself:

Honestly, I could have better used the hours I spent poring over this study. You could probably have better used the time you spent reading this page.

But there is, at long last, a point to this beyond just debunking that one fatally-flawed study. It is:

The next time you see a reference to a scientific paper on a subject that interests you, if it's possible to dig up the paper without having to trek to the nearest university library or something, do so, and read it for yourself.

(If you've got a standard worse-than-useless newspaper science article in front of you and you're trying to figure out who the "scientists" are who've allegedly discovered the cure for oh-god-not-again, Google Scholar is a good place to start. Note, however, that the modern mass-media science story is based on press releases from university and corporate PR bodies, who are famous for sending puffed-up announcements about studies that haven't actually quite been published yet. If it ain't been published, you ain't gonna find it in Google Scholar, PubMed or anywhere else.)

You need advanced education to understand some scientific papers. You're probably not going to get a lot from a paper about, for instance, cryptography or particle physics, unless you're already quite knowledgeable in those fields.

But a lot of papers, definitely including many of the psychological, sociological and medical/epidemiological papers that are so popular with the newspapers, can be comprehended with nothing more than a bit of light Wikipedia use and basic knowledge about statistics and probability. That latter knowledge is, of course, useful in all sorts of other situations too.

(You can get a basic tutorial in stats and probability from Wikipedia too, or in a more structured and entertaining form from the classic How to Lie With Statistics, and/or Joel Best's much more recent Damned Lies and Statistics. John Allen Paulos' Innumeracy is also excellent.)

At the very least, it's a salutary mental exercise to understand what a good study's saying, or to figure out what's wrong with a bad one. And it can also tip you off about the reliability of different sources of information about scientific discoveries. Who knows - you may find that your local newspaper has a science reporter who's actually good!

The developed world is entirely built upon a foundation of science, and the basic interchangeable unit of scientific research is not, as one might suppose, the undergraduate lab assistant, but the published paper. To float along on the surface of the world's science and technology without ever looking at the papers from which it is all built is like eating meat daily without taking any interest in what happens in a slaughterhouse.

I've been to a slaughterhouse.

I find reading scientific papers somewhat less unpleasant.

Today's mechanical conundrum

A reader writes:

As soon as I heard about "Steve Durnin's D-Drive, [possibly] the holy grail of infinitely variable transmissions", my BS meter activated and the needle swung to "Possible thermodynamics violation".

But in his favor he's got an actual physical prototype...

...and is attempting to have a metal model made so its input and output power can be tested.

What do you think of the concept, and can you tell how on earth it works? I'm still trying to figure out how this is too different from CVT, other than maybe a wider range.

I'm still wondering if this is somehow impossible, but personally I'm open to the possibility that it's a step similar to CVT, and that the in-article claims are typical science-journalism overestimations.

David

Oh no - it's another New Inventors prize-winner!

Fortunately, though, an infinitely-variable transmission (IVT) is not actually in any way related to perpetual motion. All it is, is a continuously-variable transmission (CVT) that has some way to run its variable "gear ratio" all the way down to infinity-to-one, also known as a "driven neutral".

(This is, by the way, not the same as just running the gear ratio up so much, billions or trillions to one, that the final gear in the train is functionally immobile, and could be embedded in concrete without having any effect on the load of the driving motor for some years. A true "driven neutral" could be driven at a trillion RPM for eleventy frajillion years, and never turn the output at all. A transmission that bottoms out at zillion-to-one gearing would, however, be perfectly usable as a real-world infinitely-variable transmission.)
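The driven-neutral trick falls straight out of the standard epicyclic speed equation. This sketch is generic planetary-gear arithmetic with made-up tooth counts, not a claim about how the D-Drive is actually arranged:

```python
def carrier_rpm(sun_rpm, ring_rpm, z_sun, z_ring):
    """Speed of the planet carrier in a simple epicyclic set:
    z_sun*w_sun + z_ring*w_ring = (z_sun + z_ring)*w_carrier."""
    return (z_sun * sun_rpm + z_ring * ring_rpm) / (z_sun + z_ring)

Z_SUN, Z_RING = 30, 90  # made-up tooth counts

# Engine on the sun gear at 3000 RPM; a control motor on the ring.
# Spin the ring backwards at just the right speed and the carrier
# (the output) sits at exactly zero - a true driven neutral.
neutral_ring = -Z_SUN / Z_RING * 3000  # -1000 RPM
assert carrier_rpm(3000, neutral_ring, Z_SUN, Z_RING) == 0

# Nudge the control speed either side of that and the output moves
# smoothly forward, or into reverse.
forward = carrier_rpm(3000, -900, Z_SUN, Z_RING)   # small positive
reverse = carrier_rpm(3000, -1100, Z_SUN, Z_RING)  # small negative
```

The input never stops; the output speed is just a weighted sum of two shaft speeds, and that sum can be tuned through zero.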

Because it can gear down to infinity-to-one, this does indeed mean that this transmission doesn't need a clutch, which does indeed reduce complexity. Whether a real-world version of the D-Drive would be too big or too heavy or inadequate in some other more complex way for real-world duty, though, I don't know. But there's nothing crackpot-y about the basic idea.

As the video makes clear, the big deal here is making an IVT - actually, a mere CVT, that still needed a clutch, would do - that uses standard gearbox-y sorts of components, or can in some other way handle lots of power and torque without being unmanageably big, expensive and/or quick to wear out.

Normal CVTs have been available in low-torque machinery like motor-scooters for some time, and are now showing up in some mainstream, full-sized cars as well. But they're still a fair distance from ideal.

It's easy to make a CVT, you see. Here's one made out of Lego. It's hard to make a CVT that can handle lots of power. And yes, the fact that most CVTs contain some sort of friction-drive device is a big part of the reason for this.

Note, however, that there's a big difference between dynamic-friction CVTs like this one or the Lego one, in which friction between moving parts transfers power, and static-friction CVTs like this one, in which friction locks components together (as in a clutch!), and they don't wear against each other.

But even here, real-world elements muddy the water and make it hard for someone who doesn't actually work at the engineering coalface to tell whether they're looking at something genuinely new and useful, or something that's not new at all, and/or won't work. Here, for instance, is the NuVinci transmission, a friction-based CVT that spreads the friction stress between numerous relatively lightly-clamped spheres - it's related to the "ball differential" with which R/C car racers are familiar. The NuVinci's makers claim it's useful for high-power, high-torque applications. And maybe they're right. I don't know.

For an excellent example of the ugliness that can happen when somewhat specialised knowledge is repurposed by people who, at best, don't know what they're talking about, look at this particular piece of "water-powered car" nonsense, where the well-known-to-jewelers electric oxyhydrogen torch is claimed to be some sort of incredible over-unity breakthrough. This sort of thing happens all the time - it's just, usually, not quite such a blatant scam.

As the Gizmag article mentions, many commercial CVTs are also deliberately hobbled by car manufacturers. They force the transmission to stick to only a few distinct ratios, and also to want to creep forward when at rest, just like a normal automatic transmission. This isn't a limitation of existing CVT technology, though; it's just deliberately bad implementations of it.

(The manufacturers do this so that people who're used to normal autos won't be freaked out by a CVT. Those of us who'd like the superior technology we pay for to be allowed to actually be superior just throw up our hands, and cross those cars off the worth-buying list.)

I think one trap for the D-Drive could be the second motor that handles the ratio-changing - that might need to spin really, really fast in certain circumstances.

There's also the fact that this is only really an infinitely-variable transmission at one end of the ratio scale. The D-Drive can gear down an infinite amount, and right on through zero to negative (reverse) ratios. But unless I'm missing something, I don't think it can gear up at all. So the output shaft can't ever turn faster than the input shaft. This is a problem if you want to do low-power flat-highway cruising, when the engine's turning quite slowly but the wheels are turning very fast.

Normal cars have significant gear reduction in the differential, though - the "final drive ratio". Perhaps if you make the diff a 1:1 device, which shouldn't make it that much bigger, the D-Drive's output-ratio limitation won't matter.
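A back-of-the-envelope check of that, with numbers I've assumed for illustration:

```python
import math

speed_kmh = 110          # highway cruise
tyre_diameter_m = 0.65   # typical-ish road tyre
engine_rpm = 2000        # relaxed cruising engine speed

wheel_rpm = (speed_kmh / 3.6) / (math.pi * tyre_diameter_m) * 60  # ~900

# With a conventional ~3.5:1 final drive and a transmission that tops
# out at 1:1, the engine would have to spin 3.5x wheel speed...
engine_needed_conventional = wheel_rpm * 3.5  # ~3100 RPM - too fast

# ...but with a 1:1 diff, cruising wheel speed stays below engine
# speed, so a gear-down-only transmission can still manage it.
ok_with_1to1_diff = wheel_rpm <= engine_rpm
```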

The reason why I'm saying "might" and "perhaps" so often is that I, like the New Inventors judges, am not actually an expert on the very large number of mechanisms that the human race has invented over the centuries. The simplicity of the D-Drive makes me particularly suspicious. The D-Drive's mode of operation may be a little difficult for people who don't work with mechanisms all day to intuitively grasp, but there aren't many components in there, and none of them are under 100 years old. Actually, that's probably a considerable understatement; I'm not sure when epicyclic gearing became common knowledge among cunning artificers, but I can't help but suspect that a master clockmaker in 1650 wouldn't find any of the D-Drive's components surprising.

Sometimes someone really does invent some quite simple mechanical device, like the D-Drive, that nobody thought of before. But overwhelmingly more often, modern inventors just accidentally re-invent something that was old when James Watt used it.

To get an idea of the diversity of mechanical movements and mechanisms, I suggest you check out one of several long-out-of-copyright books full of the darn things. I think Henry T Brown's 507 Mechanical Movements, Mechanisms and Devices is the most straightforward introduction; it's a slim volume available for free from archive.org here.

(If you'd like a paper edition, which I assure you makes excellent toilet reading, you can get the one I have for eight US bucks from Amazon. Here's a version of it for four dollars.)

And then there's Gardner Dexter Hiscox's Mechanical movements, powers, devices, and appliances, whose full title would take a couple more paragraphs, which is also available for free.

Both of those books carry publication dates in the early twentieth century, but many of the mechanisms in them were already very, very old. Like, "older than metalworking" old. But several of them are still, today, unknown to practically everybody who's not able to give an impromptu lecture about the complementary merits of the cycloidal and Harmonic drives.

(You may, by the way, notice rather a lot of mechanisms in those old books that do the work of a crank. That's because one James Pickard patented the crank in 1780 - plus ça change. This forced James Watt, and many other early-Age-Of-Steam engineers, to find variably practical Heath-Robinson alternatives to that most elegant of mechanisms to get the power of their pistons to bloody turn something. Watt's colleague William Murdoch came up with a kind of basic planetary gearing to replace the crank. Planetary gears have, in the intervening 230-odd years, found countless applications - including the D-Drive!)

Getting back to Mr Durnin and The New Inventors, they both currently allege that the D-Drive is a "completely new method of utilising the forces generated in a gearbox". According to this Metafilter commenter and this patent application, that may not actually be the case, since 18 of the 19 formal Claims made in the application appear to have been turned down. But, again, I could be getting this wrong, because somewhere behind the impenetrable thicket of legalese I suspect the "Written Opinion" may be saying that the final Claim actually is patentable as a separate worthwhile thing. (See also this forum thread.)

This all has me thinking, again, about the repeatedly-demonstrated gullibility of The New Inventors. When I can bring myself to watch the show, I keep thinking - OK, actually sometimes shouting - about how I'd spoil the party by asking at least one out of every four inventors "would you be willing to make a small wager that your device is not fundamentally worthless, or a duplicate of something that's been in production for years?"

(Sometimes, I'd just say "Have you always dreamed of being a rip-off artist, or is it a recent career development?")

The New Inventors seem to not have much of a peer-review system to keep the show free of crackpots, scammers and ignorant inventors who're unaware that their baby was independently invented in 1775. Or maybe there's just a shortage of interesting inventions, like unto Atomic magazine's shortage of interesting letters, so they let even the dodgy ones onto the show as long as they look impressive.

Perhaps the people on the judging panel just studiously avoid saying anything that might attract legal action from an inventor outraged that someone dared to point out that his magic spark plugs strongly resemble 87 previous magic spark plugs out of which the magic appeared to leak rather quickly.

Personally, I suspect that some insight into the newness or otherwise of the D-Drive may lurk in the various kinds of differential steering used in tanks. (Many of those have also been implemented, needless to say, in Lego.) And don't even ask about differential analysers.

It doesn't even take a lot of searching to find other IVTs. Here's one that, like the D-Drive, has no friction (or hydraulic) components. Its highest input-to-output gear ratio is quoted as "five to one", which is weirdly low; perhaps it's meant to be the other way around.

I hope, I really do hope, that the D-Drive turns out to be a proper new and useful device. We can always use another one of those.

But I remain very unconvinced that something this simple, aiming to do this straightforward a task, really is useful, let alone new.

UPDATE: As mentioned in the comments, Gizmag have a new post about this.

To summarise: The D-Drive does not remove all friction components from the drivetrain, because it can only ever be a part of that drivetrain, and needs supporting stuff that'll probably need friction components. And yes, it would need a motor just as powerful as the "main" one to drive the control shaft.

And Steve Durnin is apparently proud of independently coming up with a system similar to Toyota's Hybrid Synergy Drive "Power Split Device". I must be missing something there, because if that's the case then the D-Drive probably isn't patentable, and probably wouldn't even be particularly marketable.