"Sucrosa. It's a pill."

A reader writes:

One of your recent posts got me thinking.

Everyone (or anyone who's ever had occasion to read any sort of scientific study) knows about the placebo effect. I'm curious as to whether the magnitude of said effect has ever been studied, in order to determine the extent to which the placebo effect is an actual effect, or just the natural history of a disease (or whatever).

I'm thinking that a randomised, NON BLINDED study might be necessary.

Take a bunch of people with the same usually-harmless disease, say a simple cold, or something like Bell's palsy. Tell them that there's a trial of a new wonder drug to treat their condition, and would they enrol in said study. Tell some people that they are getting the 'drug', and tell the others they are getting placebo. Give everybody placebo.

If there's a significant difference between the two groups, then that would be an actual placebo effect (ie thinking you're going to get better makes you get better), as opposed to things just getting better on their own.

Has something like this been done? Would there be any problem with it, other than the ethics of telling people they're in one study but actually studying something entirely different?

Ben

Yes, there have been some studies of this sort. Placebo hasn't achieved much.

It's popularly imagined that placebos can do all sorts of amazing things, just as adrenaline makes a tiny woman able to lift a crashed car off her baby, and acupuncture can be used for surgical anaesthesia.

None of these things are actually true.

The reason why the placebo-controlled trial is such a central tool of medical research is that placebos don't do much of anything. If placebo treatment really was effective for all sorts of things, then, one, doctors could save a lot of time and money by just giving patients placebos all the time, and, two, placebo response could be a confounding factor on the non-placebo side of placebo-controlled trials. There's nothing stopping someone from having a placebo response to a real treatment, on top of whatever the treatment itself does.

The reason why trials are placebo-controlled, rather, is so that they can be blinded properly - preventing at least the patients, and preferably also the doctors, from being able to tell who's getting the medicine and who's getting the placebo. Unblinded tests are terribly susceptible to all sorts of biases, and to a number of practical problems as well - like, for instance, all of the subjects who discover they're not getting any medicine not bothering to turn up next week.

The placebo effect, insofar as actual empirical science has been able to quantify it, is a delusion. That's "delusion" in the technical psychological sense - a counterfactual belief. If you're given a placebo as a treatment for pain, or anxiety, or depression, and it works, then what you have is a delusion that your illness is not as severe. Which, for all practical purposes, is the same as "real" medicine for conditions like these, that're all about what you perceive and how you think, rather than empirically measurable phenomena.

This is not the case for most illnesses, though. If you're given a placebo treatment for diabetes, or cancer, or yellow fever, you may if you're particularly amenable to positive delusions sincerely and unshakably believe you're getting better. But you won't be. Just as some dangerously thin anorexic people can literally see a fat person when they look in the mirror, some people undergoing worthless treatment for, say, cancer, can literally feel the lump getting smaller. Until they die.

(The flip-side of this is the not-terribly-well-documented situation in which someone is given a "nocebo", something inert which they believe to be poison or black magic or what-have-you, and then develop real symptoms or even die. There's no very persuasive evidence that people who believe themselves poisoned or cursed in one way or another actually can "worry themselves to death", but it's uncontroversial that someone who sincerely believes themselves to be in a nonexistent deadly situation can worry themselves into a state requiring serious real medical treatment. Note that it doesn't count if the patient just has a fatal car accident while driving frantically to the hospital after, say, being bitten by a non-poisonous spider.)

To really tell how effective placebos are, you need to do a three-pronged study, with one group getting a treatment, one group getting a placebo, and a third group getting nothing at all, if you can persuade that last group to stick around. (Or you can do a two-pronged study with placebo and no-treatment, if you can get such a thing past your Ethics Committee.)

When people do this, testing placebo against nothing at all, there tends to be little difference, and no objectively measurable difference at all.
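To see why the no-treatment arm matters, consider a toy simulation - purely illustrative, with an invented 60% natural recovery rate. If a condition tends to clear up by itself, a placebo group will show plenty of "recovery" without any placebo effect at all, and only a no-treatment arm reveals that:

```python
import random

random.seed(0)

def simulate_group(n, recovery_prob=0.6):
    # Each patient has the same chance of recovering on their own
    # within the study period, regardless of what they're given.
    return sum(random.random() < recovery_prob for _ in range(n)) / n

placebo_arm = simulate_group(500)
no_treatment_arm = simulate_group(500)

print(f"Placebo arm recovered:      {placebo_arm:.0%}")
print(f"No-treatment arm recovered: {no_treatment_arm:.0%}")
# Both hover around 60%; any gap between them is just sampling
# noise, which is why you need decent group sizes to spot a
# genuine effect.
```

Without the no-treatment group, that sixty-ish per cent in the placebo arm looks, misleadingly, like the sugar pills doing something.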

There are lots and lots of real individual clinical observations (as opposed to friend-of-a-friend stories) of placebos creating real physical changes in real diseases like irritable bowel syndrome, asthma, ulcers and quite a lot of other conditions. These changes are hard to pin down, though; they exhibit the same deadly weakness seen in claims of paranormal powers, in which the harder you look to see if the effect is real, the smaller it becomes.

Useful placebo effects are, at best, highly variable between patients. And, again, you can't really tell what's going on without running a pretty big study of one kind or another, testing placebo versus no treatment at all. This is ethically difficult, and probably not a great use of researchers' time, compared with trying to develop non-placebo treatments that work whether the patient believes in them or not.

I could continue to ramble on here, but I've really got nothing much more to say about placebos than actual-medical-doctor Harriet Hall says in this excellent article.

(The title of this post is from this Onion article.)


Psycho Science is a regular feature here. Ask me your science questions, and I'll answer them. Probably.

And then commenters will, I hope, correct at least the most obvious flaws in my answer.

Sproing

A reader writes:

If you take a spring - a metal one I suppose (I know nothing about springs other than they're fun to play with) - and hang it by one end, with no weights or anything attached to the other end, and just leave it hanging there, will it eventually (like really eventually) become completely straight? Or what will happen? Does it matter what kind of spring it is? Will its own weight straighten it out, or is there something about its structure that would prevent that from happening? And the follow-up, if the answer is yes, is, let's say it's a Slinky; about how long would it take?

I wish I could say I had a beer riding on this, but the truth is I'm just geeky and thought you might know.

Michael

No, a hanging spiral spring won't straighten.

The key concepts here are elasticity and plasticity. The whole idea of a spring is that it's elastic - you can stretch and/or compress it, and when you let go, it returns to its original shape. If the force applied to an elastic object exceeds the limits of its elasticity then the object will be permanently deformed (or just break), but you'd need a pretty darn long, but skinny, spring for that to happen just from the spring's own weight.
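To put rough numbers on it: a uniform spring hanging from one end stretches as though half its own weight hung from the free end. A minimal sketch, with made-up but plausible numbers for a small steel spring (these are assumptions for illustration, not measurements):

```python
# How much does a spring stretch under nothing but its own weight,
# and does that come anywhere near its elastic limit?
g = 9.81          # m/s^2
mass = 0.05       # kg - a small 50 g spring (assumed)
k = 200.0         # N/m - assumed spring constant

# A uniform spring hanging from one end stretches as if half its
# weight hung from the free end: x = m*g / (2*k)
stretch = mass * g / (2 * k)
print(f"Self-weight stretch: {stretch*1000:.1f} mm")

# Compare with an assumed elastic limit of 50 mm of extension;
# the self-weight stretch is tiny, so the spring springs right back.
elastic_limit = 0.050
print("Permanent set likely?", stretch > elastic_limit)
```

A millimetre or so of stretch against tens of millimetres of elastic range: that's why an ordinary spring can dangle forever without straightening.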

Slinkies are an extreme case, here, because they're a quite unusual kind of spring, with peculiar dimensions compared with most spiral springs. Dangling a brand new Slinky may actually give it a slight permanent stretch, but it clearly doesn't stretch it very much, and you can leave it hanging as long as you like without getting any more stretch than happens in the first few minutes.

It's actually normal for a new spring to distort somewhat when put to use. This is called "taking a set", and has to be accounted for in the design of devices that use springs, from the huge ones in heavy vehicle suspensions to the incredibly delicate ones in mechanical wristwatches. It's unusual for a spring to take a set just from its own weight, though.

If you made a spring out of, say, tin/lead electronics solder, then it wouldn't need to be very long in order to straighten out under its own weight. It'd probably continue to straighten for some time, too - meaning hours, though, not years. Tin/lead alloy is of course a terrible material for springs, since it's highly plastic and hardly elastic at all.

Apropos of this, there's a really neat guide to making your own springs here. Home handypersons usually regard spring-making as a black art and just end up with a parts box full of springs cannibalised from other items, but you really can make them yourself without being a master metalworker.

(Oh, and I know I sound like a broken record, but J.E. Gordon's "The New Science of Strong Materials, or Why You Don't Fall through the Floor" and "Structures, Or Why Things Don't Fall Down" both have a lot to say about the springiness of actual springs and of many other objects, and about the foundational concepts of stress and strain.)


Psycho Science is a regular feature here. Ask me your science questions, and I'll answer them. Probably.

And then commenters will, I hope, correct at least the most obvious flaws in my answer.

Pretty, but smelly

A reader writes:

Why is "cloudy ammonia" cloudy?

I've used household ammonia for decades now, for cleaning windows and floors and so on, and never really questioned why it has that swirling cloudy look. But the other day I realised that this stuff is just a solution of ammonia (NH3) in water, and I don't think there's any reason why that shouldn't be clear, like a solution of many other simple chemicals in water.

Where do the clouds come from?

Jennifer

Some readers may be confused at this point, because different parts of the world have different kinds of supermarket ammonia-water. I think cloudy ammonia is the standard kind in the Commonwealth, with the non-cloudy version being normal in the USA, but don't quote me on that.

For those who haven't seen the cloudy kind...

Cloudy ammonia

...here some is. It's really quite pretty; you can see a similar pearlescent swirling effect in various shampoos that're trying to persuade you they're not just detergent.

Jennifer's right, though: There's no reason why ammonia in water should look like this. Bubble ammonia into water and you get a solution of ammonium hydroxide, which is clear.

The reason why cloudy ammonia is cloudy is simple enough: The manufacturers put soap in it. I've also read that there can be some oil or other instead of, or in addition to, the soap. In any case, the additive makes the substance look interesting, and may also make it a better cleaner. Probably not a better window cleaner, though, since straight ammonia-plus-water will evaporate to nothing and not leave marks, while ammonia-plus-water-plus-soap can leave streaks.

The ammonia in the above picture had been sitting undisturbed for some time, so it wasn't actually very cloudy at first...

Bottom of ammonia bottle

...because a lot of its "clouds" had settled out. I took this photo first, then shook the bottle up and took the other one.

While we're on this pungent subject, I have recently accidentally created an ammonia-water generator.

We have, you see, four indoor cats. They use cat litter fast enough to make "flushable" litter a pretty quick ticket to a drain blockage, if you're dumb enough to actually flush it, which, at one point, I was.

Now, next to the phalanx of litter trays, there's a flip-top rubbish bin that I shovel the nasty stuff into for later, non-plumbing disposal.

Urine, left to sit, generates ammonia from the breakdown of urea.

(If your surname is "Fuller", then someone in your ancestry was probably rather familiar with this phenomenon.)

All that ammoniacal goodness is contained very effectively by the bin's fitted lid. It only comes out to say hello to my sinuses when I've got the bin open. It's less gross than you'd think, too; it doesn't really smell like wee and poo at all, because the wall of harsh ammonia-smell covers everything else.

When the litter stays there a bit longer than usual, it warms up a bit from its own slow decomposition. When the ambient temperature was about 20°C, I measured the temperature in the middle of the litter at around 26°C.

When this has happened and I open the bin, the ammonia-smell is really strong, and the underside of the lid is covered with droplets of quite clear, clean-looking liquid. Which is ammonium hydroxide; a solution of ammonia in water. It's probably got a high enough pH to be pretty much sterile, too; if I were more dedicated to perversity, I could bottle it and use it as a cleaner.

Oh, and while we're on the subject of well-aged urine, if you're unhinged enough to boil down aged urine, you can isolate phosphorus.

Or possibly not.


Psycho Science is a regular feature here. Ask me your science questions, and I'll answer them. Probably.

And then commenters will, I hope, correct at least the most obvious flaws in my answer.

Entrée: Two ice cubes. Main Course: Oxygen.

A reader writes:

Is there such a thing as food with less than no calories?

You're supposed to be able to eat lettuce or celery or something, and the energy your body uses to digest it is more than you get out of it.

(Well, I don't really know if you're "supposed" to be able to or not, but I've certainly heard people say this.)

What about eating ice? Obviously you get no nutrition at all from that, but I've never seen a Get Thin By Eating Snow diet book, so I figure it's not practical.

Nat

There are foodstuffs that have very little "food energy". They're pretty much what you'd expect, and do include a lot of green vegetables.

But digestion is good at sucking energy out of food. Your body does better than break even, even when you're eating celery. It's not easy to gain weight eating nothing but undressed salads and vitamin supplements, but it's possible.

The eating-ice thing, in contrast, sounds like a great idea. But only if you make a particular mistake, having to do with the term "calorie".

There are 540 calories in a Big Mac. But the enthalpy of fusion of water is about 80 calories per gram. And then you need another calorie to heat one gram of water by 1°C; if your ice starts out a few degrees below zero and ends up at body temperature, that adds up. And the only place this energy can come from is your body's own reserves.

So you can more than offset the entire nutritional value of your hamburger by crunching up one lousy ice cube! Right?

Sorry, no. Because the "physics" calorie, the one being used on the melting-ice side of the equation, is one thousandth of the "dietary" calorie, on the food side of the equation.

(This is noticeable when the dietary calorie is clearly indicated, as "kcal", for instance. The modern metric alternative to the two kinds of calorie is the joule and kilojoule; fortunately, there's no colloquial tendency to call both of these units "joules".)

A hundred grams of celery is about 14 kcal. To offset only that much energy value, you'd need to eat more than a hundred grams of ice. A whole tray of ice cubes would probably do it; the ice-cube trays in my fridge hold about 160 grams.

So as few as 35 trays of ice cubes might compensate for a Big Mac!
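If you want to check that arithmetic, here's the sum, using the round numbers above - 80 small calories per gram to melt the ice, and about 40 more to warm the meltwater to body temperature:

```python
big_mac_kcal = 540
tray_grams = 160          # one ice-cube tray, as in the text

fusion = 80               # small calories per gram, to melt the ice
warming = 40              # small calories per gram, to warm meltwater to 37 C

for label, cal_per_gram in [("fusion only", fusion),
                            ("fusion + warming", fusion + warming)]:
    kcal_per_tray = tray_grams * cal_per_gram / 1000
    print(f"{label}: about {big_mac_kcal / kcal_per_tray:.0f} trays per Big Mac")
```

The two figures - about 42 trays counting only the melting, about 28 if you credit the warming of the meltwater too - bracket the 35-tray estimate.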

Presuming, of course, that you actually can Freeze Yourself Thin at all.

The human body runs warm as a matter of course. If the ambient temperature is below body temperature, which it is for most humans most of the time, then the body's leaking heat all the time anyway, and eating cold stuff may change where the heat goes, more than it changes how much heat is lost.

This page at livestrong.com gives a ballpark figure of only one dietary calorie burned per ounce of ice eaten. 80 small calories of enthalpy of fusion per gram of ice, plus 40 small calories of heat to take the water from a bit below freezing to body temperature, times about 28 grams to the ounce, gives 3360 small calories or 3.36 dietary ones of raw heating power. If that only adds up to one extra dietary calorie burned, the tooth wear and ice-cream headaches don't seem like much of a trade-off. Especially since you don't actually get any ice-cream.

(If you really apply yourself to slimming via low temperatures, I would not put it past your body to decide that all this shivering indicates you're now living in a cold climate, so more incoming food should be directed towards creating a nice insulating layer of fat.)

Your natural basal metabolic rate is probably closer to 2000 kcal per day than it is to 1000. Adding a couple of dozen kilocalories to that by ice-eating may actually hurt more than doing an energy-equivalent amount of exercise.

(Exercise is not really a great way of burning calories. Run ten kilometres, burn 700 kcal. As a general rule, if you're not some sort of athlete or heavy manual labourer, exercise will make you fitter and stronger, but not thinner.)

Needless to say, Wikipedia has a page about negative-calorie food. And a funnier one about the "Negative Calorie Illusion".


Psycho Science is a regular feature here. Ask me your science questions, and I'll answer them. Probably.

And then commenters will, I hope, correct at least the most obvious flaws in my answer.

Fire from, or at least near, ice

A reader writes:

Got a science question of sorts.

WTF is actually going on here?

Chris

The ice cube is not glowing. The induction coil is.

Induction heaters are interesting things. You make a coil out of a sturdy conductor - usually copper bar stock - and you put a whole lot of current through it at, usually, a pretty high AC frequency. The alternating current then induces current in any conductive object you put inside the coil, and the resistance of the object turns the current into heat, which heats the object. It's the same principle that heats up a wire, or an actual heating element, when you put current through it. The source of the current in an induction heater is just less obvious, and the electricity in the heated object isn't going round and round in a circuit; it's just jiggling eddy currents.

(Magnetic braking relies on induced eddy currents as well, and also heats up the object the eddy currents are being induced in.)
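A detail worth knowing if you play with this stuff: at high frequencies the induced eddy currents crowd into a thin "skin" at the surface of the workpiece, which is part of why induction heaters can make the outside of an object glow before the inside knows anything's happening. The standard skin-depth formula is delta = sqrt(rho / (pi * f * mu)); here's a sketch with textbook-ballpark material numbers (assumptions, not measurements):

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def skin_depth(resistivity, mu_r, freq_hz):
    """Classical skin depth: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu_r * MU0))

freq = 30e3  # 30 kHz - a plausible induction-heater frequency (assumed)

# Resistivity (ohm-metres) and relative permeability are ballparks.
for name, rho, mu_r in [("copper", 1.7e-8, 1),
                        ("mild steel", 1.5e-7, 200)]:
    d = skin_depth(rho, mu_r, freq)
    print(f"{name}: skin depth ~{d*1000:.2f} mm at {freq/1e3:.0f} kHz")
```

Ferromagnetic steel ends up with a far thinner skin than copper, which concentrates the heating very effectively near the surface.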

The induction coil was actually the first, and worst, kind of transformer. It was the worst because the purpose of a transformer is to turn one voltage of AC into another (or keep the same voltage but isolate two circuits). The more energy a transformer wastes as heat, the less useful it is. Modern transformers have laminated cores made from "electrical steel", specifically to minimise unproductive transformer-heating eddy currents.

A powerful enough induction heater can do all sorts of neat tricks, like heat-treating part of a piece of metal - all the way to glowing hot - so fast that the heat won't have managed to conduct through the metal to other parts of the object before the bit you're heating gets to the right temperature and can be quenched. You can also use an induction heater to melt metal in a crucible without a flame.

Or even to levitate a light enough metal, while it melts!

Induction cooktops work on a similar principle, though at much lower frequencies. They'll heat a metal pot, but not glass cookware - and, as commenters have pointed out, not just any metal pot, either. In practice, only ferromagnetic cookware - steel, cast iron, or pans with a steel plate in the base - couples strongly enough to a cooktop's field to heat up usefully, so plain aluminium and copper pots don't work.

Ice is very slightly conductive (as I have proved to my own satisfaction), but can generally be considered an insulator, and won't be significantly warmed by an induction heater. So the induction coil in the ice-cube video is essentially being run "empty", and just rapidly heating itself up, and in due course glowing, in a simple resistive way. That ice cube will actually melt pretty quickly, because of radiant heat and air convection from the coil. But it'll last as long as you'd expect it to if it were sitting next to a similarly glowing plain resistive heating element.

(The glow probably isn't really as impressive as it looks, either, because digital cameras of all sorts are sensitive to infrared light. Most digital image sensors have an IR-blocking filter on them to minimise this effect, but the filters aren't completely effective, and so very hot things like this coil or the aftermath of certain pyrotechnic entertainments look hotter than they are. The human eye may see some glowing metal as orange, but most digital cameras will think it's white.)


Psycho Science is a regular feature here. Ask me your science questions, and I'll answer them. Probably.

And then commenters will, I hope, correct at least the most obvious flaws in my answer.

Marble-ous photography

2012 Blue Marble picture

A reader writes:

I love NASA's new "Blue Marble" images. I was a kid in the early 70s when they took the first Blue Marble picture, but now the young whippersnappers all have their Google Earths and such and can all pretend to be looking out the window of a UFO whenever they want to. The magic hasn't died for me though, and now there are newer, better, brighter Blue Marbles! Three of them, from different directions!

Except in the "Western Hemisphere" one (which I found on Astronomy Picture of the Day), the USA is HUGE! It looks almost as big as the whole of Europe, Russia and China. The one that shows Australia and a lot of clouds makes it look as if Australia's the only land mass in one whole hemisphere. Africa's way too big in the "Eastern Hemisphere" one, too.

You've written about map projections before; is this something related to that? But that can't be it, there's no "projection" at all when the map of the globe is still a globe, right?

Ulricke

1972 Blue Marble picture

The original 1972 Blue Marble, above (the Wikipedia article has information about the 2012 version too), is what it looks like. It's a single photo, taken from a spacecraft - Apollo 17, to be exact. The famous photo was taken from a distance of about 45,000 kilometres, which is a bit higher than geostationary-orbit-distance, but well under a tenth of the distance to the moon.

(There are a lot more Apollo 17 images; they're archived here.)

The distance is important, because the earth is about 12,750 kilometres in diameter.

If you're looking at the earth from an altitude of 45,000 kilometres, you're only three and a half earth-diameters away from the surface of the planet. (If you're 45,000km from the centre of the planet, the nearest point on the planet is less than 39,000km, about three diameters, away. Keep this in mind when reading about orbits; it's not always clear whether a given distance indicates how far it is from the centre of one object to the centre of another, or the distance between objects' surfaces, or even the distance between an orbiting object and the barycentre of the orbital system.)

Shrink everything until the earth is the size of a tennis ball, and the original Blue Marble viewpoint would only be about 23.6cm (9.3 inches) away from the surface of the ball.

When you're looking at a sphere, you can never see a whole half of it at once. If you're very close to the sphere - an ant on the tennis ball - you can see a quite small circle around you, before the curvature of the sphere cuts off your view. (Actually, the fluff on the tennis ball would block your ant's-eye view much closer, but let's presume someone has shaved these balls.)

Go a bit higher up from the surface, and you can see a bigger circle. Higher, bigger, higher, bigger; eventually you're so far away that you can see 99.9999% of one half of the sphere, but you'll never quite make it to seeing a whole half.

(See also, the previously discussed optical geometry of eclipses. Shadows cast by spherical bodies are conical, and more or less blurred around the edges, depending on the size of the light source.)

If you're nine inches away from a tennis ball, you can see pretty close to a half of it, and if you're 45,000 kilometres from the earth, you can see pretty close to a half of the planet. Which is why, in the original Blue Marble, Africa looks pretty darn big, but not disproportionately so.

2012 Blue Marble picture

Now let's look at the 2012 Blue Marble.

(Note that there's also, confusingly, an unrelated other NASA thing called "Blue Marble Next Generation"; that's a series of composite pictures of the earth covering every month of 2004.)

The 2012 Blue Marble was stitched together from pictures taken over four orbits by the satellite Suomi NPP, previously known as "National Polar-orbiting Operational Environmental Satellite System Preparatory Project", which was just begging for that Daily Show acronym joke.

Suomi NPP is in a sun-synchronous orbit (which, just to keep things confusing, is a type of orbit that can only exist around a planet that is not quite spherical...), only 824 kilometres up. So Suomi NPP can't see very much of the globe at any one time. But if you're compositing pictures of a planet together, you can use your composite to render an image apparently taken from any altitude you like. Provided you've got enough patchwork photos to cover the whole planet, you just have to warp and stitch them until you've covered the whole sphere. Then, you can render that sphere however you like.

For the 2012 Blue Marble images, NASA chose to render the sphere from a much closer viewpoint than the 1972 image was taken from - they say the altitude is 7,918 miles (about 12,743 kilometres). That distance is, no doubt not accidentally, about the diameter of the planet - so the virtual "camera" for the new pictures is only one tennis-ball diameter away from the surface of the tennis ball it's "photographing".

As a result, Blue Marble 2012 shows rather less than half of the sphere. But this is not immediately apparent. At first glance, it just looks as if whatever you can see is bigger than it should be.
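You can put numbers on how much of the globe each viewpoint sees. From a distance d from the centre of a sphere of radius R, the visible cap covers a fraction (1 - R/d)/2 of the total surface. A quick sketch, treating the 1972 photo's 45,000km as the distance from the centre for simplicity (the text above notes the centre-versus-surface ambiguity):

```python
# Fraction of a sphere's surface visible from a given distance.
EARTH_RADIUS_KM = 6371

def visible_fraction(dist_from_centre_km, r=EARTH_RADIUS_KM):
    # The visible cap, as a fraction of the sphere's total surface;
    # it approaches 0.5 (a full hemisphere) only at infinite distance.
    return (1 - r / dist_from_centre_km) / 2

for label, d in [("Apollo 17, 1972", 45000),
                 ("Blue Marble 2012", 12743 + EARTH_RADIUS_KM)]:
    f = visible_fraction(d)
    print(f"{label}: {f:.1%} of the globe ({2*f:.0%} of a full hemisphere)")
```

So the 1972 photo captures about 86% of a full hemisphere, while the 2012 render gets only about two-thirds of one - which is why everything near the middle of the new images looks so big.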

You can experiment with this in Google Earth or any other virtual globe, or of course with one of those actual physical globes that people used to use in the 1950s or during the Assyrian Empire or something before we had computers.

Anyway, look at the real or on-screen globe from a long distance and a given feature, like for instance Australia or North America, looks as if it takes up as much space as it ought to. Zoom in and whatever's in the middle of your view takes up proportionally more of the face of the planet, as things on the edges creep away over the horizon.

So the close viewpoint of the 2012 Blue Marbles doesn't give them away as "synthetic", stitched-together images. Something else does, though.

The featured image of the new Blue Marbles - the one that showed up on APOD, and countless other sites - is the one that shows the USA. I think NASA may not have chosen that one just for patriotic reasons, though. Rather, I think it may be because the America image is the only one of the three that doesn't have noticeable parallel pale stripes on the ocean.

The stripes - most visible in the Eastern Hemisphere image of Africa (4000-pixel-square version) - are from sunlight reflecting off the water, which the Suomi NPP satellite saw on each of its orbits, and which therefore show up multiple times in the composited image. A real observer sitting in the location of the virtual camera of the new Blue Marble would only see sun-reflection on one spot on the earth, if the appropriate spot was on the water.

The 1972 Blue Marble photo was taken with the sun pretty much behind the spacecraft, so it has this one reflective highlight in the middle of the image, off the coast of Mozambique.


Psycho Science is a regular feature here. Ask me your science questions, and I'll answer them. Probably.

And then commenters will, I hope, correct at least the most obvious flaws in my answer.

Blinky bulbs

A reader writes:

What are those little LED-like, but flickery, orange lights, seen in nightlights, electric-blanket power lights, etc? I've seen them in antique radios and as indicator lights in other ancient gear, so I presume they're not actually LEDs.

Minnie

They're neon bulbs. One giveaway is the colour; a plain neon tube, just low-pressure neon in a glass envelope, glows naturally with that orange-red colour when you put enough volts across it.

("Neon lights" that aren't orange-red may still contain neon, but have a phosphor coating on the inside of the glass that turns the light another colour. White fluorescent lights are all actually mercury-vapour tubes with a phosphor coating. The amount of mercury in even a large fluorescent lamp is very small.)

For a large neon tube, the voltage from end to end has to be up in the kilovolts. But if you make a little teeny neon bulb with electrodes only a few millimetres apart, you only need a bit more than a hundred volts to get it to glow.

This makes teeny neon bulbs a natural fit for indicator-light duty in countries with 115V-ish mains power. You still need to use a current-limiting resistor in series to discourage the lamp from zipping up past the C on this graph and burning up, but that's all you need. In countries with 230V-ish mains, you just need a larger resistor value.
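Sizing that resistor is just Ohm's law applied to the leftover voltage. A sketch with typical-NE-2-ish numbers, which are assumptions for illustration rather than figures from any datasheet - roughly 90V sustained across the lit lamp, and a few tenths of a milliamp through it:

```python
# Picking a series resistor for a little neon indicator lamp.
supply_v = 230.0      # RMS mains voltage
lamp_v = 90.0         # assumed sustaining voltage of the lit lamp
target_ma = 0.5       # assumed desired lamp current, mA

# Ohm's law on the voltage left over after the lamp takes its share:
r_ohms = (supply_v - lamp_v) / (target_ma / 1000)
print(f"On 230 V: ~{r_ohms/1000:.0f} k-ohm series resistor")

# The same lamp on 115 V mains needs a much smaller resistor:
r_115 = (115.0 - lamp_v) / (target_ma / 1000)
print(f"On 115 V: ~{r_115/1000:.0f} k-ohm series resistor")
```

Real indicator assemblies use values in this general range; the exact resistor just trades lamp brightness against lamp life.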

If you run a neon lamp directly from 50 or 60Hz AC mains power like this, the bulb flickers at twice the mains frequency, because the two electrodes light up in turn, but only when the mains waveform is giving the bulb enough voltage to light. (From DC, the lamp won't flicker, but only one electrode will light up.) The older the lamp, the more flickery it will become, until eventually it doesn't light up at all. Little neon lamps ought to last 20,000 hours or more, but many modern ones seem to be of lousier quality.

(Incandescent bulbs don't visibly flicker when run from AC, because a tungsten filament has enough thermal inertia to keep glowing at very close to full brightness even when the mains waveform is crossing the zero-volts mark. A fluorescent tube driven almost-as-directly as a neon bulb from mains power will also flicker, which a lot of people hate. Modern high-frequency electronic ballasts solve this problem, for fluorescent tubes and compact-fluorescent lamps.)


Psycho Science is a regular feature here. Ask me your science questions, and I'll answer them. Probably.

And then commenters will, I hope, correct at least the most obvious flaws in my answer.

Is Pyrex, Pyrex?

A reader writes:

When I was visiting my mother the other day, I dropped her glass casserole baking dish... thing... (I'm not much of a cook), and it broke, and so of course I said I'd get her a new one. The old one was "Pyrex" brand, but she told me I should just buy whatever similar sized glass dish is cheapest, because, and I quote "Pyrex isn't made from Pyrex any more".

The philosophical implications of that statement aside, were Pyrex products made from special glass, and now they're not? All I know about Pyrex is that I've seen that word written on laboratory glassware.

Harry

In the olden days, the "Pyrex" brand, wherever you saw it, meant borosilicate glass. Borosilicate glass doesn't change size much in response to temperature (it has a low "coefficient of thermal expansion"), so if you heat or cool it suddenly, it's unlikely to shatter.

("Pyrex" wasn't actually the first borosilicate glass; Otto Schott invented it, and the Schott company still sells it under the "Duran" brand. But Pyrex became the genericised trademark for borosilicate glass. Lab glassware that's intended to be used on heat is pretty much all borosilicate, under one name or another.)

Ordinary "soda-lime" glass expands and contracts more with temperature. So if, for instance, you suddenly cool a hot plain-glass baking dish by putting it in the sink and turning on the tap, the inside surface of the dish contracts as it cools, the outside surface stays expanded, and the stress between the two encourages the glass to break.
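You can put rough numbers on this with the standard fully-constrained thermal-stress approximation, stress = E * alpha * delta-T. A minimal sketch, with textbook-ballpark material properties (assumed, not measured):

```python
# Ballpark thermal stress from a sudden temperature change.
E_GLASS = 70e9        # Young's modulus, Pa - roughly right for both glasses
DELTA_T = 100         # K - hot dish meets cold tap water

glasses = {
    "soda-lime":    9.0e-6,   # coefficient of thermal expansion, 1/K
    "borosilicate": 3.3e-6,
}

for name, alpha in glasses.items():
    stress_mpa = E_GLASS * alpha * DELTA_T / 1e6
    print(f"{name}: ~{stress_mpa:.0f} MPa of thermal stress")
```

Ordinary glass fails in tension at somewhere around a few tens of MPa, so the quenched soda-lime dish is flirting with disaster while the borosilicate one has a comfortable margin - which is the whole point of the stuff.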

This can also happen when a glass object is first manufactured. If you don't "anneal" a freshly-formed object by cooling it slowly (a special kiln for doing this to glass that's been made somewhere else is called a lehr), it can break spontaneously as it cools, or be left right on the edge of breaking from the slightest shock.

There are numerous tricky ways to make glass objects sturdier. The most common takes advantage of soda-lime glass's thermal expansion and contraction to "temper", or "toughen", the glass, forcing the outside of the object into great compressive stress, which glass tolerates very well.

The simplest way of tempering glass is to rapidly cool the outside of hot, freshly-formed glass, so it solidifies and contracts quickly, and is then pulled into compression when the core of the glass cools later. Now any insult suffered by the object has to overcome the compression built into the outer layers before it can get the glass into tension and start a crack. And if a crack does start, the whole object collapses into zillions of distinctive little lumps of glass with relatively safe, blunt-angled edges, rather than dagger-like shards.
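The arithmetic of why this works is pleasingly simple: a crack needs net tension at the surface, so any applied stress first has to cancel out whatever compression the tempering baked in. A toy illustration, with made-up but plausible stress numbers:

```python
# Toy model of why tempering helps: the surface of tempered glass carries
# built-in compression, and a crack can only start once applied tension has
# cancelled that compression out. Stress figures are illustrative only,
# not from any real product datasheet.

GLASS_TENSILE_STRENGTH_MPA = 50  # rough figure for a decent glass surface

def survives(applied_tension_mpa, surface_precompression_mpa):
    """True if the net surface stress stays below the tensile strength."""
    net = applied_tension_mpa - surface_precompression_mpa
    return net < GLASS_TENSILE_STRENGTH_MPA

print(survives(80, 0))    # annealed glass: 80 MPa of tension -> breaks
print(survives(80, 100))  # tempered, ~100 MPa precompression -> fine
```

The same sum run in reverse is why unevenly-cooled, unannealed glass is so fragile: there the accidental built-in stress is tension, so the glass starts out partway to breaking before you've touched it.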

The forces involved in tempering glass are the same as the forces that make unevenly-cooled, unannealed glassware fragile; they're just tightly marshalled to make the material more durable, in the same way that prestressing "tendons" can make concrete far stronger.

(The most extreme version of the tempering process is Prince Rupert's Drops...

...which you can make at home, while wearing suitable protective clothing, by dripping molten glass into a bucket of water. Internal tensions make the body of each drop amazingly strong, but if you snap the thread-like tail - which is also very strong, but so thin that it can easily be bent or sheared past its limits - the whole drop instantly explodes into tiny particles.)

(Oh, and again, if you'd like to have the above explained much more clearly, try J.E. Gordon's classic "The New Science of Strong Materials, or Why You Don't Fall through the Floor", which is one of my favourite books, along with "Structures, Or Why Things Don't Fall Down".)

A fancier kind of tempered glass is "Corelle", which is laminated tempered glass, but doesn't look or feel much like glass at all. This is partly because it's opaque (though I don't think there's anything about the manufacturing process that says it has to be), and partly because it's so strong that plates and bowls made from the stuff can be very thin and lightweight.

Which brings us back to Pyrex, because the Pyrex and Corelle brands are now both held by World Kitchen, LLC. World Kitchen would really like people to stop saying that Pyrex kitchenware isn't made from borosilicate glass any more, because although this statement is actually correct, it wasn't World Kitchen that changed it. World Kitchen say the change happened "more than 60 years" ago; other sources can't put an exact figure on it, but it seems pretty clear that it's not a recent development.

In any case, what World Kitchen sell today as "Pyrex" bakeware isn't plain soda-lime glass, but "heat strengthened" soda-lime, which presumably means the usual kind of tempered glass. Tempered glass resists breaking from temperature changes pretty well, and resists breaking from mechanical insults very well, so it's a good choice for bakeware, which is bumped by other bakeware much more often than it has to tolerate large temperature shocks.

Well, it's a good choice for bakeware as long as your oven doesn't get hot enough to anneal the glass and relax the tempering away, which it definitely doesn't.

This is despite the additives in soda-lime glass, which are there to make the stuff melt at a reasonable temperature. Silica (quartz, when it's in crystalline form) makes up the bulk of all normal glass compositions, and pure fused silica could do anything ordinary glass does. But quartz's melting point is way up around 1700 degrees Centigrade. That's higher than the melting point of iron, and it makes pure silica unreasonably difficult to use for glassware, unless you're making furnace windows or something.

To make soda-lime glass from scratch you need a furnace that runs hot enough to melt silica - which is why recycling glass is so popular; already-made glass remelts at a much lower temperature than the raw ingredients do. And once the soda and lime are in, the resulting glass has no sharp melting point at all: it softens progressively as it heats, can be annealed at a bit over 500°C, and can be worked at around 1000°C.

Annealing happens significantly below the melting point, but you still need a temperature of more than 500°C to anneal soda-lime glass, even if you're willing to wait for hours, and no household oven goes that high. Actually, I don't think any food oven goes that high. The hottest are probably coal-fired pizza ovens (the great problem of making "authentic" pizza at home is getting the oven hot enough); I think those top out at around the 1000°F/540°C mark, but they usually run rather cooler.

I'm sure there are many companies that make tempered, or toughened, glass kitchenware, and I'm also sure that others make plain soda-lime-glass kitchenware, which may not even be properly annealed, much less tempered. So your mum may be right that Pyrex-brand glassware is nothing special any more - but you also shouldn't buy the cheapest glass casserole dish you can find, unless you've good reason to believe it's made from tempered glass. Which may or may not be clearly, or honestly, indicated on the box.

I think the best way to authoritatively tell the difference is by bopping any dish you're planning to buy with a ball-peen hammer. I leave the formulation of techniques by which one could get away with this as an exercise for the reader.

