Forces of nature

If you want to truly be amazed by the wonders of the universe, the quickest way to do so is to learn about the science behind it.

And pardon the split infinitive in that paragraph, but it’s really not wrong in English. It became a “rule” only after a very pedantic 19th-century grammarian, John Comly, declared it an error, although neither he nor his contemporaries ever called it a “split infinitive.” Unfortunately, he based this on the grammar and structure of Latin, which bears little resemblance to that of English.

That may seem like a digression, but it leads to one of the most famous modern split infinitives, one that still resonates throughout pop culture today: “To boldly go where no one has gone before.” And that brings us gracefully back to science and space.

That’s where we find the answer to the question “Where did we come from?” But what would you say exactly is the ultimate force that wound up directly creating each one of us?

One quick and easy answer is the Big Bang. This is the idea, derived from the observation that everything in the universe seems to be moving away from everything else, that at one time everything must have been in the same place. That is, what became the entire universe was concentrated into a single point that then somehow exploded outward into, well, everything.

But the Big Bang itself did not instantly create stars and planets and galaxies. It was way too energetic for that. So energetic, in fact, that matter couldn’t even form in the immediate aftermath. Instead, everything that existed was an incredibly hot soup of unbound quarks, what physicists call a quark-gluon plasma. Don’t let the words daunt you. The simple version is that elements are made up of atoms, and an atom is the smallest unit of any particular element — an atom of hydrogen, helium, carbon, iron, etc. Once you move down to the subatomic particles that make up the atom, you lose the properties that make the element unique, which come down to the number of protons in its nucleus and the electrons wrapped around it.

Those atoms in turn are made up of electrons that are sort of smeared out in a statistical cloud around a nucleus. The simplest nucleus is a single proton (hydrogen); from there you work your way up through larger collections of protons (positively charged) and an often but not always equal number of neutrons (no charge), with a number of electrons (negatively charged) that may or may not equal the number of protons.

Note that despite what you might have learned in school, an atom does not resemble a mini solar system in any particular way at all, with the electron “planets” neatly orbiting the “star” that is the nucleus. Instead, the electrons live in what are called orbitals and shells, which have a lot more to do with energy levels and probable locations than with the literal placement of discrete dots of energy.

Things get weird on this level, but they get weirder if you go one step down and look inside of the protons and neutrons. These particles themselves are made up of smaller particles that were named quarks by Nobel Prize winner Murray Gell-Mann as a direct homage to James Joyce. The word comes from a line in Joyce’s Finnegans Wake, a book about as weird and wonderful as the world of subatomic science: “Three quarks for Muster Mark!”

The only difference between a proton and a neutron is the configuration of quarks inside. I won’t get into it here except to say that if we call the two kinds of quark U (for “up”) and D (for “down”), a proton has two U’s and one D, while a neutron has two D’s and one U.
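If you assign the standard quark charges (U carries +2/3 of the elementary charge, D carries −1/3), the proton’s +1 and the neutron’s 0 fall straight out of the arithmetic. A quick sketch, using exact fractions so the thirds don’t get lost:

```python
from fractions import Fraction

# Quark electric charges, in units of the elementary charge e
UP = Fraction(2, 3)     # the "U" quark
DOWN = Fraction(-1, 3)  # the "D" quark

def charge(quarks):
    """Total electric charge of a particle built from the given quarks."""
    return sum(quarks, Fraction(0))

proton = charge([UP, UP, DOWN])     # two U's and one D
neutron = charge([DOWN, DOWN, UP])  # two D's and one U

print(proton, neutron)  # 1 0
```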

And for the first few microseconds after the Big Bang, the universe was an incredibly hot soup of all these U’s and D’s flying around, unable to connect to each other because the particles that tie them together, gluons, couldn’t get a grip. The universe was also opaque, because photons couldn’t travel any distance without being scattered.

Eventually, as things started to cool down, the quarks and gluons came together, creating protons and neutrons, and within a few minutes some of those protons and neutrons fused into helium nuclei. (Free neutrons on their own, not so much, since when unbound they tend not to last long.) It took a few hundred thousand years more of cooling before those nuclei could hold on to electrons, creating neutral hydrogen and helium atoms. This is also when the universe became transparent, because now the photons could move through it freely.

But we still haven’t quite gotten to the force that created all of us just yet. It’s not the attractive force that pulled quarks and gluons together, nor is it the force that bound electrons to protons. Given just those forces, the subatomic particles and atoms really wouldn’t have done much else. But once they reached the stage of matter — once there were elements with some appreciable (though tiny) mass to toss around, things changed.

Vast clouds of gas slowly fell into an inexorable dance as atoms of hydrogen found themselves pulled together, closer and closer, tighter and tighter. The bigger the cloud became, the stronger the attraction, until eventually a big enough cloud of hydrogen would suddenly collapse into itself so rapidly that the hydrogen atoms in the middle would slam together with enough force to overcome the electrostatic repulsion between their positively charged nuclei and fuse them together. And then you’d get… more helium, along with a gigantic release of energy.
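That “gigantic release of energy” is just E = mc² at work: four hydrogen atoms outweigh the one helium atom they eventually fuse into, and the missing mass leaves as energy. A back-of-envelope check, using standard atomic masses:

```python
# Mass bookkeeping for hydrogen fusion (the net result of the
# proton-proton chain, using atomic masses in unified mass units, u)
H1_MASS = 1.007825   # hydrogen-1
HE4_MASS = 4.002602  # helium-4
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV (E = mc^2)

mass_in = 4 * H1_MASS             # four hydrogens go in...
mass_defect = mass_in - HE4_MASS  # ...but a little mass goes missing
energy_mev = mass_defect * U_TO_MEV
fraction = mass_defect / mass_in

print(f"{energy_mev:.1f} MeV released per helium nucleus")  # ~26.7 MeV
print(f"{fraction:.2%} of the input mass becomes energy")   # ~0.71%
```

Less than one percent of the mass converts, and that is still enough to power a star for billions of years.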

And so, a star is born. A bunch of stars. A ton of stars, everywhere, in great abundance, and with great energy. This was the first generation of stars in the universe and, to quote Blade Runner, “The light that burns twice as bright burns half as long.” These early stars were so energetic that they didn’t last long, but they managed to really squish things together. You see, after you turn hydrogen into helium, the same process turns helium into heavier elements, like lithium, carbon, neon, oxygen, and silicon. And then, once a star starts to fuse atoms into iron, a funny thing happens. Suddenly, the process stops producing energy, the star collapses into itself, and then it goes boom, scattering those elements back out into the universe.

This process will happen to stars that don’t burn as brightly, too. It will just take longer. The first stars lasted a few hundred million years. A star like our sun is probably good for about ten billion, and we’re only halfway along.

But… have you figured out yet which force made these stars create elements, explode, and then create us? Because that was the question: “What would you say exactly is the ultimate force that wound up directly creating each one of us?”

It’s the same force that pulled those hydrogen atoms together to create heavier elements and then made stars explode, blasting those elements back out into the universe to create new stars and planets and us. It’s one reason we have not yet mastered nuclear fusion: we can’t harness this force to confine a reaction the way a star does, and we don’t really know yet what creates it. It’s the same force that is keeping your butt in your chair this very moment.

It’s called gravity. Once the universe cooled down enough for matter to form — and hence mass — this most basic of laws took over, and anything that had mass started to attract everything else with mass. That’s just how it works. And once enough mass got pulled together, it came together tightly enough to overcome any other force in the universe. Remember: atoms fused because the electrical repulsion between their nuclei was nowhere near strong enough to resist gravity, and in the most extreme cases, neither are the forces that hold atoms and even nuclei apart.

Let gravity grow strong enough, in fact, and it can mash matter so hard that the electrons in a star are forced into the protons, turning nearly the whole thing into one giant ball of neutrons, called a neutron star. Squash it even harder, and you get a black hole, an object very misunderstood by lay people that nonetheless seems to be the anchor (or one of many) that holds most galaxies together.

Fun fact, though. If our sun suddenly turned into a black hole (impossible, really, because it’s not massive enough), the only effect on the Earth would be… nothing for about eight minutes, and then it would get very dark and cold, although we might also be fried to death by a burst of gamma radiation. But the one thing that would not happen is any of the planets suddenly getting sucked into it.

Funny thing about black holes. When a star collapses like that and becomes one, its radius may change drastically, like from sun-sized to New York-sized, but its gravitational pull at any given distance doesn’t change at all, because the mass is still the same.
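The reason is Newton’s law: the pull depends only on the mass and your distance from its center, and a collapse changes neither. A quick sketch with standard constants (a Newtonian approximation, which is fine at planetary distances):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # Earth-Sun distance, m

def pull(mass_kg, distance_m):
    """Newton's law of gravitation: acceleration toward a spherical mass.
    Note what is NOT in the formula: the body's radius."""
    return G * mass_kg / distance_m**2

# Earth's acceleration toward the Sun today, and toward a Sun-mass
# black hole sitting in the same spot: identical, because only the
# mass and the distance to its center matter.
today = pull(M_SUN, AU)
after_collapse = pull(M_SUN, AU)

print(f"{today:.2e} m/s^2")  # ~5.9e-3 either way
```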

But I do digress. Or maybe not. Circle back to the point of this story: the universal force that we still understand the least also happens to be the same damn force that created every single atom in every one of our bodies. Whether it has its own force-carrying particle, or whether it’s just an emergent property of space and time, is still anybody’s guess. But whichever turns out to be true, if you know some science, then the power of gravity is actually quite impressive.

Why astrology is bunk

This piece, which I first posted a year ago, continues to get constant traffic and I haven’t had a week go by that someone hasn’t given it a read. So in an effort to have a little bit of a summer vacation — and because something big is in the works — I felt that it was worth bringing to the top again.

I know way too many otherwise intelligent adults who believe in astrology, and it really grinds my gears, especially right now, because I’m seeing a lot of “Mercury is going retrograde — SQUEEEE” posts, and they are annoying and wrong.

The effect that Mercury in retrograde will have on us: Zero.

Fact

Mercury doesn’t “go retrograde.” We catch up with and then pass it, so it only looks like it’s moving backwards. It’s an illusion, and entirely a function of how planets orbit the sun, and how things look from here. If Mars had (semi)intelligent life, they would note periods when the Earth was in retrograde, but it’d be for the exact same reason.

Science

What force, exactly, would affect us? Gravity is out, because here on the surface the gravitational effect of anything else in our solar system or universe is dwarfed by the Earth’s. When it comes to astrology at birth, your OB/GYN has a stronger gravitational effect on you than Mars does.
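A rough Newtonian comparison makes the point. The numbers here are assumed round figures for illustration (a 70 kg obstetrician half a meter away, Mars at its closest approach of roughly 56 million km):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def pull(mass_kg, distance_m):
    """Newtonian gravitational acceleration from a mass at a given distance."""
    return G * mass_kg / distance_m**2

# Assumed round numbers, for illustration only:
doctor = pull(70, 0.5)        # a 70 kg obstetrician half a meter away
mars = pull(6.42e23, 5.6e10)  # Mars at closest approach, ~56 million km

print(f"doctor: {doctor:.1e} m/s^2")  # ~1.9e-8
print(f"mars:   {mars:.1e} m/s^2")    # ~1.4e-8
```

Both pulls are about twenty billion times weaker than Earth’s, and the doctor still edges out the planet.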

On top of that, the Sun has about 99.9% of the mass of our solar system, which, given how gravity works, means it has the greatest gravitational influence on all of the planets. We only get a slight exception because of the size of our Moon and how close it is, but that’s not a part of astrology, is it? (Not really. They do Moon signs, but it’s not in the day-to-day.)

Some other force? We haven’t found one yet.

History

If astrology were correct, then one of two things should be true. A) It would have predicted the existence of Uranus and Neptune, and possibly Pluto, long before they were discovered, since astrology goes back to ancient times but those discoveries happened in the modern era. Or B) it would not have allowed for the addition of those three planets (and then the removal of Pluto) once discovered, since all of the rules would have already been set down. And it certainly would have accounted for the 13th zodiac constellation, Ophiuchus, which, again, didn’t enter the conversation until very recently, thanks to astronomy.

So…stop believing in astrology, because it’s bunk. Mercury has no effect on us whatsoever, other than when astronomers look out with telescopes and watch it transit the Sun, and use its movements to learn more about real things, like gravity.

Experiment

James Randi, fraud debunker extraordinaire, does a classroom exercise that demolishes the accuracy of those newspaper horoscopes, and here it is — apologies for the low-quality video.

Yep. Those daily horoscopes you read are general enough to be true for anyone, and confirmation bias means that you’ll latch onto the parts that fit you and ignore the parts that don’t, although, again, they’re designed to fit anyone. No one is going to remember the generic advice or predictions sprinkled in, and anyone who does will, again, apply confirmation bias only when they think the predictions came true.

“You are an intuitive person who likes to figure things out on your own, but doesn’t mind asking for help when necessary. This is a good week to start something new, but be careful on Wednesday. You also have a coworker who is plotting to sabotage you, but another who will come to your aid. Someone with an S in their name will become suddenly important, and they may be an air sign. When you’re not working on career, focus on home life, although right now your Jupiter is indicating that you need to do more organizing than cleaning. There’s some conflict with Mars, which says that you may have to deal with an issue you’ve been having with a neighbor. Saturn in your third house indicates stability, so a good time to keep on binge-watching your favorite show, but Uranus retrograde indicates that you’ll have to take extra effort to protect yourself from spoilers.”

So… how much of that fit you? Or do you think will? Honestly, it is 100% pure, unadulterated bullshit that I just made up, without referencing any kind of astrological chart at all, and it could apply to any sign because it mentions none.

Conclusion

If you’re an adult, you really shouldn’t buy into this whole astrology thing. The only way any of the planets would have any effect at all on us is if one of them suddenly slammed into the Earth. That probably did happen once, very early on, and it’s what created the Moon. So probably ultimately not a bad thing… except for anything living here at the time.

Theatre Thursday: So much for stage fright

The one thing I miss most of all during these strange days, other than hanging out with friends, is being able to go on stage and perform. I know that it’s something that a lot of people wouldn’t miss because they’d never do it in the first place, but I’m feeling the loss, and so are my many actor and improviser friends.

Studies seem to show that the one thing people fear the most, beyond death and spiders, is public speaking… and I just don’t get it. Then again, I’m a performer. Put me on a stage, give me an audience, and I am on. And it doesn’t matter whether I have pre-planned words to speak, like doing a play or giving a speech, or whether I’m totally winging it by doing improv.

To me, an audience is an invitation to entertain.

On top of that, to me, the more the merrier. I’ll take an audience of hundreds over an audience of dozens or fewer any day. The energy of a large house is infectious, and whenever I’m with a cast that’s in front of a big crowd, we all can feel it in each other’s performances. The intensity level and connections between us all go way up.

And it’s not an ego thing. It’s not about “Oh, look at ussssss!” It’s the people on stage thinking, “Look at them.”

We can see and hear you out there, and speaking for myself, if I’m doing comedy, there’s nothing I appreciate more than hearing a good laugh. If I’m doing drama, then there’s nothing more satisfying than the silent intensity of dozens or hundreds of captive eyes and minds.

Every time I go onstage, I have to wonder why anyone would fear doing it. Because here’s a simple truth that performers just know but which muggles might miss: The people watching you in the audience are a lot more afraid than you are.

Why is this? Two reasons. The first is that the audience gets to sit in the dark and be anonymous, while the performer doesn’t. You’d think that this would put the performer on the spot, but it’s quite the opposite. In fact, being in the spotlight gives the performers all of the power — and if you’ve ever been in the house of a large professional theater with a name actor onstage when someone’s cell phone rings audibly, or people are taking pictures, you’ve seen this power used with a vengeance.

This touches on the other reason for the fear: That an audience member is going to wind up being forced to participate somehow — that’s been a hazard of modern theatre ever since Bertolt Brecht broke the fourth wall, if not even earlier. Audiences can get spooked when the actors notice them and interact with them.

I’ve seen it as an audience member most obviously when I went to a production of Tony n’ Tina’s Wedding, a piece of environmental theatre first created in the late ’80s that casts the audience as the wedding guests. (A modern example of the form: escape rooms.) The audience starts out just sitting in the chairs under the outdoor tent for the ceremony, which is not without its family drama, although this part plays out a little more like a traditional play.

It’s when everyone moves inside to the banquet hall for the reception that things get interesting. Well, at least the cast tries to make them so. The audience is seated at various tables, with one or more actors planted at each. Now, I have to assume that each table had a similar set-up facilitated by a different family member. At ours, Tina’s mother came over to tell us that Tina’s ex had come to the wedding uninvited, but that was okay. He was fine as long as he didn’t drink, so she was putting him at our table and asked us to make sure that he didn’t.

I wound up sitting next to the actor, and I sure played my part, making sure to vanish his champagne and wine glasses before he could get to them. But not only was no one else playing along, they weren’t even interacting with him. Now, I’m sure the inevitable arc for that actor is to figure out how to get “smashed” no matter what, and the character gets really inappropriate later on, but nobody at my table was trying, and I’m sure the same was true at others.

I finally got to the point of abandoning my table and chatting with anyone who seemed to be a player, and damn was that fascinating — not to mention that they seemed grateful as hell that somebody was interacting with the character they’d bothered to create. I learned all kinds of things about what was going on, family dirt, some of the Italian wedding traditions, and so forth.

That’s what you have to do as an audience member when you go to environmental theatre. That’s the contract! So if you’re not into it, don’t go see those kinds of shows.

On the other hand, I’ve seen it from an actor’s POV more than a few times, and in shows that were not necessarily advertised as environmental theatre, or were not even announced as happening beforehand. In those cases, I can understand the audience discomfort. That doesn’t mean that it wasn’t fun to put them through it, at least in those situations.

Those situations have also been some of my favorite show memories, though. I was in a production of an Elaine May play, Adaptation, that posits life as a game show with a large ensemble cast. I think that only the host and star of the show-within-the-show played one character. The rest of us played a ton and our “offstage” was sitting in the audience, meaning that we had plenty of asides delivered directly to whomever we wound up sitting next to between scenes. Or, sometimes, we’d turn around and deliver the line to the people behind us or lean forward and deliver it to the people in front of us, which startled the hell out of them.

I also performed in a series of Flash Theatre performances done all over Los Angeles over the course of an entire year and staged by Playwrights Arena, and a lot of those involved interacting directly with our audience, which were a combination of people who knew about it beforehand and (mostly) whichever random folk were in the area when it happened. That is perhaps the most immediate and real fourth wall breaking because there was never a fourth wall in the first place. Or, rather, the audience is inside of it with the cast, even if everyone is outside, and a lot of the shows were. It’s the ultimate environmental theatre, staged with no warning and no invitation.

Even when the play wasn’t designed to break the fourth wall, a director’s staging can make it happen, and I had that experience in a production of Tennessee Williams’s Camino Real, where I basically played Mexican Jesus.

It’s one hot mess of a show that originally ran only sixty performances in 1953, when Williams was at the height of his powers, and I can say for certain that while it’s really fun for the actors to do, I felt sorry for every single audience we did it for. And I am really curious to see what Ethan Hawke manages with his planned film version of it. Maybe that medium will save it, maybe not.

But… our big fourth wall break came when the actress playing my mother (aka “Thinly Veiled Virgin M”) held the “dead” hero in her lap, Pietà style (while I was secretly getting a workout using my right arm to hold up his unsupported shoulders under the cover of the American flag he was draped in), and during her monologue, which was a good three or four minutes, every actor onstage except Mom and “dead” hero (there were 26 of us, I think) started by locking eyes with somebody in the audience house left and then, over the course of the speech, very, very slowly turning our heads, making eye contact with a different audience member and then a still different one, until, by the end of the speech, we were all looking house right.

Ideally, the turning of our heads should have been imperceptible, but our eye contact should have become obvious as soon as the target noticed. I should also mention that since I was down center sitting on the edge of the stage, the nearest audience member to me was about four feet away — and I was wearing some pretty intense black and silver makeup around my eyes, which made them really stand out.

Good times!

I’m glad to say that what I’m doing now — improv with ComedySportz L.A.’s Rec League — is designed to never make the audience uncomfortable, so that no one is forced to participate in any way. And that’s just as fun for us on stage, really, because the participation we get via suggestions and audience volunteers is sincere and enthusiastic. And if our outside audience happens to be too quiet or reticent during a show, we always have the Rec League members who aren’t playing that night as convenient plants who will take up the slack after a decent pause to allow for legitimate suggestions.

Yeah, I won’t lie. I definitely enjoyed those times when I got to screw with audiences. But I enjoy it just as much when we go out of our way to bring the audience onto our side by making them feel safe. I never have anything to be afraid of when I step on stage. I’d love to make our audiences realize that they don’t either.

Image by Mohamed Hassan via Pixabay.

5 things space exploration brought back down to Earth

Previously, I wrote about how a thing as terrible as World War I still gave us some actual benefits, like improvements in plastic surgery, along with influencing art in the 20th century. Now, I’d like to cover something much more positive: five of the tangible, down-to-earth benefits that NASA’s space programs, including the Apollo program to the Moon, have given us.

I’m doing so because I happened across another one of those ignorant comments on the internet along the lines of, “What did going to the Moon ever really get us except a couple of bags of rocks?” That’s kind of like asking, “What did Columbus sailing to America ever really get us?” The answer to that should be obvious, although NASA did it with a lot fewer deaths and exactly zero genocide.

All of those Apollo-era deaths came with the first manned attempt, Apollo 1, which was destroyed by a cabin fire a month before its actual launch date during a test on the pad on January 27, 1967, killing all three astronauts aboard. As a consequence, missions 2 through 6 were unmanned. Apollo 7 tested docking maneuvers for the Apollo Crew and Service Modules, to see if this crucial step would work, and Apollo 8 was the first to achieve lunar orbit, circling our satellite ten times before returning to Earth. Apollo 9 tested the crucial Lunar Module, responsible for getting the first humans onto and off of the Moon, and Apollo 10 was a “dress rehearsal,” which went through all of the steps except the actual landing.

Apollo 11, of course, was the famous “one small step” mission, and after that we flew to the Moon only six more times, all of those missions meant to do the same as 11. The only other one most people remember, Apollo 13, is famous for failing to make it there.

I think the most remarkable part is that we managed to land on the Moon only two-and-a-half years after that disastrous first effort, and then carried out five more successful landings in the three-and-a-half years after that. What’s probably less well-known is that three more missions, Apollos 18 through 20, were cancelled around the time of Apollos 13 and 14, keeping their higher numbers because their original launch dates were not until about two years later.

Yes, why they just didn’t renumber the remaining flights so that the count worked out to 20 is a mystery.

Anyway, the point is that getting to the Moon involved a lot of really intelligent people solving a lot of tricky problems in a very short time, and a ton of beneficial tech came out of it. Some of this fed into or came from Apollo directly, while other tech was created or refined in successive programs, like Skylab and the Space Shuttle.

Here are my five favorites out of the over 6,300 technologies that NASA made great advances in on our journeys off of our home planet.

CAT scanner: Not actually an invention of NASA’s per se — that credit goes to British engineer Godfrey Hounsfield and physicist Allan Cormack. However, the device did use NASA’s digital imaging technology in order to work, technology developed by JPL to enhance images taken of the Moon. Since neither CAT scanners nor MRIs use visible light to capture images, the data they collect needs to be processed somehow, and this is where digital imaging comes in.

A CAT scanner basically uses a revolving X-ray tube that repeatedly circles the patient, creating profiles of data taken at various depths and angles, which is what the computer puts together. The MRI is far safer (as long as you don’t get metal too close to it).

This is because, instead of X-rays, an MRI machine works by using a magnetic field to make the protons in the hydrogen atoms of your body’s water molecules align, then pulsing a radio frequency through, which knocks the protons out of alignment. When the radio frequency is turned off, the protons realign. The detectors sense how long it takes protons in various places to do this, which tells them what kind of tissue they’re in. Once again, that old NASA technology takes all of this data and turns it into images that can be understood by looking at them. Pretty nifty, huh?

Invisible braces: You may remember this iconic moment from Star Trek IV: The One with the Whales, in which Scotty shares the secret of “transparent aluminum” with humans of 1986.

However, NASA actually developed transparent polycrystalline alumina long before that film came out and, although TPA is not a metal, but a ceramic, it contributed to advances in creating nearly invisible braces. (Note that modern invisible braces, like Invisalign, are not made of ceramic.)

But the important point to note is that NASA managed to take a normally opaque substance and allow it to transmit light while still maintaining its properties. And why did NASA need transparent ceramic? Easy. That stuff is really heat-resistant, and if you have sensors that need to see light while you’re dumping a spacecraft back into the atmosphere, well, there you go. Un-melting windows and antennae, and so on. This was also a spin-off of heat-seeking missile technology.

Joystick: You can be forgiven for thinking that computer joysticks were invented in the early 1980s by Atari or (if you really know your gaming history) by Atari in the early 1970s. But video games actually go back at least to Tennis for Two, built on an oscilloscope in 1958, and the humble joystick itself goes back as far as aviation does, since that’s been the term for the controller on airplanes since before World War I. Why is it called a “joystick”? We really don’t know, despite attempts at creating folk etymology after the fact.

However, those early joysticks were strictly mechanical — they were connected directly to the flaps and rudders that they controlled. The first big innovation came in 1926, when joysticks went electric. Patented that year, the electric joystick was dreamt up by C. B. Mirick at the U.S. Naval Research Laboratory. Its purpose was also controlling airplanes.

So this is yet another instance of something that NASA didn’t invent, but boy howdy did they improve upon it — an absolute necessity when you think about it. For NASA, joysticks were used to land craft on the Moon and dock them with each other in orbit, so precision was absolutely necessary, especially when trying to touch down on a rocky satellite after descending through no atmosphere from orbital speed, which at around a hundred kilometers above the Moon is in the vicinity of 3,600 mph (about 5,900 km/h). They aren’t much to look at by modern design standards, but one of them sold at auction a few years back for over half a million dollars.

It gets even trickier when you need to dock two craft moving at similar speed, and in the modern day, we’re doing it in Earth orbit. The International Space Station is zipping along at a brisk 17,150 mph, or 27,600 km/h. That’s fast.

The early NASA innovations involved adding rotational control to the usual X and Y axes, and later on they went digital and all kinds of crazy in refining the devices to have lots of buttons and be more like the controllers we know and love today. So next time you’re shredding your favorite PC or Xbox game with your $160 Razer Wolverine Ultimate Chroma Controller, thank the rocket scientists at NASA. Sure, it doesn’t have a joystick in the traditional sense, but this is the future that space built, so we don’t need one!

Smoke detector: This is another device that NASA didn’t invent, but which they certainly refined and improved. While their predecessors, automatic fire alarms, date back to the 19th century, the first models relied on heat detection only. The problem with this is that you don’t get heat until the fire is already burning, and the main cause of death in house fires isn’t the flames. It’s smoke inhalation. The version patented by George Andrew Darby in England in the 1890s did account for some smoke, but it wasn’t until the 1930s that the idea of using ionization to detect smoke came along. Still, these devices were incredibly expensive, so they were really only available to corporations and governments. But isn’t that how all technological progress goes?

It wasn’t until NASA teamed with Honeywell (a common partner) in the 1970s that they managed to bring down the size and cost of these devices, as well as make them battery-operated. More recent experiments on the ISS have helped scientists figure out how to refine the sensitivity of smoke detectors, so that yours doesn’t go off when your teenage boy goes crazy with the AXE body spray or when there’s a little fat splashing back onto the metal roaster from the meat you’re cooking in the oven. Both are annoying, but at least the latter has a positive outcome.

Water filter: Although it turns out that water is common in space, with comets being lousy with the stuff in the form of ice, and water-ice confirmed on the Moon and subsurface liquid water on Mars, as well as countless other places, we don’t have easy access to it, so until we establish water mining operations off-Earth, we need to bring it with us. Here’s the trick, though: water is heavy. A liter weighs a kilogram and a gallon weighs a little over eight pounds. There’s really no valid recommendation on how much water a person should drink in a day, but if we allow for two liters per day per person, with a seven person crew on the ISS, that’s fourteen kilos, or 31 pounds of extra weight per day. At current SpaceX launch rates, that can range from $23,000 to $38,000 per daily supply of water, but given a realistic launch schedule of every six weeks, that works out to around $1 to $1.5 million per launch just for the water. That six-week supply is also eating up 588 kilos of payload.
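That paragraph’s arithmetic holds up, and it’s worth seeing laid out. Here it is as a sketch; the per-kilogram launch cost is an assumption, back-solved from the $23,000 to $38,000 daily figures quoted above, so treat it as illustrative rather than official:

```python
# Back-of-envelope resupply math for water on the ISS.
LITERS_PER_PERSON_PER_DAY = 2   # one liter of water ~ one kilogram
CREW = 7
RESUPPLY_DAYS = 42              # a six-week launch cadence
COST_PER_KG_USD = (1_643, 2_714)  # assumed range implied by the text

daily_kg = LITERS_PER_PERSON_PER_DAY * CREW  # 14 kg of water per day
supply_kg = daily_kg * RESUPPLY_DAYS         # 588 kg per resupply launch

daily_cost = [daily_kg * c for c in COST_PER_KG_USD]
launch_cost = [supply_kg * c for c in COST_PER_KG_USD]

print(f"{daily_kg} kg/day, {supply_kg} kg per six-week supply")
print(f"daily cost: ${daily_cost[0]:,} to ${daily_cost[1]:,}")    # ~$23k to ~$38k
print(f"per launch: ${launch_cost[0]:,} to ${launch_cost[1]:,}")  # ~$0.97M to ~$1.6M
```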

And remember: this is just for a station that’s in Earth orbit. For longer missions, the cost of getting water to the crew is going to get ridiculously expensive fast — and remember, too, that SpaceX costs are relatively recent. In 1981, the cost per kilogram was $85,216, although the Space Shuttle’s cargo capacity was slightly more than the Falcon 9’s.

So what’s the solution? Originally, it was just making sure all of the water was purified, leading to the Microbial Check Valve, which eventually filtered out (pun intended) to municipal water systems and dental offices. But to really solve the water problem, NASA is moving to recycling everything. And why not? Our bodies tend to excrete a lot of the water we drink when we’re done with it. Although it’s a myth that urine is sterile, it is possible to purify it to reclaim the water in it, and NASA has done just that. However, they really shouldn’t use the method shown in the satirical WWII film Catch-22.

So it’s absolutely not true that the space program has given us nothing, and this list of five items barely scratches the surface. Once what we learn up there comes back down to Earth, it can improve all of our lives, from people living in the poorest remote villages on the planet to those living in splendor in the richest cities.

If you don’t believe that, here’s a question. How many articles of clothing that are NASA spin-offs are you wearing now, or do you wear on a regular basis? You’d be surprised.

Momentous Monday: Meet the Middletons

Thanks to boredom and Amazon Prime, I watched a rather weird movie from the 1930s tonight. While it was only 55 minutes long, it somehow seemed much longer because it was so packed with… all kinds of levels of stuff.

The title is The Middleton Family at the New York World’s Fair, and while the content is exactly what it says on the tin, there are so goddamn many moving parts in that tin that this is one worth watching in depth, mainly because it’s a case study in how propaganda can be sometimes wrong, sometimes right and, really, only hindsight can excavate the truth from the bullshit.

While it seems like a feature film telling the fictional story of the (snow-white, but they have a black maid!) Middleton Family from Indiana, who go back east ostensibly to visit grandma in New York but really to attend the New York World’s Fair of 1939, this was actually nothing more than a piece of marketing and/or propaganda created by the Westinghouse Corporation, major sponsors of the fair, poised on the cusp of selling all kinds of new and modern shit to the general public.

Think of them as the Apple or Microsoft of their day, with solutions to everything, and the World’s Fair as the biggest ThingCon in the world.

Plus ça change, right?

But there’s also a second, and very political, vein running through the family story. See, Dad decided to bring the family to the fair specifically to convince 16-year-old son Bud that, despite the bad economic news he and his older friends have been hearing about there being no job market (it is the Great Depression, after all), there are, in fact, glorious new careers waiting out there.

Meanwhile, Mom is hoping that older daughter Babs will re-connect with high school sweetheart Jim, who had previously moved to New York to work for (wait for it) Westinghouse. Babs is having none of it, though, insisting that she doesn’t love him but, instead, is in love with her art teacher, Nick.

1939: No reaction.

2020: RECORD SCRATCH. WTF? Yeah, this is one of the first of many disconnect moments that are nice reminders of how much things have changed in the 81 years since this film happened.

Girl, you think you want to date your teacher, and anyone should be cool with that? Sorry, but listen to your mama. Note: in the world of the film, this relationship will become problematic for other reasons but, surprise, the reason it becomes problematic then is actually problematic in turn now. More on which later.

Anyway, the obviously richer-than-fuck white family travels from Indiana to New York (they’re rich because Dad owns hardware stores, and they brought their black maid with them) but is too cheap to spring for a hotel, instead jamming themselves into Grandma’s house. Her place is pretty ritzy as well, which says grandma has money too, since it’s clearly close enough to Flushing Meadows in Queens to make the World’s Fair a day trip over the course of a weekend.

But it’s okay — everyone owned houses then! (Cough.)

And then it’s off to the fair, and this is where the real value of the film comes in because when we aren’t being propagandized by Westinghouse, we’re actually seeing the fair, and what’s really surprising is how modern and familiar everything looks. Sure, there’s nothing high tech about it in modern terms, but if you dropped any random person from 2020 onto those fairgrounds, they would not feel out of place.

Well, okay, you’d need to put them in period costume first and probably make sure that if they weren’t completely white they could pass for Italian or Greek.

Okay, shit. Ignore that part, let’s move along — as Jimmy, Babs’ high school sweetheart and Westinghouse Shill character, brings us into the pavilion. And there are two really weird dynamics here.

First is that Jimmy is an absolute cheerleader for capitalism, which is jarring without context — get back to that in a moment.

The other weird bit is that Bud seems to be more into Jimmy than Babs ever was, and if you read too much gay subtext into their relationship… well, you can’t read too much, really. Watch it through that filter, and this film takes on a very different and subversive subplot. Sure, it’s clear that the family really wishes Jimmy was the guy Babs stuck with, but it sure feels like Bud wouldn’t mind calling him “Daddy.”

But back to Jimmy shilling for Westinghouse. Here’s the thing: Yeah, sure, he’s all “Rah-Rah capitalism!” and this comes into direct conflict with Nicholas, who is a self-avowed communist. But… the problem is that in America, in 1939, capitalism was the only tool that socialism could use to lift us out of depression and, ultimately, create the middle class.

There’s even a nod to socialism in the opening scene, when Bud tells his dad that the class motto for the guys who graduated the year before was, “WPA, here we come!” The WPA was the government works program designed to create jobs with no particular aim beyond putting people to work.

But once the WPA partnered with corporations like Westinghouse, boom. Jobs. And this was the beginning of the creation of the American Middle Class, which led to the ridiculous prosperity for (white) people from the end of WWII until the 1980s.

More on that later, back to the movie now. As a story with relationships, the film actually works, because we do find ourselves invested in the question, “Who will Babs pick?” It doesn’t help, though, that the pros and cons are dealt with in such a heavy-handed manner.

Jimmy is amazing in every possible way — young, tall, intelligent, handsome, and very knowledgeable about what he does. Meanwhile, Nicholas is short, not as good-looking (clearly cast to be more Southern European), obviously a bit older than Babs, and has a very unpleasant personality.

They even give him a “kick the puppy” moment when Babs introduces brother Bud, and Nicholas pointedly ignores the kid. But there’s that other huge issue I already mentioned that just jumps out to a modern audience and yet never gets any mention by the other characters. The guy Babs is dating is her art teacher. And not as in past art teacher, either. As in currently the guy teaching her art.

And she’s dating him and considering marriage.

That wouldn’t fly more than a foot nowadays, and yet in the world of 1939 it seems absolutely normal, at least to the family. Nowadays, it would be the main reason to object to the relationship. Back then, it wasn’t even considered.

Wow.

The flip side of the heavy-handed comes in some of Jimmy’s rebukes of Nicholas’ claims that all of this technology and automation will destroy jobs. While the information Jimmy provides is factual, the way his dialogue here is written and delivered comes across as condescending and patronizing to both Nicholas and the audience, and these are the moments when Jimmy’s character seems petty and bitchy.

But he’s also not wrong, and history bore that out.

Now this was ultimately a film made to make Westinghouse look good, and a major set piece involved an exhibit at the fair that I actually had to look up because at first it was very easy to assume that it was just a bit of remote-controlled special effects set up to pitch an idea that didn’t really exist yet — the 1930s version of vaporware.

Behold Elektro! Here’s the sequence from the movie and as he was presented at the fair. Watch this first and tell me how you think they did it.

Well, if you thought remote operator controlling movement and speaking lines into a microphone like I did at first, that’s understandable. But the true answer is even more amazing: Elektro was completely real.

The thing was using sensors to actually interpret the spoken commands and turn them into actions, which it did by sending light signals to its “brain,” located at the back of the room. You can see the lights flashing in the circular window in the robot’s chest at around 2:30.

Of course, this wouldn’t be the 1930s if the robot didn’t engage in a little bit of sexist banter — or smoke a cigarette. Oh, such different times.

And yet, in a lot of ways, the same. Our toys have just gotten a lot more powerful and much smaller.

You can probably guess which side of the argument wins, and while I can’t disagree with what Westinghouse was boosting at the time, I do have to take issue with one explicit statement. Nicholas believes in the value of art, but Jimmy dismisses it completely, which is a shame.

Sure, it’s coming right out of the Westinghouse corporate playbook, but that part makes no sense, considering how much of the world’s fair and their exhibit hall itself relied on art, design, and architecture. Even if it’s just sizzle, it still sells the steak.

So no points to Westinghouse there but, again, knowing what was about to come by September of 1939 and what a big part industry would have in ensuring that the anti-fascists won, I can sort of ignore the tone-deafness of the statement.

But, like the time-capsule shown in the film, there was a limited shelf-life for the ideas Westinghouse was pushing, and they definitely expired by the dawn of the information age, if not a bit before that.

Here’s the thing: capitalism as a system worked in America when… well, when it worked… and didn’t when it didn’t. Prior to about the early 1930s, when it ran unfettered, it didn’t work at all — except for the super-wealthy robber barons.

Workers had no rights or protections, there were no unions, or child-labor laws, or minimum wages, standard working hours, safety rules, or… anything to protect you if you didn’t happen to own a big chunk of shit.

In other words, you were management, or you were fucked.

Then the whole system collapsed in the Great Depression and, ironically, it took a member of the 1% Patrician Class (FDR) being elected president to then turn his back on his entire class and dig in hard for protecting the workers, enacting all kinds of jobs programs, safety nets, union protections, and so on.

Or, in other words, capitalism in America didn’t work until it was linked to and reined-in by socialism. So we never really had pure capitalism, just a hybrid.

And, more irony: this socio-capitalist model was reinforced after Pearl Harbor Day, when everyone was forced to share and work together and, suddenly, the biggest workforce around was the U.S. military. It sucked in able-bodied men between 17 and 38, and the weird side-effect of the draft stateside was that suddenly women and POC were able to get jobs because there was no one else to do them.

Manufacturing, factory jobs, support work and the like boomed, and so did the beginnings of the middle class. When those soldiers came home, many of them returned to benefits that gave them cheap or free educations, and the ability to buy homes.

They married, they had kids, and they created the Boomers, who grew up in the single most affluent time period in America ever.

Side note: There were also people who returned from the military who realized that they weren’t like the other kids. They liked their own sex, and couldn’t ever face returning home. And so major port towns — San Francisco, Los Angeles, Long Beach, San Diego, Boston, New York, Miami, New Orleans — were flooded with the seeds of future LGB communities. (T and Q+ hadn’t been brought into the fold yet.)

In the 60s, because the Boomers had grown up with affluence, privilege, and easy access to education, they were also perfectly positioned to rebel their asses off because they could afford to, hence all of the protests and whatnot of that era.

And this sowed the seeds of the end of this era, ironically.

The socio-capitalist model was murdered, quite intentionally, beginning in 1980, when Ronald fucking Reagan became President, and he and his cronies slowly began dismantling everything created by every president from FDR through, believe it or not, Richard Nixon. (Hint: EPA.)

The mantra of these assholes was “Deregulate Everything,” which was exactly what the world was like in the era before FDR.

Just one problem, though. Deregulating any business is no different from getting an alligator not to bite you by removing its muzzle and then saying, “You’re not going to bite me, right?”

And then believing them when they swear they won’t before wondering why you and everyone you know has only one arm.

Still, while it supports an economic system that just isn’t possible today without a lot of major changes, The Middletons provides a nice look at an America that did work, because it focused on invention, industry, and manufacturing not as a way to enrich a few shareholders, but as a way to enrich everyone by creating jobs, enabling people to actually buy things, and creating a rising tide to lift all boats.

As for Bud, he probably would have wound up in the military, learned a couple of skills, finished college quickly upon getting out, and then would have gone to work for a major company, possibly Westinghouse, in around 1946, starting in an entry-level engineering job, since that’s the skill and interest he picked up during the War.

Along the way, he finds a wife, gets married and starts a family, and thanks to his job, he has full benefits — for the entire family, medical, dental, and vision; for himself, life insurance to benefit his family; a pension that will be fully vested after ten years; generous vacation and sick days (with unused sick days paid back every year); annual bonuses; profit sharing; and union membership after ninety days on the job.

He and the wife find a nice house on Long Island — big, with a lot of land, in a neighborhood with great schools, and easy access to groceries and other stores. They’re able to save long-term for retirement, as well as for shorter-term things, like trips to visit his folks in Indiana or hers in Miami or, once the kids are old enough, all the way to that new Disneyland place in California, which reminds Bud a lot of the World’s Fair, especially Tomorrowland.

If he’s typical for the era, he will either work for Westinghouse for his entire career, or make the move to one other company. Either way, he’ll retire from an executive level position in about 1988, having been in upper management since about 1964.

With savings, pensions, and Social Security, he and his wife decide to travel the world. Meanwhile, their kids, now around 40 and with kids about to graduate high school, aren’t doing so well, and aren’t sure how they’re going to pay for their kids’ college.

They approach Dad and ask for help, but he can’t understand. “Why don’t you just do what I did?” he asks them.

“Because we can’t,” they reply.

That hopeful world of 1939 is long dead — although, surprisingly, the actor who played Bud is still quite alive.

Image: Don O’Brien, Flickr, 2.0 Generic (CC BY 2.0), the Middleton Family in the May 1939 Country Gentleman ad for the Westinghouse World’s Fair exhibits.

Friday Free-for-All #16

In which I answer a random question generated by a website. Here’s this week’s question. Feel free to give your own answers in the comments.

What piece of technology brings you the most joy?

This one is actually very simple. It is the lowly but very important integrated circuit, or IC. ICs combine a host of functions previously performed by much larger and more complicated devices — mostly transistors, resistors, and capacitors — and can be used to create all sorts of tiny components, like logic gates, microcontrollers, microprocessors, sensors, and on and on.

In the old pre-ICs days, transistors, resistors, and capacitors all existed on a pretty large scale, as in big enough to pick up with your fingers and physically solder into place.

Before that, old school “integrated circuits” were big enough to hold in your hand and resembled very complicated lightbulbs. These were vacuum tubes, and essentially performed the same functions as a transistor — as either an amplifier or a switch. And yes, they were considered analog technology.

The way vacuum tubes worked was actually via heat. A piece of metal would be warmed up to release electrons, which was also the reason for the vacuum. This meant that there were no air molecules to get in the way as the electrons flowed from one end (the cathode) to the other (the anode), causing the current to flow in the other direction. (Not a typo. It’s a relic from an early misconception about how electricity works that was never corrected.)

The transition away from vacuum tubes to transistorized TV sets began in 1960, although the one big vacuum tube in the set — the TV screen itself — stuck around until the early 2000s.

But back to the vacuum tube function. Did it seem off that I described transistors as either amplifiers or switches? That’s probably because you might think of the former in terms of sound and the latter in terms of lights, but what we’re really talking about here is voltage.

Here’s the big secret of computers and other modern electronic devices. The way they really determine whether a bit value is 0 or 1 is not via “on” or “off” of a switch. That’s a simplification. Instead, what they really use is high or low voltage.

Now, granted, those voltages are never that “high,” typically just a few volts at most, but the point is that it’s the transistor that decides either to boost a voltage before passing it along, or which of an A/B input to pass along which circuit.
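If the high-versus-low-voltage idea seems abstract, here’s a toy sketch of it. The threshold and voltage values are purely illustrative (real logic families like TTL and CMOS define exact voltage bands), but the principle is the same:

```python
# Toy model of "bits are really voltage levels": anything at or above a
# threshold reads as 1, anything below reads as 0.
V_THRESHOLD = 1.5  # volts; an assumed, illustrative cutoff

def read_bit(voltage: float) -> int:
    return 1 if voltage >= V_THRESHOLD else 0

def nand(v_a: float, v_b: float) -> int:
    # A NAND gate: output goes low only when both inputs read high.
    return 0 if (read_bit(v_a) and read_bit(v_b)) else 1

print(read_bit(3.3), read_bit(0.2))    # 1 0
print(nand(3.3, 3.3), nand(3.3, 0.2))  # 0 1
```

NAND happens to be a universal gate: string enough of them together and you can build every other logic gate, and from there the adders and registers that make up a processor.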

Meanwhile, resistors are sort of responsible for the math because they either slow down currents, so to speak, or let them pass as-is. Finally, capacitors are analogous to memory, because they store a received current for later use.

Put these all together, and that’s how you get all of those logic gates, microcontrollers, microprocessors, sensors, and on and on. And when you put all of these together, ta-da: electronics.

These can be as simple as those dollar store calculators that run on solar power and can only do four functions, or as complicated as the fastest supercomputers in the world. (Note: Quantum computers don’t count here because they are Next Gen, work in an entirely different way, and probably won’t hit consumer tech for at least another thirty years.)

So why do ICs give me joy? Come on. Look around you. Modern TVs; LCD, LED, and OLED screens; eReaders; computers; cell phones; GPS; synthesizers; MIDI; CDs, DVDs, BluRay; WiFi and BlueTooth; USB drives and peripherals; laser and inkjet printers; microwave ovens; anything with a digital display in it; home appliances that do not require giant, clunky plugs to go into the wall; devices that change to or from DST on their own; most of the sensors in your car if it was built in this century; the internet.

Now, out of that list, a trio stands out: computers, synthesizers, and MIDI, which all sort of crept into the consumer market at the same time, starting in the late 70s and on into the 80s. The funny thing, though, is that MIDI (which stands for Musical Instrument Digital Interface) is still around and mostly unchanged. Why? Because it was so incredibly simple and robust.

In a way, MIDI was the original HTML — a common language that many different devices could speak in order to reproduce information in mostly similar ways across platforms and instruments. Starting with sixteen channels, it has proven to be a ridiculously robust and backwards-compatible system.

Over time, the number of channels and the bit depth have increased, but a MIDI keyboard from way back in the early 80s will still communicate with a device using MIDI 2.0. You can’t say the same for, say, storage media and readers from different time periods. Good luck getting that early-80s 5.25-inch floppy disk to work with any modern device.
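Part of that longevity comes from how simple the wire format is. A MIDI 1.0 channel message is just a few bytes. Here’s a sketch of building a “note on” message; the byte layout comes from the MIDI 1.0 spec, while the little helper function is just my own illustration:

```python
# A MIDI 1.0 "note on" message is three bytes: a status byte (0x90 plus
# the channel number, 0-15), the note number (0-127, where 60 is middle C),
# and the velocity (0-127, i.e. how hard the key was struck).
def note_on(channel: int, note: int, velocity: int) -> bytes:
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

msg = note_on(channel=0, note=60, velocity=100)  # middle C, fairly loud
print(msg.hex())  # 903c64
```

Three bytes per note event, in a format any device from 1983 onward can parse: that simplicity is a big part of why the standard has aged so well.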

What’s really remarkable about MIDI is how much data it can transfer in real time. Even more amazing is that MIDI has been adapted to more than just musical instruments. It can also be used for things like show control, meaning that a single MIDI system runs the lights, the sound systems and, in some cases, even the practical effects in a concert or stage production.

And, again, while MIDI 1.0 was slowly tweaked over time between 1982 and 1996, it still went almost 25 years before it officially went from version 1.0 to 2.0, in January 2020. Windows 1.0 was released on November 20, 1985, although it was really just an overlay of MS-DOS. It lasted until December 9, 1987, when Windows 2.0 came out. This was also when Word and Excel first happened.

Apple has had a similar history with its OS, and in about the same period of time that MIDI has been around, both of them have gone through ten versions with lots of incremental changes along the way.

Now, granted, you’re not going to be doing complex calculations or spreadsheets or anything like that with MIDI, and it still doesn’t really have a GUI beyond the independent capabilities of the instruments you’re using.

However, with it, you can create art — anywhere from a simple song to a complex symphony and, if you’re so inclined, the entire stage lighting and sound plot to go along with it.

And the best part of that is that you can take your musical MIDI data, put it on whatever kind of storage device is currently the norm, then load that data back onto any other MIDI device.

Then, other than the specific capabilities of its onboard sound-generators, you’re going to hear what you wrote, as you wrote it, with the same dynamics.

For example, the following was originally composed on a fairly high-end synthesizer with really good, realistic tone generators. I had to run the MIDI file through an online MIDI to audio site that pretty much uses the default PC cheese-o-phone sounds, but the intent of what I wrote is there.

Not bad for a standard that has survived, even easily dumping its proprietary 5-pin plug and going full USB without missing a beat. Literally. Even while others haven’t been able to keep up so well.

So kudos to the creation of ICs, and eternal thanks for the computers and devices that allow me to use them to be able to research, create, and propagate much more easily than I ever could via ancient analog techniques.

I mean, come on. Imagine if I had to do this blog by typing everything out on paper, using Wite-Out or other correction fluid constantly to fix typos, then deciding whether it was worth having it typeset and laid out (probably not), and debating whether to have it photocopied or mimeographed.

Then I’d have to charge all y’all to get it via the mail, maybe once a month — and sorry, my overseas fans, but you’d have to pay a lot more and would probably get it after the fact, or not at all if your postal censors said, “Oh, hell noes.”

Or, thanks to ICs, I can sit in the comfort of my own isolation on the southwest coast of the middle country in North America, access resources for research all over the planet, cobble together these ramblings, and then stick them up to be blasted into the ether to be shared with my fellow humans across the globe, and all it costs me is the internet subscription fee that I would pay anyway, whether I did this or not.

I think we call that one a win-win. And if I went back and told my first-grade self, who was just having his first music lessons on a decidedly analog instrument, that in a couple of years science was going to make this a lot easier and more interesting, he probably would have shit his pants.

Okay. He probably would have shit his pants anyway. Mainly by realizing, “Wait, what. You’re me? Dude… you’re fucking old!”

Oh well.

Image (CC BY 3.0) by user Mataresephotos.

 

Wondrous Wednesday: 5 Things that are older than you think

Quarantine is hard, so in lieu of not posting anything, here’s a blast from the past, an article posted in February 2019, but which is still relevant today.

A lot of our current technology seems surprisingly new. The iPhone is only twelve years old, for example, although the first Blackberry, a more primitive form of smart phone, came out in 1999. The first actual smart phone, IBM’s Simon Personal Communicator, was introduced in 1992 but not available to consumers until 1994. That was also the year that the internet started to really take off with people outside of universities or the government, although public connections to it had been available as early as 1989 (remember Compuserve, anyone?), and the first experimental internet nodes were connected in 1969.

Of course, to go from room-sized computers communicating via acoustic modems along wires to handheld supercomputers sending their signals wirelessly via satellite took some evolution and development of existing technology. Your microwave oven has a lot more computing power than the system that helped us land on the moon, for example. But the roots of many of our modern inventions go back a lot further than you might think. Here are five examples.

Alarm clock

As a concept, alarm clocks go back to the ancient Greeks, frequently involving water clocks. These were designed to wake people up before dawn, in Plato’s case to make it to class on time, which started at daybreak; later, they woke monks in order to pray before sunrise.

From the late middle ages, church towers became town alarm clocks, with the bells set to strike at one particular hour per day, and personal alarm clocks first appeared in 15th-century Europe. The first American alarm clock was made by Levi Hutchins in 1787, but he only made it for himself since, like Plato, he got up before dawn. Antoine Redier of France was the first to patent a mechanical alarm clock, in 1847. Because of a lack of production during WWII due to the appropriation of metal and machine shops to the war effort (and the breakdown of older clocks during the war) they became one of the first consumer items to be mass-produced just before the war ended. Atlas Obscura has a fascinating history of alarm clocks that’s worth a look.

Fax machine

Although it’s pretty much a dead technology now, it was the height of high tech in offices in the 80s and 90s, but you’d be hard-pressed to find a fax machine that isn’t part of the built-in hardware of a multi-purpose networked printer nowadays, and that’s only because it’s such a cheap legacy to include. But it might surprise you to know that the prototypical fax machine, originally an “Electric Printing Telegraph,” dates back to 1843. Basically, as soon as humans figured out how to send signals down telegraph wires, they started to figure out how to encode images — and you can bet that the second image ever sent in that way was a dirty picture. Or a cat photo. Still, it took until 1964 for Xerox to finally figure out how to use this technology over phone lines and create the Xerox LDX. The scanner/printer combo was available to rent for $800 a month — the equivalent of around $6,500 today — and it could transmit pages at a blazing 8 per minute. The second generation fax machine only weighed 46 lbs and could send a letter-sized document in only six minutes, or ten pages per hour. Whoot — progress! You can actually see one of the Electric Printing Telegraphs in action in the 1948 movie Call Northside 777, in which it plays a pivotal role in sending a photograph cross-country in order to exonerate an accused man.

In case you’re wondering, the title of the film refers to a telephone number from back in the days before what was originally called “all digit dialing.” Up until then, telephone exchanges (what we now call prefixes) were identified by the first two letters of a word, and then another digit or two or three. (Once upon a time, in some areas of the US, phone numbers only had five digits.) So NOrthside 777 would resolve itself to 667-77, with 667 being the prefix. This system started to end in 1958, and a lot of people didn’t like that.
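For the curious, the letter-to-digit mapping worked essentially the same way a modern phone keypad does, except that Q and Z were absent from most old dials. A quick sketch:

```python
# Letter groups as they appeared on classic US rotary dials
# (note: no Q on 7 and no Z on 9, unlike modern keypads).
KEYPAD = {
    "ABC": "2", "DEF": "3", "GHI": "4", "JKL": "5",
    "MNO": "6", "PRS": "7", "TUV": "8", "WXY": "9",
}
LETTER_TO_DIGIT = {ch: d for letters, d in KEYPAD.items() for ch in letters}

def dial(number: str) -> str:
    # Convert letters to digits; pass actual digits through unchanged.
    return "".join(LETTER_TO_DIGIT.get(c.upper(), c) for c in number if c != " ")

print(dial("NO 777"))  # 66777
```

So “NOrthside 777” comes out as the five digits 66777, which is why those old exchange names could be dialed directly.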

Of course, with the advent of cell phones prefixes and even area codes have become pretty meaningless, since people tend to keep the number they had in their home town regardless of where they move to, and a “long distance call” is mostly a dead concept now as well, which is probably a good thing.

CGI

When do you suppose the first computer animation appeared on film? You may have heard that the original 2D computer generated imagery (CGI) used in a movie was in 1973 in the original film Westworld, inspiration for the recent TV series. Using very primitive equipment, the visual effects designers simulated pixelation of actual footage in order to show us the POV of the robotic gunslinger played by Yul Brynner. It turned out to be a revolutionary effort.

The first 3D CGI happened to be in this film’s sequel, Futureworld in 1976, where the effect was used to create the image of a rotating 3D robot head. However, the first ever CGI sequence was actually made in… 1961. Called Rendering of a planned highway, it was created by the Swedish Royal Institute of Technology on what was then the fastest computer in the world, the BESK, driven by vacuum tubes. It’s an interesting effort for the time, but the results are rather disappointing.

Microwave oven

If you’re a Millennial, then microwave ovens have pretty much always been a standard accessory in your kitchen, but home versions don’t predate your birth by much. Sales began in the late 1960s. By 1972 Litton had introduced microwave ovens as kitchen appliances. They cost the equivalent of about $2,400 today. As demand went up, prices fell. Nowadays, you can get a small, basic microwave for under $50.

But would it surprise you to learn that the first microwave ovens were created just after World War II? In fact, they were the direct result of it, due to a sudden lack of demand for magnetrons, the devices used by the military to generate radar in the microwave range. Not wanting to lose the market, their manufacturers began to look for new uses for the tubes. The idea of using radio waves to cook food went back to 1933, but those devices were never developed.

Around 1946, engineers accidentally realized that the microwaves coming from these devices could cook food, and voilà! In 1947, the technology was developed, although only for commercial use, since the devices were taller than an average man, weighed 750 lbs, and cost the equivalent of $56,000 today. It took 20 years for the first home model, the Radarange, to be introduced for the mere sum of $12,000 in today’s dollars.

Music video

Conventional wisdom says that the first music video to ever air went out on August 1, 1981 on MTV, and it was “Video Killed the Radio Star” by The Buggles. As is often the case, conventional wisdom is wrong. It was the first to air on MTV, but the concept of putting visuals to rock music as a marketing tool goes back a lot farther than that. Artists and labels were making promotional films for their songs back at almost the beginning of the 1960s, with the Beatles a prominent example. Before these, though, was the Scopitone, a jukebox that could play films in sync with music, popular from the late 1950s to mid-1960s, and their predecessor was the Panoram, a similar concept popular in the 1940s which played short programs called Soundies. However, these programs played on a continuous loop, so you couldn’t choose your song. Soundies were produced until 1946, which brings us to the real predecessor of music videos: Vitaphone Shorts, produced by Warner Bros. as sound began to come to film. Some of these featured musical acts and were essentially miniature musicals themselves. They weren’t shot on video, but they introduced the concept all the same. Here, you can watch a particularly fun example from 1935 in 3-strip Technicolor that also features cameos by various stars of the era in a very loose story.

Do you know of any things that are actually a lot older than people think? Let us know in the comments!

Photo credit: Jake von Slatt

Wednesday Wonders: Adding depth

Sixty-seven years ago today, on April 29, 1953, the first-ever experimental broadcast of a TV show in 3D happened, via KECA-TV in Los Angeles. If those call letters don’t sound familiar to any of my Southern California audience, that’s because they only lasted for about the first four-and-a-half years of the station’s existence, at which point they became the now very familiar KABC-TV, the local ABC affiliate also known as digital and broadcast channel 7.

The program itself was a show called Space Patrol, originally a 15-minute daily program aimed at a juvenile audience. But once it became a hit with adults, ABC added half-hour episodes on Saturdays.

Remember, at this point in television, they were at about the same place as internet programming was in 2000.

By the way, don’t confuse this show with the far more bizarre British production of 1962 with the same name. It was done with marionettes, and judging from this promotional trailer for a DVD release of restored episodes, it was incredibly weird.

Anyway, because of its subject matter and popularity, it was a natural for this broadcast experiment. This was also during the so-called “golden age” of 3D motion pictures, and since the two media were in fierce competition back in the day, it was an obvious move.

Remember — at that time, Disney didn’t own ABC, or anything else. In fact, the studios were not allowed to own theaters, or TV stations.

The original 3D broadcast was designed to use glasses, of course, although not a lot of people had them, so to most viewers it would have been a blurry mess. Note that color TV was also a rarity, so the glasses would have used polarized lenses rather than the red/blue anaglyph lenses possible in movies.

Since it took place during the 31st gathering of what was then called the National Association of Radio and Television Broadcasters (now just the NAB), it was much like any fancy new tech rolled out at, say, CES: not so much meant for immediate consumption, but rather meant to wow the organizations and companies that could afford to develop and exploit it.

Like pretty much every other modern innovation in visual arts and mass media, 3D followed the same progression through formats: still photography, motion pictures, analog video and broadcast, physical digital media, streaming digital media.

It all began with the stereoscope way back in 1838. That’s when Sir Charles Wheatstone realized that depth perception comes from binocular vision: each eye sees a slightly different image, which the brain combines to create information about depth.

Early efforts at putting 3D images into motion were akin to later animated GIFs (hard G, please), with just a few images repeating in a loop.


While a system that projected separate images side by side was patented in 1890, it was too cumbersome to be practical, and the first commercial test run with an audience came in 1915, with a series of short test films using a red/green anaglyph system. That is, audience members wore glasses with one red and one green filter, and the two images, taken by two cameras spaced slightly apart and dyed in the appropriate hues, were projected on top of each other.

The filters sent each of the images to a different eye and the brain did the rest, creating the illusion of 3D, and this is how the system has worked ever since.

The first actual theatrical release in 3D premiered in Los Angeles on September 27, 1922. It was a film called The Power of Love, and it screened at the Ambassador Hotel Theater, the first of only two public showings.

You might think that 3D TV took a lot longer to develop, since TV itself had only been invented around this time, in 1926, but, surprisingly, that’s not true. John Logie Baird first demonstrated a working 3D TV set in 1928. Granted, it was an entirely mechanical system and not very high-res, but it still worked.

Note the timing, too. TV was invented in the 1920s, but didn’t really take off with consumers until the 1950s. The internet was created in the 1960s, but didn’t really take off with consumers until the 1990s. You want to get rich? Invest in whatever the big but unwieldy and expensive tech of the 1990s was. (Hint, related to this topic: 3D printing.)

That 30 year repeat happens in film, too. As previously noted, the first 3D film premiered in the 1920s, but the golden age came in the 1950s. Guess when 3D came back again? If you said the 1980s, you win a prize. And, obviously, we’ve been in another return to 3D since the ‘10s. You do the math.

Oh, by the way… that 30-year thing applies to 3D printing one more generation back as well. Computer-aided design (CAD), conceived in the very late 1950s, became a thing in the 1960s. It was the direct precursor to the concept of 3D printing because, well, once you’ve digitized the plans for something, you can then put that info back out in vector form and, as long as you’ve got a print head that can move in X-Y-Z coordinates and a way to keep the layers from falling apart before the structure is built… ta-da!

Or, in other words, this is why developing these things takes thirty years.

Still, the tech is one step short of Star Trek replicators and true nerdvana. And I am so glad that I’m not the one who coined that term just now. But, dammit… now I want to go to Tennessee on a pilgrimage, except that I don’t think it’s going to be safe to go there for another, oh, ten years or so. Well, there’s always Jersey. Or not. Is Jersey ever safe?

I kid. I’ve been there. Parts of it are quite beautiful. Parts of it are… on a shitty reality show. Pass.

But… I’d like to add that 3D entertainment is actually far, far older than any of you can possibly imagine. It doesn’t just go back a couple of centuries. It goes back thousands of years. It also didn’t require any fancy technology to work. All it needed was an audience with a majority of members with two eyes.

That, plus performers acting out scenes or telling stories for that audience. And that’s it. There’s your 3D show right there.

Or, as I like to remind people about the oldest and greatest art form: theatre is the original 3D.

Well, nowadays, the original virtual reality as well, but guess what? VR came 30 years after the ’80s wave of 3D film, and 60 years after the ’50s. Funny how that works, isn’t it? It’s almost like we’re totally unaware that our grandparents invented, and our parents perfected, the stuff we’re now too cool to give either of them credit for.

So… maybe let’s look at 3D in another way or two. Don’t think of it as three dimensions. Think of it as two times three decades — how long it took the thing to go from idea to something you take for granted. Or, on a generational level, think of it roughly as three deep: me, my parents, and my grandparents.

Talk about adding depth to a viewpoint.

Image licensed by (CC BY-ND 2.0), used unaltered, Teenager wears Real 3D Glasses by Evan via flickr.

Sunday Nibble #4

While the World Wide Web was born in 1989 and didn’t explode until 1993, the internet itself was born in 1969, with the first (failed) outside attempt to log on to what started as a military network designed to survive nuclear war. And while Al Gore was derided for having claimed to invent the internet, A) he never said that, and B) he actually was instrumental in crafting legislation that led to what we have today.

Also, the internet, like GPS, was a majorly expensive government program originally designed for the military that we wound up getting back because great minds said, “Hey. The people paid for this shit, and it’s really useful, so hand it over.”

But the real point of this nibble is to remind you that today, February 16, is a hidden but important anniversary in the history of the internet. It’s considered to be the birthdate, in 1978, of the first computerized bulletin board system, or CBBS, the precursor to BBSs (the “internet” of the 80s through early 90s) as well as a head-start on the whole concept of HTML and creating a mark-up language in order to allow different computers with different operating systems in different parts of the world to “talk” to each other.

The first CBBS was basically a glorified answering machine, serving one user at a time via a dial-up modem that must have been painfully slow compared to now. But it got the ball rolling, and it was created by a couple of dudes who were in their early 30s at the time but who, ironically, would be derided as boomers now. Well, at least the one who’s still alive. The (recently) dead one, not so much.

So as you have your morning avocado toast, or breakfast scramble, or latte to go, or whatever it is you nosh on early on a Sunday morning, just keep in mind that today is a milestone — one among many — that led directly to your ability to read this on your phone while your car tells you how to get to where you’re going.

Whee!

Wednesday Wonders: Now, Voyager

Wednesday’s theme will be science, a subject that excites me as much as history on Monday and language on Tuesday. Here’s the first installment of Wednesday Wonders.

Now, Voyager

Last week, NASA managed something pretty incredible: they brought the Voyager 2 probe back online after a system glitch forced it to shut down. Basically, the craft was supposed to do a 360° roll in order to test its magnetometer.

When the maneuver didn’t happen (or right before it was going to), two separate energy-intensive systems wound up running at the same time, and the probe went into emergency shutdown to conserve energy, turning off all of its scientific instruments and, in effect, silencing data transmission back home.

The twin Voyager probes are already amazing enough. They were launched in 1977, with Voyager 2 actually lifting off sixteen days before Voyager 1. The reason for the backwards order at the start of the mission is that Voyager 1 was going to “get there first,” as it were.

It was an ambitious project, taking advantage of planetary placement to use various gravitational slingshot maneuvers to allow the probes to visit all of the outer planets — Jupiter and Saturn for both probes, and Uranus and Neptune as well for Voyager 2.

Not included: Pluto, which was still considered a planet at the time. It was in a totally different part of the solar system. Also, by the time the probes got there in 1989, Pluto’s eccentric orbit had actually brought it closer to the Sun than Neptune a decade earlier, a place where it would remain until February 11, 1999. While NASA could have maneuvered Voyager 2 to visit Pluto, there was one small hitch. The necessary trajectory would have slammed it right into Neptune.

Space and force

Navigating space is a tricky thing, as it’s a very big place, and things don’t work like they do down on a solid planet. On Earth, we’re able to maneuver, whether on foot, in a wheeled vehicle, or in an aircraft, because of friction and gravity, which conspire to hold you or your car down to the Earth. In the air, they create a sort of tug of war with the force of lift to keep a plane up there.

When you take a step forward, friction keeps your back foot in place as you push off, then lets your newly planted front foot pull you ahead. Note that this is why it’s so hard to walk on ice: it’s a low-friction surface.

The same principle works with cars (which also don’t do well on ice) with the treads on the tires gripping the road to pull you forward or stop you when you hit the brakes — which also work with friction.

Turning a car works the same way, but with one important trick that was discovered early on. If the wheels on opposite sides share a single solid axle, steering does not result in a smooth turn of the vehicle. The axles need to be independent for one simple reason: the outside wheel has to travel farther to make the same turn, meaning that it has to spin faster.

Faster spin, lower friction, vehicle turns. While the idea of a differential gear doing the same thing in other mechanisms dates back to the 1st century BCE, the idea of doing it in wheeled vehicles wasn’t patented until 1827. I won’t explain it in full here because others have done a better job, but suffice it to say that a differential transfers power from the engine to the wheels, in a very simple and elegant way, at relative rates that depend on which way the wheels are aimed.
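The geometry here is easy to sketch: each wheel traces its own circle around the same center, so the speed ratio falls straight out of the two radii. A minimal illustration (the function name and the figures are mine, just for illustration):

```python
def wheel_speed_ratio(turn_radius_m: float, track_width_m: float) -> float:
    """Ratio of outer-wheel speed to inner-wheel speed in a turn.

    Each wheel traces its own circle; the outer circle is bigger,
    so the outer wheel must cover more ground in the same time.
    """
    inner_radius = turn_radius_m - track_width_m / 2
    outer_radius = turn_radius_m + track_width_m / 2
    return outer_radius / inner_radius

# A car with a 1.5 m track width taking a 10 m radius turn:
print(round(wheel_speed_ratio(10.0, 1.5), 3))  # 1.162
```

Note that the tighter the turn, the bigger the mismatch, which is exactly when a solid axle would scrub the hardest.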

Above the Earth, think of the air as the surface of the road and an airplane’s wings as the wheels. The differential action is provided by flaps which block airflow and slow the wing. So… if you want to turn right, you slow down the right wing by lifting the flaps, essentially accelerating the left wing around the plane, and vice versa for a left turn.

In space, no one can feel you turn

When it comes to space, throw out everything in the last six paragraphs, because you don’t get any kind of friction to use, and gravity only comes into play in certain situations. Bookmark for later, though, that gravity did play a really big part in navigating the Voyager probes.

So, since there’s no friction, sorry, but dogfights in space are not possible. Hell, spacecraft don’t even need wings at all. The only reason the Space Shuttle had them was that it had to land in an atmosphere, and even then they were stubby and weird, and even NASA engineers dubbed the thing a flying brick.

Without friction, constant acceleration is not necessary. One push starts you moving, and you’ll just keep going until you get a push in the opposite direction or you slam into something — which is just a really big push in the opposite direction with more disastrous results.

Hell, this is Newton’s first law of motion in action: “Every object persists in its state of rest or uniform motion in a straight line unless it is compelled to change that state by forces impressed on it.” Push an object out in the vacuum of space, and it will keep going straight until another force is impressed upon it.

Want to turn right or left? Then you need to fire some sort of thruster in the direction opposite to the one you want to turn — booster on the right to turn left, or on the left to turn right. Want to slow down? Then you need to fire that thruster forward.

Fun fact: there’s no such thing as deceleration. There’s only acceleration in the other direction.

Also, if you keep that rear thruster going, your craft is going to keep on accelerating, and over time, this can really add up. For example, Voyager 2 is currently traveling at 15.4 kilometers (9.57 miles) per second, meaning a trip from L.A. to New York would take it less than five minutes.
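That claim is easy to sanity-check. Using an approximate great-circle distance between the two cities (my figure, not NASA’s):

```python
# Voyager 2's cruise speed vs. an L.A.-to-New York trip.
SPEED_KM_PER_S = 15.4     # Voyager 2's current speed
LA_TO_NY_KM = 3_944       # approximate great-circle distance

trip_minutes = LA_TO_NY_KM / SPEED_KM_PER_S / 60
print(f"{trip_minutes:.1f} minutes")  # 4.3 minutes
```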

Far and away

At the moment, though, this probe is 11.5 billion miles away, which works out to more than four million trips between L.A. and New York. It’s also just over 17 light-hours away, meaning that a message out and its response back take one day and ten hours.
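The light-delay arithmetic checks out, too, taking that distance and the speed of light in miles per second:

```python
# Round-trip light delay to Voyager 2 at 11.5 billion miles out.
DISTANCE_MILES = 11.5e9
LIGHT_SPEED_MILES_PER_S = 186_282

one_way_h = DISTANCE_MILES / LIGHT_SPEED_MILES_PER_S / 3600
print(f"one way: {one_way_h:.1f} h, round trip: {2 * one_way_h:.1f} h")
# one way: 17.1 h, round trip: 34.3 h  (about one day and ten hours)
```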

And you thought your S.O. was blowing you off when it took them twenty minutes to reply to your text. Please!

But consider that challenge. Not only is the target so far away, but NASA is aiming at an antenna only 3.66 meters (12 feet) in diameter, and one that’s moving away so fast. Now, granted, we’re not talking “dead on target” here because radio waves can spread out and be much bigger than the target. Still… it is an impressive feat.

The more impressive part, though? We’re talking about technology that is over forty years old and still functioning and, in fact, providing valuable data and going beyond its design specs. Can you point to one piece of tech that you own and still use that’s anywhere near that old? Hell, you’re probably not anywhere near that old, but did your parents or grandparents leave you any tech from the late 70s that you still use? Probably not, unless you’re one of those people still inexplicably into vinyl (why?).

But NASA has a track record of making its stuff last well beyond its shelf-life. None of the Mars rovers were supposed to keep on going like they have, for example, but Opportunity, intended to only last 90 days, kept on going for fifteen years, and the NASA Mars probes that actually made it all seem to last longer than intended.

In the case of Voyager, the big limit is its power supply: plutonium-238 in the form of plutonium oxide. The natural decay of this highly radioactive element generates heat, which thermocouples in a radioisotope thermoelectric generator (RTG) then convert into electricity. At launch, it provided 470 watts of 30-volt DC power, but as of 1997 this had fallen to 335 watts.
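Radioactive decay alone puts a floor under that power loss: plutonium-238 has a half-life of about 87.7 years, so the heat source fades exponentially. A quick sketch of the math (the function is mine, for illustration):

```python
# Exponential decay of Voyager's Pu-238 heat source (half-life ~87.7 years).
# Decay alone would leave ~401 W of the original 470 W after 20 years; the
# observed 335 W shows that the thermocouple converters also degrade.
HALF_LIFE_YEARS = 87.7

def decayed_power(watts_at_launch: float, years_elapsed: float) -> float:
    return watts_at_launch * 0.5 ** (years_elapsed / HALF_LIFE_YEARS)

print(round(decayed_power(470.0, 20), 1))  # 401.3
```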

It’s interesting to note NASA’s estimates from over 20 years ago: “As power continues to decrease, power loads on the spacecraft must also decrease. Current estimates (1998) are that increasingly limited instrument operations can be carried out at least until 2020. [Emphasis added].”

Nerds get it done.

Never underestimate the ability of highly motivated engineers to find workarounds, though, and we’ve probably got at least another five years in Voyager 2, if not more. How do they do it? The same way that you conserve your phone’s battery when you forgot your charger and you hit 15%: power save mode. By selectively turning stuff off — exactly the same way your phone’s power-saver mode does it by shutting down apps, going into dark mode, turning off fingerprint and face-scan recognition, and so on. All of the essential features are still there. Only the bells and whistles are gone.

And still, the durability of NASA stuff astounds. Even when they’ve turned off the heaters for various detectors, plunging them into very sub-zero temperatures, they have often continued to function way beyond the conditions they were designed and tested for.

NASA keeps getting better. Twenty-nine years after the Voyagers, New Horizons was launched, and it managed to reach Pluto and famously photograph that not-a-planet object only 9½ years after lift-off — and with Pluto farther out — as opposed to Voyager 2’s 12 years to Neptune.

Upward and onward, and that isn’t even touching upon the utter value of every bit of information that every one of these probes sends us. We may leave this planet in such bad shape that space will be the only way to save the human race, and NASA is paving the way in figuring out how to do that.

Pretty cool, huh?