Being a basic bit

Zeroes and Ones are the building blocks of what’s known as binary, and the juice that our digital world runs on. Another way to think of it is this. In our so-called Base 10 world, we deal with ten digits, and depending upon what you’re counting, you can either go from 0 to 9 (things) or 1 to (1)0 (dates, place order). Since 9 is the highest digit, when any column hits it, that 9 rolls back to 0 and the digit to its left increments by one — and if there was no digit there yet, it becomes a 1.

So after the number 9, we get 10. After 19, we get 20, after 99, it’s 100, and so on. Also note that 100 happens to be 10 x 10, 1,000 is 10 x 100, 10,000 is 100 x 100, etc. This will be important in a moment.

In the binary world, things roll over faster. In fact, the only digits you have are 0 and 1, so counting works like this: start with 0, then 1. But 1 is as high as we can go, so after 1 comes 10, which, in binary, represents 2.

That might seem strange, but here’s the logic behind it, going back to decimal 10. What is 10, anyway? Well, it’s the number that comes after we’ve run out of digits. Since we’re used to base 10, it doesn’t require any explanation to see that 10 always comes after 9. At least in base 10. I’ll get to that in a moment, but first there’s a very important concept to introduce, and that’s called “powers.”

The powers that be

No, I’m not talking Austin Powers. Rather, raising a number to a power just means multiplying together that many copies of the number. In its basic form, you’ll often see X^n, where n is the power. It’s just a more efficient way of writing things out:

            2 x 2 = 2^2 = 4

            3 x 3 x 3 = 3^3 = 3 x 9 = 27

            10 x 10 x 10 x 10 x 10 = 10^5 = 100 x 100 x 10 = 10,000 x 10 = 100,000

Here’s an interesting thing about powers of 10, though. The end result will always have exactly as many zeros as the exponent, or power that you raised 10 to. 10^9? Simple: 1 followed by nine zeros, or 1,000,000,000. If it’s 10^2, 100, and so on.

And the two fun sort of exceptions that aren’t exceptions to keep in mind:

            X^0 x N = N, aka X^0 = 1

            X x 1 = X^1 = X.

10^1 is 1 followed by 1 zero, or 10; 10^0 is 1 followed by no zeroes, or 1.

In other words, any number to the power of zero equals 1, and any number to the power of 1 equals itself. And there you go, that’s all you need except for this: When it comes to determining what the power is, we count “backwards” from right to left. The last digit before the decimal takes the 0 power, next to the left is 1, next over from that is 2, and so on.

Everything in its place

Since places correspond to powers, in Base 10 we would call the digits, right to left, the ones, tens, hundreds, thousands, ten-thousands, hundred-thousands, and so on places. In binary, you’d have the ones, twos, fours, eights, sixteens, thirty-twos, etc.

Makes sense? Then let’s look at a four-digit number in Base 10: 1776.

But here’s an interesting trick: in computer logic, it often becomes much easier for the circuits to literally read in the digits backwards, so that the powers count upwards in the proper order. This saves the step of having to figure out how long a string is before assigning the proper power to the most significant digit, which is the one farthest to the left.

So, to calculate, we’ll count it from right to left, which will make it easier to follow what’s happening. Let’s go with 6771 for ease of use. The 6 is in the zero position, so it represents 6 x 10^0, which is 6 x 1, meaning just plain old 6.

Next, a 7 times 10^1, which is just 10, so this spot is worth 70 and we’re up to 76.

Next, 7 times 10^2, which is 100 times 7. Add that to the rest, it’s now 776.

Finally, a 1 in the spot multiplied by 10^3, which is 10 x 10 x 10, which is 10 x 100, so… 1,000. Slap that on the rest, and there you go: 1776.
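Incidentally, if you wanted to spell this right-to-left trick out for a computer, it might look something like the following Python sketch (the function name and details here are mine, purely for illustration):

    # A minimal sketch of the right-to-left place-value method described above.
    # 'digits' is a string like "1776"; 'base' is 10 for decimal, 2 for binary.
    # (int(d) only handles the digit characters 0-9, which covers bases up to 10.)
    def to_decimal(digits, base=10):
        total = 0
        power = 0
        for d in reversed(digits):   # feed the digits in backwards, as above
            total += int(d) * base ** power
            power += 1
        return total

    print(to_decimal("1776"))          # 1776
    print(to_decimal("10111110", 2))   # 190

Same idea as the walkthrough: start at the 10^0 place and let the power climb as you move left.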

This works exactly the same way in any other base. So let’s look at a typical binary number: 1011 1110. As humans, we could deal with doing the whole thing backwards, but again, let’s make it easy for the machine, feed it in right to left, and watch the sequence in action:

Digit (D) 0    1    1    1    1    1    0    1
Power (p) 0    1    2    3    4    5    6    7
2^p       1    2    4    8    16   32   64   128
2^p x D   0    2    4    8    16   32   0    128
SUM       0    2    6    14   30   62   62   190

Or in base three, also known as ternary, let’s look at 21221121, entered again in reverse:

Digit (D) 1    2    1    1    2    2    1    2
Power (p) 0    1    2    3    4    5    6    7
3^p       1    3    9    27   81   243  729  2187
3^p x D   1    6    9    27   162  486  729  4374
SUM       1    7    16   43   205  691  1420 5794
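For what it’s worth, you can confirm both of those running sums with Python’s built-in int(), which accepts a base argument:

    # Checking the two tables above with Python's built-in base converter.
    print(int("10111110", 2))   # 190, matching the binary table
    print(int("21221121", 3))   # 5794, matching the base-three table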

Now, let’s take a look at an interesting property in Base 10 and see if it translates over.

Dressed to the nines

In Base 10, any number divisible by nine has digits that add up to nine, or to another multiple of nine whose digits in turn add up to nine (99 → 18 → 9). You can easily see this with the first few pairs of two-digit multiples of nine: 18, 27, 36, 45, 54, and so on. The tens digit goes up by one while the ones digit goes down by one, and that makes perfect sense. Why? Because when you add nine, what you’re really doing is the same as adding 10 and then taking away one.

It doesn’t matter how big the number is. If the digits add up to nine (or reduce to it), then you can say it’s divisible by nine. To just pull a number out of thin air, I guarantee that 83,764,251 is evenly divisible by nine. I could also put any number of nines anywhere in that number and it would still be divisible, or put the digits in any order. And if you have a number that has all of the digits from 0 to 9 in any order, then it’s divisible by 9.

So does this property hold for other bases? What about Base 8? In that case, we should expect seven to be the magic number. I’ll spare you the torturing of Excel I did to run a test, but the answer is: Yes. If a number is divisible by seven in Base 8, then its digits add up to seven. Here’s the list from the Base 8 equivalent of 1 to 99 (which is 1 to 77): 7, 16, 25, 34, 43, 52, 61, 70. Read as Base 10 numbers, most of those aren’t divisible by seven (7 and 70 are coincidental exceptions), but in Base 8 they all are. Here’s how and why it works.

When you divide a number in Base 10 by 9, you start on the left, figure out how many times 9 goes into that part of the number, carry the remainder to the next digit, and repeat the process. So to divide 27 by 9, you start by dividing 20 by 9. That gives you 2 times 9 = 18. Subtract 18 from 20, you get 2. Carry that over to the next place, which is 7; add 2 and 7, and you get 9, which 9 divides exactly once. Add that 1 to the 2 from the first step, and your answer is 3.

Did you notice anything interesting there? The tens digit, 2, wound up appearing twice: once as part of the answer, and again as the remainder carried to the ones place. And what was the other thing we noticed? That’s right. The sum of the digits is 9, so whatever is left over from dividing the tens digit by 9 has to add to the ones digit to total 9.

This is true in any other base. Let’s look at our Base 8 example of 34. We can’t cheat by converting to Base 10, so the 3 tells us that 7 goes into the number three times. But since 3 times 7 is 3 less than 3 times 8, 3 is our remainder. Add that to the 4 to get 7, which 7 divides once more, and boom, done. In Base 8, 34/7 = 3+1 = 4. Convert the Base 8 number to Base 10 to get 28, and voilà… 4 times 7 is 28. The answer is the same either way.

A spot check bears this out with other bases, so it would seem to be a rule (though I’m not sure how to write the formula) that for any Base, B, and any number evenly divisible by B-1, the digits of that number will add up to B-1.
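As for writing the formula, a brute-force check is easier than a proof. Here’s a small Python sketch (my stand-in for the Excel torture mentioned above) that confirms the rule for every base from 2 through 16:

    # In base B, every multiple of B-1 should have a digit sum that is
    # itself divisible by B-1 (the "dressed to the nines" rule).
    def digit_sum(n, base):
        s = 0
        while n:
            s += n % base    # peel off the lowest digit
            n //= base
        return s

    for base in range(2, 17):
        for n in range(0, 10000, base - 1):   # multiples of base-1
            assert digit_sum(n, base) % (base - 1) == 0
    print("Rule holds for bases 2 through 16")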

That’s the funny thing about whole numbers and integers. They have periodicity. What they do is predictable. Multiples of any integer will appear at regular intervals without jumping around no matter how far towards any particular ∞ you go. Irrational numbers and primes, not so much. But it’s good to know that the basis of digital computing is so reliable and regular. In fact, here’s a funny thing about binary: every single number in binary is evenly divisible by 1, because the digits of every single number in it add up to a number divisible by… 1. And a Base 1 numbering system is impossible, because the largest possible digit in it is 0. It also breaks the whole Base rule above, because nothing can be divided by 0. Base 1 is the Black Hole of the numbering system. The rest should be pretty transparent.

Sunday Nibble Extra: Power up

You could say that May 16 can be an electrifying day in history. Or at least a very energetic one. On this day in 1888, Nikola Tesla described what equipment would be needed to transmit alternating current over long distances. Remember, at this time, he was engaged in the “War of the Currents” with that douche, Edison, who was a backer of DC. The only problem with DC (the kind of energy you get out of batteries) is that you need retransmission stations every mile or so. With Tesla’s version, you can send that power a long way down the wires before it needs any bump up in energy.

Of course, it might help to understand in the first place what electric charge is. Here’s Nick Lucid from Science Asylum to explain:

But if you think that electric current flows through a wire like water flows through a pipe, you’re wrong, and there’s a really interesting and big difference between the one and the other, as well as between AC and DC current. DC, meaning “direct current,” only “flows” in one direction, from higher to lower energy states. This is why it drains your batteries, actually — all of the energy potential contained therein sails along its merry way, powers your device, and then dumps off in the lower energy part of the battery, where it isn’t inclined to move again.

A simplification, to be sure, but the point is that any direct current, by definition, loses energy as it moves. Although here’s the funny thing about it, which Nick explains in this next video: neither current moves through that wire like it would in a pipe.

Although the energy in direct current moves from point A to point B at the speed of light, the actual electrons wrapped up in the electromagnetic field do not, and their progress is actually rather slow. If you think about it for a minute, this makes sense. Since your battery is drained when all of the negatively charged electrons move down to their low energy state, if they all moved at the speed of light, your battery would drain in nanoseconds. Rather, it’s the field that moves, while the electrons take their own sweet time moving down the crowded center of the wire — although move they do. It just takes them a lot of time because they’re bouncing around chaotically.

As for alternating current, since its thing is to let the field oscillate back and forth from source to destination, it doesn’t lose energy, but it also keeps its electrons on edge, literally, and they tend to sneak down the inside edges of the wire. However, since they’re just as likely to be on any edge around those 360 degrees, they have an equally slow trip. Even more so, what’s really guiding them isn’t so much their own momentum forward as it is the combination of electricity and magnetism. In AC, it’s a dance between the electric field in the wire and the magnetic field outside of it, which is exactly why the current seems to wind up in a standing wave between points A and B without losing energy.

I think you’re ready for part three:

By the way, as mentioned in that last video, Ben Franklin blew it when he defined positive and negative, but science blew it in not changing the nomenclature, so that the particle that carries electrical charge, the electron, is “negative,” while we think of energy as flowing from the positive terminal of batteries.

It doesn’t. It flows backwards into the “positive” terminals, but that’s never going to get fixed, is it?

But all of that was a long-winded intro to what the Germans did on this same day three years later, in 1891. It was the International Electrotechnical Exhibition, and they proved Edison dead wrong about which form of energy transmission was more efficient and safer. Not only did they use magnetism to create and sustain the energy flow, they used Tesla’s idea of three-phase electric power, which still drives long-distance transmission lines and industrial motors today. (The three prongs on your home outlets, by the way, frequently in that unintended smiley face arrangement, are hot, neutral, and safety ground on a single phase, not three separate phases.)

Eleven years later, Edison would film the electrocution of an elephant in order to “prove” the danger of AC, but he was fighting a losing battle by that point. Plus, he was a colossal douche.

Obviously, the power of AC gave us nationwide electricity. Our earliest telegraph systems, in effect the great-grandparent of the internet, actually ran on batteries, meaning DC, but later on things went hybrid, with the external power for landlines coming from the AC grid, stepped down and converted to DC to operate the internal electronics.

In fact, that’s the only reason that Edison’s version wound up sticking around: the rise of electronics, transistors, microchips, and so on. Powering cities and neighborhoods and so on requires the oomph of AC, but dealing with microcircuits requires the “directionality” of DC.

It does make sense though, if we go back to the water-through-a-pipe analogy, wrong as it is. Computer logic runs on transistors, which are essentially one-way logic gates — input, input, compare, output. This is where computers and electricity really link up nicely. Computers work in binary: 1 or 0; on or off. So does electricity: 1 or 0; positive voltage, no voltage. Alternating current is just going to give you a fog of constant overlapping 1s and 0s. Direct current can be either, or. And that’s why computers convert AC to DC before the power gets to any of the logic circuits.

There’s one other really interesting power-related connection to today, and it’s this: on May 16, 1960, Theodore Maiman fired up the first optical LASER in Malibu, California, which he is credited with creating. Now… what does this have to do with everything before it? Well… everything.

LASER, which should only properly ever be spelled like that, is an acronym for the expression Light Amplification by Stimulated Emission of Radiation.

But that’s it. It was basically applying the fundamentals of electromagnetism (see above) to electrons and photons. The optical version of electrical amplification, really. But here’s the interesting thing about it. Once science got a handle on how LASERs worked, they realized that they could use them to send the same information that they could via electricity.

So… all those telegraphs and telephone calls that used to get shot down copper wires over great distances in analog form? Yeah, well… here was a medium that could do it through much cheaper things called fiber optics, transmit the same data much more quickly, and do it with little energy loss over the same distances.

And, ironically, it really involved the same dance of particles that Tesla realized in figuring out how AC worked way back in the day, nearly a century before that first LASER.

All of these innovations popped up on the same day, May 16, in 1888, 1891, and 1960. I think we’re a bit overdue for the next big breakthrough to happen on this day. See you in 2020?

What is your favorite science innovation involving energy? Tell us in the comments!

Friday Free-for-All #59: Multiple viewings, theater or home, hobby time, techsplosion

The next in an ongoing series in which I answer random questions generated by a website. Here are this week’s questions. Feel free to give your own answers or ask your own questions in the comments.

What movie have you seen more than seven times?

For starters, I know that I’ve watched Stanley Kubrick’s 2001: A Space Odyssey way more than seven times. Once home video and DVD happened, watching 2001 on New Year’s Day instead of a certain parade became a long-standing tradition with me.

The more than seven viewings is also true of several of his films, including Dr. Strangelove, or How I Learned to Stop Worrying and Love the Bomb and A Clockwork Orange.

I can’t leave off The Rocky Horror Picture Show. I’m pretty sure I saw that more than seven times in high school alone, and The Wizard of Oz, It’s a Wonderful Life, and The Ten Commandments also make the list because they are still being rerun at least once a year on TV.

I can’t forget the Star Wars Original Trilogy and most of the Prequel Trilogy. The Sequel Trilogy hasn’t been around long enough yet. As for Star Trek, The Wrath of Khan and The Voyage Home are the only ones I’ve definitely seen that often.

There are a few James Bond films — definitely Goldfinger, Live and Let Die, and Moonraker (one good, one okay, and one cheesy as hell) again because of the TV return thing.

I’m not sure, but I think that Willy Wonka and the Chocolate Factory (that’s the amazing Gene Wilder-starring version and not the Tim Burton travesty) probably also makes the list. Oh. Also, Cabaret, All That Jazz, and West Side Story.

There are probably others, but these are the ones that I can definitely put in the more than seven list.

Do you prefer to watch movies in the theater or in the comfort of your own home?

This is an answer that’s changed enormously. Once upon a time, my reply would have been absolutely in a theater, because that’s where they were made to be seen.

But then as my interest in seeing all the latest MCU/DCEU franchise films fell to zero, waiting for home video or streaming mostly became enough — although I would still go out for the big event films that interested me, mainly Star Wars installments and Blade Runner 2049.

The last film I did see in an actual theatre was Star Wars: The Rise of Skywalker, back in February 2020. It was a mid-weekday thing and there were about four of us in the place.

So already having discovered the joys and convenience of streaming, not to mention the lower cost if it’s something on a service you already have, by the time the theaters shut down it was a no-brainer, and I’m not really inclined to go back anytime soon.

Honestly, seeing a Marvel movie on a big screen doesn’t really add much to it, not compared to the quality I can get at home. Plus I also don’t have to put up with other people, sticky floors, or an endless parade of pre-show trailers and adverts.

What hobby would you get into if time and money weren’t an issue?

I would become a total model train geek, although it would be about more than just the trains. I’d want to create an entire miniature city in a dedicated room, like a full basement, and build it in something like N Scale, which is 1:160, or about 0.075 inches to the foot.

This would make a model of the Empire State building just over 9 feet tall at the tip of its mast, although it would take 33 linear feet of model to make up one mile of street, so it wouldn’t be a very big city. (Z scale would cut this down to 24 feet per mile, but definitely sacrifice some realism.)

To get a scale model of all of San Francisco into an area 33 feet on a side, you’d wind up with city buses being just under half an inch long and a tenth of an inch wide. You’d only need to cut the N scale in half to model two square miles of Midtown Manhattan.

But wait… it does say that time and money aren’t an issue, right? So instead of building a single square mile of city in a basement, why not go for a warehouse or buy an abandoned big box store? Aim for something that would fit fifty or a hundred square miles of city, and if it had multiple floors, go for various layouts — urban mega-city, suburban smaller town, historical city — with a scale ten mile footprint, you could easily build two separate 19th century Main Street towns surrounded by countryside and connected by railroad and telegraph.

And I wouldn’t need to go it alone. Hell, it could become an entire project that would employ model and miniature makers, urban planners, painters, designers, builders, electricians, programmers, and more. Give the big city a working harbor and airport, also have miniature cars and people moving around, design it to not only have a night and day cycle but seasons and weather as well, and it could be quite a thing.

It could even become a tourist attraction. Hell, they already did it in Hamburg, Germany.

And why does the idea fascinate me so much? Maybe because I was into model trains as a kid, although I never had a neat, permanent layout. But this also led to me becoming a big fan of games like Sim City, in which I could indulge my curiosity about building and creating things and see where they led — especially urban landscapes.

Hm. Give me all the resources, and I just might make TinyTowns a major tourist destination.

Why did technology progress more slowly in the past than it does now?

I believe that this is because technological development is exponential, not linear. The latter is a very slow, additive process. You go from 1 to 1+1, or 2, then to 2+1 for 3, and so on. Repeat the process 100 times, and you land on 101.

Take the simplest exponential progression, though, in which each subsequent step is double the one before it. That is, go from 1 to 1×2, or 2, then 2 to 2×2 for 4, and so on. After a hundred steps, your result is about 1.27×10^30, or roughly 1 followed by 30 zeros, which is one nonillion.
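To put actual numbers on that, the whole comparison fits in two lines of Python:

    # One hundred additive steps vs. one hundred doublings.
    print(1 + 100)    # linear: 101
    print(2 ** 100)   # exponential: 1267650600228229401496703205376, about 1.27 x 10^30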

For perspective, a yottabyte — currently the largest digital storage standard yet set — is equal to one trillion terabytes, the latter currently being a very common hard drive size on a home computer. The number noted above is roughly a million times that.

It’s exactly how we wound up with terabyte drives being so common when, not very long ago, a 30 megabyte drive was a big deal. That was really only within a generation or so. This relates to Moore’s Law, first observed in 1965 and commonly paraphrased as “the number of transistors in a computer chip doubles every 18 to 24 months.”

What wasn’t stated with the law was that this doubling didn’t just affect the number of transistors, and therefore the number of simultaneous operations, that a chip could perform. It extended to every other aspect of computers. More operations meant more data, so you could either speed up your clocks or widen your data buses (i.e. how many bits of information can move at once) or both.

And this is why we’ve seen things like computers going from 8-bit to 64-bit operating systems, and memory size ballooning from a few kilobytes to multiple gigabytes, and storage likewise exploding from a handful of kilobytes to terabytes and soon to be commercial petabyte drives.

Perspective: A petabyte drive would hold the entire Library of Congress print archive many times over. It would probably also hold a single print archive and all the film, audio, and photographic content comfortably as well.

Now, all of this exploding computer technology fuels everything else. A couple of interesting examples: Humans went from the first ever manned flight of an airplane to walking on the moon in under 66 years. We discovered radioactivity in 1896 and tested the first atomic bomb 49 years later. The transistor was invented in 1947. The silicon chip integrating multiple transistors was devised in 1959, twelve years later.

And so on. Note, too, that a transistor’s big trick is that it turns old mathematical logic into something that can be achieved by differences in voltage. A logic gate built from transistors has two inputs and an output, and depending upon how it’s wired, it can be set up to do various things, depending upon how the inputs compare and what the circuit has been designed to do.

The beauty of the system comes in stringing multiple transistors together, so that one set may determine whether digits from two different inputs are the same or not, and pass that info on to a third transistor, which may be set to either increment or leave unchanged the value of another transistor, depending on the info it receives.

Or, in other words, a series of transistors can be set up to perform addition, subtraction, multiplication, or division. It’s something that mechanical engineers had figured out ages previously using cogs and levers and gears, and adding machines and the like were a very 19th century technology. But the innovation that changed it all was converting decimal numbers into binary, realizing that the 0 and 1 of binary corresponded perfectly to the “off” and “on” of electrical circuits, then creating transistors that did the same thing those cogs and levers did.
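As an illustration of that gate-level comparing and carrying (my own sketch, not a diagram of any particular chip), here’s a one-bit “half adder,” the smallest building block of digital arithmetic, written in Python:

    # A half adder: two one-bit inputs produce a sum bit and a carry bit.
    # XOR answers "are these inputs different?"; AND produces the carry.
    def half_adder(a, b):
        sum_bit = a ^ b   # 1 if exactly one input is 1
        carry = a & b     # 1 only if both inputs are 1
        return sum_bit, carry

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "=", half_adder(a, b))   # (sum, carry)

Chain enough of these together, with gates to propagate the carries, and you get multi-digit binary addition, and from there the rest of arithmetic.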

Ta-da! You’ve now turned analog math into digital magic. And once that system was in place and working, every other connected bit developed incredibly over time. Some people focused on making the human interfaces easier, moving from coding in obscure and strictly mathematical languages, often written via punch cards or paper tape, to low-level languages that were not much improved but still infinitely better. These still involved a lot of obscure code words and direct entry of numbers (this is where hex, or Base 16, came into computing), but they were at least much more intelligible than square holes in a card.

At the same time, there had to be better outputs than another set of punched cards, or a series of lights on a readout. And the size of data really needed to be upped, too. With only four binary digits, 1111, the highest decimal number you could represent was 15. Jump it to eight digits, 1111 1111, and you got… 255. Don’t forget that 0 is also included in that set, so you really have 256 values, and voilà! The reason for that being such an important number in computing is revealed.
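The pattern generalizes: n bits give you 2^n values, with 2^n - 1 as the biggest. A trivial Python loop, just to show the other magic numbers of computing:

    # Highest value representable in n bits, plus the count of values (including 0).
    for n in (4, 8, 16, 32):
        print(n, "bits:", 2 ** n - 1, "max,", 2 ** n, "values")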

Each innovation fueled the need for the next, and so the ways to input and readout data kept improving until we had so-called high-level programming languages, meaning that on a properly equipped computer, a programmer could type in a command in fairly intelligible language, like,

10 X = “Hello world.”

20 PRINT X

30 END

Okay, stupid example, but you can probably figure out what it does. You could also vary it by starting with INPUT X, in which case the user would get a question mark on screen and the display would return whatever they typed.
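For comparison, and purely as an illustration, here’s that same toy program, INPUT variation included, in modern Python:

    # The BASIC example above, rewritten in Python.
    x = "Hello world."
    print(x)

    # The INPUT variation: prompt the user, then echo back whatever they typed.
    x = input("? ")
    print(x)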

Oh yeah… at around the same time, video displays had become common, replacing actual paper printouts that had a refresh rate slower than a naughty JPG download on a 1200 baud modem. (There’s one for the 90s kids!) Not to mention a resolution of maybe… well, double digits lower than 80 in either direction, anyway.

Surprisingly, the better things got, the better the next versions seemed to get, and faster. Memory exploded. Computer speeds increased. Operating systems became more intuitive and responsive.

And then things that relied on computers took off as well. Car manufacturers started integrating them slowly, at first. Present day, your car is run more by computer than physical control, whether you realize it or not. Cell phones and then smart phones are another beneficiary — and it was the need to keep shrinking transistors and circuits to fit more of them onto chips in the first place that finally made it possible to stick a pretty amazing computer into a device that will fit in your pocket.

Oh yeah… first telephone, 1875. Landline phones were ubiquitous in less than a hundred years, and began to be taken over by cell phones, with the first one being demonstrated in 1973 (with a 4.4 lb handset, exclusive of all the other equipment required), and affordable phones themselves not really coming along until the late 1990s.

But, then, they never went away, and then they only just exploded in intelligence. Your smart phone now has more computing power than NASA and the Pentagon combined did at the time of the Moon landings.

Hell, that $5 “solar” (but probably not) calculator you grabbed in the grocery checkout probably has more computing power than the Lunar Lander that made Neil Armstrong the first human on the Moon.

It’s only going to keep getting more advanced and faster, but that’s a good thing, and this doesn’t even account for how explosions in computing have benefited medicine, communications, entertainment, urban planning, banking, epidemiology, cryptography, engineering, climate science, material design, genetics, architecture, and probably any other field you can think of — scientific, artistic, financial, or otherwise.

We only just began to escape the confines of Analog Ville less than 40 years ago, probably during the mid to late 80s, when Millennials were just kids. By the time the oldest of them were second semester sophomores in college, we had made a pretty good leap out into Digital World, and then just started doubling down, so that two decades into this century, the tech of the turn of the century (that’d be 2000) looks absolutely quaint.

Remember — we had flip phones then, with amazing (cough) 640×480 potato-grade cameras.

Compare that to 1920 vs 1900. A few advances, but not a lot. The only real groundbreaker was that women could now vote in the U.S., but that wasn’t really a technological advance, just a social one. And if you look at 1820 vs. 1800, or any twenty-year gap previously, things would not have changed much at all except maybe in terms of fashion, who the current world monarchs were, or which countries you were currently at war with.

And that, dear readers, is how exponential change works, and why technology will continue to grow in this manner. It’s because every new innovation in technology sows the seeds for both the need and inevitability of its next round of advancement and acceleration.

We pulled the genie out of the bottle in 1947. It’s never going back in.

Momentous Monday: Meet the Middletons

Thanks to boredom and Amazon Prime, I watched a rather weird movie from the 1930s tonight. While it was only 55 minutes long, it somehow seemed much longer because it was so packed with… all kinds of levels of stuff.

The title is The Middleton Family at the New York World’s Fair, and while the content is exactly what it says on the tin, there are so goddamn many moving parts in that tin that this is one worth watching in depth, mainly because it’s a case study in how propaganda can sometimes be wrong, sometimes be right and, really, only hindsight can excavate the truth from the bullshit.

While it seems like a feature film telling the fictional story of the (snow-white but they have a black maid!) Middleton Family from Indiana, who go back east ostensibly to visit grandma in New York but really to attend the New York World’s Fair of 1939, this was actually nothing more than a piece of marketing and/or propaganda created by the Westinghouse Corporation, major sponsor of the fair, poised on the cusp of selling all kinds of new and modern shit to the general public.

Think of them as the Apple, Microsoft and Tesla of their day, with solutions to everything, and the World’s Fair as the biggest ThingCon in the world.

Plus ça change, right?

But there’s also a second, and very political, vein running through the family story. See, Dad decided to bring the family to the fair specifically to convince 16-year-old son Bud that, despite the bad economic news he and his older friends have been hearing about there being no job market (it is the Great Depression, after all), there are, in fact, glorious new careers waiting out there.

Meanwhile, Mom is hoping that older daughter Babs will re-connect with high school sweetheart Jim, who had previously moved to New York to work for (wait for it) Westinghouse. Babs is having none of it, though, insisting that she doesn’t love him but, instead, is in love with her art teacher, Nick.

1939: No reaction.

2020: RECORD SCRATCH. WTF? Yeah, this is one of the first of many disconnect moments that are nice reminders of how much things have changed in the 82 years since this film happened.

Girl, you think you want to date your teacher, and anyone should be cool with that? Sorry, but listen to your mama. Note: in the world of the film, this relationship will become problematic for other reasons but, surprise, the reason it becomes problematic then is actually problematic in turn now. More on which later.

Anyway, the obviously richer-than-fuck white family travels from Indiana to New York (they’re rich because Dad owns hardware stores and they brought their black maid with them) but are too cheap to spring for a hotel, instead jamming themselves into Grandma’s house, which is pretty ritzy as well and says grandma has money too, since her place is clearly close enough to Flushing Meadows in Queens to make the World’s Fair a daily day trip over the course of a weekend.

But it’s okay — everyone owned houses then! (Cough.)

And then it’s off to the fair, and this is where the real value of the film comes in because when we aren’t being propagandized by Westinghouse, we’re actually seeing the fair, and what’s really surprising is how modern and familiar everything looks. Sure, there’s nothing high tech about it in modern terms, but if you dropped any random person from 2020 onto those fairgrounds, they would not feel out of place.

Well, okay, you’d need to put them in period costume first and probably make sure that if they weren’t completely white they could pass for Italian or Greek.

Okay, shit. Ignore that part, let’s move along — as Jimmy, Babs’ high school sweetheart and Westinghouse Shill character, brings us into the pavilion. And there are two really weird dynamics here.

First is that Jimmy is an absolute cheerleader for capitalism, which is jarring without context — get back to that in a moment.

The other weird bit is that Bud seems to be more into Jimmy than Babs ever was, and if you read too much gay subtext into their relationship… well, you can’t read too much in, really. Watch it through that filter, and this film takes on a very different and subversive subplot. Sure, it’s clear that the family really wishes Jimmy was the guy Babs stuck with, but it sure feels like Bud wouldn’t mind calling him “Daddy.”

But back to Jimmy shilling for Westinghouse. Here’s the thing: Yeah, sure, he’s all “Rah-Rah capitalism!” and this comes into direct conflict with Nicholas, who is a self-avowed communist. But… the problem is that in America, in 1939, capitalism was the only tool that socialism could use to lift us out of depression and, ultimately, create the middle class.

There’s even a nod to socialism in the opening scene, when Bud tells his dad that the class motto for the guys who graduated the year before was, “WPA, here we come!” The WPA was the government works program designed to create jobs with no particular aim beyond putting people to work.

But once the WPA partnered with those corporations, boom. Jobs. And this was the beginning of the creation of the American Middle Class, which led to the ridiculous prosperity for (white) people from the end of WW II until the 1980s.

More on that later, back to the movie now. As a story with relationships, the film actually works, because we do find ourselves invested in the question, “Who will Babs pick?” It doesn’t help, though, that the pros and cons are dealt with in such a heavy-handed manner.

Jimmy is amazing in every possible way — young, tall, intelligent, handsome, and very knowledgeable at what he does. Meanwhile, Nicholas is short, not as good-looking (clearly cast to be more Southern European), obviously a bit older than Babs, and has a very unpleasant personality.

They even give him a “kick the puppy” moment when Babs introduces brother Bud, and Nicholas pointedly ignores the kid. But there’s that other huge issue I already mentioned that just jumps out to a modern audience and yet never gets any mention by the other characters. The guy Babs is dating is her art teacher. And not as in past art teacher, either. As in currently the guy teaching her art.

And she’s dating him and considering marriage.

That wouldn’t fly more than a foot nowadays, and yet in the world of 1939 it seems absolutely normal, at least to the family. Nowadays, it would be the main reason to object to the relationship. Back then, it isn’t even considered.

Wow.

The flip side of the heavy-handed comes in some of Jimmy’s rebukes of Nicholas’ claims that all of this technology and automation will destroy jobs. While the information Jimmy provides is factual, the way his dialogue here is written and delivered comes across as condescending and patronizing to both Nicholas and the audience, and these are the moments when Jimmy’s character seems petty and bitchy.

But he’s also not wrong, and history bore that out.

Now this was ultimately a film made to make Westinghouse look good, and a major set piece involved an exhibit at the fair that I actually had to look up because at first it was very easy to assume that it was just a bit of remote-controlled special effects set up to pitch an idea that didn’t really exist yet — the 1930s version of vaporware.

Behold Elektro! Here’s the sequence from the movie and as he was presented at the fair. Watch this first and tell me how you think they did it.

Well, if you thought remote operator controlling movement and speaking lines into a microphone like I did at first, that’s understandable. But the true answer is even more amazing: Elektro was completely real.

The thing was using sensors to actually interpret the spoken commands and turn them into actions, which it did by sending light signals to its “brain,” located at the back of the room. You can see the lights flashing in the circular window in the robot’s chest at around 2:30.

Of course, this wouldn’t be the 1930s if the robot didn’t engage in a little bit of sexist banter — or smoke a cigarette. Oh, such different times.

And yet, in a lot of ways, the same. Our toys have just gotten a lot more powerful and much smaller.

You can probably guess which side of the argument wins, and while I can’t disagree with what Westinghouse was boosting at the time, I do have to take issue with one explicit statement. Nicholas believes in the value of art, but Jimmy dismisses it completely, which is a shame.

Sure, it’s coming right out of the Westinghouse corporate playbook, but that part makes no sense, considering how much of the world’s fair and their exhibit hall itself relied on art, design, and architecture. Even if it’s just sizzle, it still sells the steak.

So no points to Westinghouse there but, again, knowing what was about to come by September of 1939 and what a big part industry would have in ensuring that the anti-fascists won, I can sort of ignore the tone-deafness of the statement.

But, like the time-capsule shown in the film, there was a limited shelf-life for the ideas Westinghouse was pushing, and they definitely expired by the dawn of the information age, if not a bit before that.

Here’s the thing: capitalism as a system worked in America when… well, when it worked… and didn’t when it didn’t. Prior to about the early 1930s, when it ran unfettered, it didn’t work at all — except for the super-wealthy robber barons.

Workers had no rights or protections, there were no unions, or child-labor laws, or minimum wages, standard working hours, safety rules, or… anything to protect you if you didn’t happen to own a big chunk of shit.

In other words, you were management, or you were fucked.

Then the whole system collapsed in the Great Depression and, ironically, it took a member of the 1% Patrician Class (FDR) being elected president to then turn his back on his entire class and dig in hard for protecting the workers, enacting all kinds of jobs programs, safety nets, union protections, and so on.

Or, in other words, capitalism in America didn’t work until it was linked to and reined-in by socialism. So we never really had pure capitalism, just a hybrid.

And, more irony: this socio-capitalist model was reinforced after Pearl Harbor Day, when everyone was forced to share and work together and, suddenly, the biggest workforce around was the U.S. military. It sucked in able-bodied men between 17 and 38, and the weird side-effect of the draft stateside was that suddenly women and POC were able to get jobs because there was no one else to do them.

Manufacturing, factory jobs, support work and the like boomed, and so did the beginnings of the middle class. When those soldiers came home, many of them returned to benefits that gave them cheap or free educations, and the ability to buy homes.

They married, they had kids, and they created the Boomers, who grew up in the single most affluent time period in America ever.

Side note: There were also people who returned from the military who realized that they weren’t like the other kids. They liked their own sex, and couldn’t ever face returning home. And so major port towns — San Francisco, Los Angeles, Long Beach, San Diego, Boston, New York, Miami, New Orleans — were flooded with the seeds of future GLB communities. Yes, it was in that order back then, and TQIA+ hadn’t been brought into the fold yet. Well, later, in the 60s. There really wasn’t a name for it or a community in the 1940s.

In the 60s, because the Boomers had grown up with affluence, privilege, and easy access to education, they were also perfectly positioned to rebel their asses off because they could afford to, hence all of the protests and whatnot of that era.

And this sowed the seeds of the end of this era, ironically.

The socio-capitalist model was murdered, quite intentionally, beginning in 1980, when Ronald fucking Reagan became President, and he and his cronies slowly began dismantling everything created by every president from FDR through, believe it or not, Richard Nixon. (Hint: EPA.)

The mantra of these assholes was “Deregulate Everything,” which was exactly what the world was like in the era before FDR.

Just one problem, though. Deregulating any business is no different from getting an alligator to not bite you by removing their muzzle and then saying to them, “You’re not going to bite me, right?”

And then believing them when they swear they won’t before wondering why you and everyone you know has only one arm.

Still, while it supports an economic system that just isn’t possible today without a lot of major changes, The Middletons still provides a nice look at an America that did work because it focused on invention, industry, and manufacturing not as a way to enrich a few shareholders, but as a way to enrich everyone by creating jobs, enabling people to actually buy things, and creating a rising tide to lift all boats.

As for Bud, he probably would have wound up in the military, learned a couple of skills, finished college quickly upon getting out, and then would have gone to work for a major company, possibly Westinghouse, in around 1946, starting in an entry-level engineering job, since that’s the skill and interest he picked up during the War.

Along the way, he finds a wife, gets married and starts a family, and thanks to his job, he has full benefits — for the entire family, medical, dental, and vision; for himself, life insurance to benefit his family; a pension that will be fully vested after ten years; generous vacation and sick days (with unused sick days paid back every year); annual bonuses; profit sharing; and union membership after ninety days on the job.

He and the wife find a nice house on Long Island — big, with a lot of land, in a neighborhood with great schools, and easy access to groceries and other stores. They’re able to save long-term for retirement, as well as for shorter-term things, like trips to visit his folks in Indiana or hers in Miami or, once the kids are old enough, all the way to that new Disneyland place in California, which reminds Bud a lot of the World’s Fair, especially Tomorrowland.

If he’s typical for the era, he will either work for Westinghouse for his entire career, or make the move to one other company. Either way, he’ll retire from an executive level position in about 1988, having been in upper management since about 1964.

With savings, pensions, and Social Security, he and his wife decide to travel the world. Meanwhile, their kids, now around 40 and with kids about to graduate high school, aren’t doing so well, and aren’t sure how they’re going to pay for their kids’ college.

They approach Dad and ask for help, but he can’t understand. “Why don’t you just do what I did?” he asks them.

“Because we can’t,” they reply.

That hopeful world of 1939 is long dead — although, surprisingly, the actor who played Bud is still quite alive.

Image: Don O’Brien, Flickr, 2.0 Generic (CC BY 2.0), the Middleton Family in the May 1939 Country Gentleman ad for the Westinghouse World’s Fair exhibits.

Wonderous Wednesday: 5 Things that are older than you think

A lot of our current technology seems surprisingly new. The iPhone is only about fourteen years old, for example, although the first Blackberry, a more primitive form of smart phone, came out in 1999. The first actual smart phone, IBM’s Simon Personal Communicator, was introduced in 1992 but not available to consumers until 1994. That was also the year that the internet started to really take off with people outside of universities or the government, although public connections to it had been available as early as 1989 (remember Compuserve, anyone?), and the first experimental internet nodes were connected in 1969.

Of course, to go from room-sized computers communicating via acoustic modems along wires to handheld supercomputers sending their signals wirelessly via satellite took some evolution and development of existing technology. Your microwave oven has a lot more computing power than the system that helped us land on the moon, for example. But the roots of many of our modern inventions go back a lot further than you might think. Here are five examples.

Alarm clock

As a concept, alarm clocks go back to the ancient Greeks, frequently involving water clocks. These were designed to wake people up before dawn, in Plato’s case to make it to class on time, which started at daybreak; later, they woke monks in order to pray before sunrise.

From the late middle ages, church towers became town alarm clocks, with the bells set to strike at one particular hour per day, and personal alarm clocks first appeared in 15th-century Europe. The first American alarm clock was made by Levi Hutchins in 1787, but he only made it for himself since, like Plato, he got up before dawn. Antoine Redier of France was the first to patent a mechanical alarm clock, in 1847. Because production stopped during WWII, with metal and machine shops appropriated for the war effort (and older clocks breaking down in the meantime), alarm clocks became one of the first consumer items to go back into mass production even before the war ended. Atlas Obscura has a fascinating history of alarm clocks that’s worth a look.

Fax machine

Although it’s pretty much a dead technology now, it was the height of high tech in offices in the 80s and 90s, but you’d be hard pressed to find a fax machine that isn’t part of the built-in hardware of a multi-purpose networked printer nowadays, and that’s only because it’s such a cheap legacy to include. But it might surprise you to know that the prototypical fax machine, originally an “Electric Printing Telegraph,” dates back to 1843.

Basically, as soon as humans figured out how to send signals down telegraph wires, they started to figure out how to encode images — and you can bet that the second image ever sent in that way was a dirty picture. Or a cat photo.

Still, it took until 1964 for Xerox to finally figure out how to use this technology over phone lines and create the Xerox LDX. The scanner/printer combo was available to rent for $800 a month — the equivalent of around $6,500 today — and it could transmit pages at a blazing 8 per minute. The second generation fax machine only weighed 46 lbs and could send a letter-sized document in only six minutes, or ten pages per hour. Whoot — progress!

You can actually see one of the Electric Printing Telegraphs in action in the 1948 movie Call Northside 777, in which it plays a pivotal role in sending a photograph cross-country in order to exonerate an accused man.

In case you’re wondering, the title of the film refers to a telephone number from back in the days before what was originally called “all digit dialing.” Up until then, telephone exchanges (what we now call prefixes) were identified by the first two letters of a word, and then another digit or two or three. (Once upon a time, in some areas of the US, phone numbers only had five digits.) So NOrthside 777 would resolve itself to 66-777, with 66 — the NO of NOrthside — being the prefix. This system started to end in 1958, and a lot of people didn’t like that.

Of course, with the advent of cell phones, prefixes and even area codes have become pretty meaningless, since people tend to keep the number they had in their home town regardless of where they move to, and a “long distance call” is mostly a dead concept now as well, which is probably a good thing.

CGI

When do you suppose the first computer animation appeared on film? You may have heard that the original 2D computer generated imagery (CGI) used in a movie was in 1973 in the original film Westworld, inspiration for the recent TV series. Using very primitive equipment, the visual effects designers simulated pixelation of actual footage in order to show us the POV of the robotic gunslinger played by Yul Brynner. It turned out to be a revolutionary effort.

The first 3D CGI happened to be in this film’s sequel, Futureworld in 1976, where the effect was used to create the image of a rotating 3D robot head. However, the first ever CGI sequence was actually made in… 1961. Called Rendering of a planned highway, it was created by the Swedish Royal Institute of Technology on what was then the fastest computer in the world, the BESK, driven by vacuum tubes. It’s an interesting effort for the time, but the results are rather disappointing.

Microwave oven

If you’re a Millennial, then microwave ovens have pretty much always been a standard accessory in your kitchen, but home versions don’t predate your birth by much. Sales began in the late 1960s. By 1972 Litton had introduced microwave ovens as kitchen appliances. They cost the equivalent of about $2,400 today. As demand went up, prices fell. Nowadays, you can get a small, basic microwave for under $50.

But would it surprise you to learn that the first microwave ovens were created just after World War II? In fact, they were the direct result of it, due to a sudden lack of demand for magnetrons, the devices used by the military to generate radar in the microwave range. Not wanting to lose the market, their manufacturers began to look for new uses for the tubes. The idea of using radio waves to cook food went back to 1933, but those devices were never developed.

Around 1946, engineers accidentally realized that the microwaves coming from these devices could cook food, and voilà! In 1947, the technology was developed, although only for commercial use, since the devices were taller than an average man, weighed 750 lbs and cost the equivalent of $56,000 today. It took 20 years for the first home model, the Radarange, to be introduced for the mere sum of $12,000 of today’s dollars.

Music video

Conventional wisdom says that the first music video to ever air went out on August 1, 1981 on MTV, and it was “Video Killed the Radio Star” by The Buggles. As is often the case, conventional wisdom is wrong. It was the first to air on MTV, but the concept of putting visuals to rock music as a marketing tool goes back a lot farther than that.

Artists and labels were making promotional films for their songs back at almost the beginning of the 1960s, with the Beatles a prominent example. Before these, though, was the Scopitone, a jukebox that could play films in sync with music popular from the late 1950s to mid-1960s, and their predecessor was the Panoram, a similar concept popular in the 1940s which played short programs called Soundies.

However, these programs played on a continuous loop, so you couldn’t choose your song. Soundies were produced until 1946, which brings us to the real predecessor of music videos: Vitaphone Shorts, produced by Warner Bros. as sound began to come to film. Some of these featured musical acts and were essentially miniature musicals themselves. They weren’t shot on video, but they introduced the concept all the same. Here, you can watch a particularly fun example from 1935 in 3-strip Technicolor that also features cameos by various stars of the era in a very loose story.

Do you know of any things that are actually a lot older than people think? Let us know in the comments!

Photo credit: Jake von Slatt

Why astrology is bunk

This piece, which I first posted in 2019, continues to get constant traffic and I haven’t had a week go by that someone hasn’t given it a read. So I felt that it was worth bringing to the top again.

I know way too many otherwise intelligent adults who believe in astrology, and it really grinds my gears, especially whenever I see a lot of “Mercury is going retrograde — SQUEEEE” posts, and they are annoying and wrong.

The effect that Mercury in retrograde will have on us: Zero.

Fact

Mercury doesn’t “go retrograde.” We catch up with and then pass it, so it only looks like it’s moving backwards. It’s an illusion, and entirely a function of how planets orbit the sun, and how things look from here. If Mars had (semi)intelligent life, they would note periods when the Earth was in retrograde, but it’d be for the exact same reason.

Science

What force, exactly, would affect us? Gravity is out, because the gravitational effect of anything else in our solar system or universe is dwarfed by the Earth’s. When it comes to astrology at birth, your OB/GYN has a stronger gravitational effect on you than Mars does.

On top of that, the Sun has 99.9% of the mass of our solar system, which is how gravity works, so the Sun has the greatest gravitational influence on all of the planets. We only get a slight exception because of the size of our Moon and how close it is, but that’s not a part of astrology, is it? (Not really. They do Moon signs, but it’s not in the day-to-day.)
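If you want to check that claim yourself, Newton’s law (acceleration = GM/r^2) makes it a two-minute exercise. Here’s a rough Python sketch; the masses and distances are ballpark figures I’m assuming, not precise ephemeris data:

    # Gravitational acceleration a = G * M / r^2, in m/s^2.
    G = 6.674e-11   # gravitational constant

    def accel(mass_kg, distance_m):
        return G * mass_kg / distance_m ** 2

    # Mars at its closest approach to Earth (~5.6e10 m) vs. a 100 kg
    # obstetrician standing half a meter away.
    print("Mars:  ", accel(6.4e23, 5.6e10))   # ~1.4e-8 m/s^2
    print("Doctor:", accel(100, 0.5))         # ~2.7e-8 m/s^2

The doctor wins by roughly a factor of two.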

Some other force? We haven’t found one yet.

History

If astrology were correct, then there are two possibilities. A) It would have predicted the existence of Uranus and Neptune, and possibly Pluto, long before they were discovered, since astrology goes back to ancient times, but those discoveries happened in the modern era, or B) It would not have allowed for the addition of those three planets (and then the removal of Pluto) once discovered, since all of the rules would have been set down. And it certainly would have accounted for the ecliptic passing through a 13th constellation, Ophiuchus, something astrology’s tidy 12-sign system has never addressed.

So… stop believing in astrology, because it’s bunk. Mercury has no effect on us whatsoever, other than when astronomers look out with telescopes and watch it transit the Sun, and use its movements to learn more about real things, like gravity.

Experiment

The late, great James Randi, fraud debunker extraordinaire, did a classroom exercise that demolishes the accuracy of those newspaper horoscopes, and here it is — apologies for the low quality video.

Yep. Those daily horoscopes you read are general enough to be true for anyone, and confirmation bias means that you’ll latch onto the parts that fit you and ignore the parts that don’t — although, again, they’re designed to fit anyone. And no one is going to remember the generic advice or predictions sprinkled in or, if they do, they’ll again apply confirmation bias only when they think they came true.

“You are an intuitive person who likes to figure things out on your own, but doesn’t mind asking for help when necessary. This is a good week to start something new, but be careful on Wednesday. You also have a coworker who is plotting to sabotage you, but another who will come to your aid. Someone with an S in their name will become suddenly important, and they may be an air sign. When you’re not working on career, focus on home life, although right now your Jupiter is indicating that you need to do more organizing than cleaning. There’s some conflict with Mars, which says that you may have to deal with an issue you’ve been having with a neighbor. Saturn in your third house indicates stability, so a good time to keep on binge-watching your favorite show, but Uranus retrograde indicates that you’ll have to take extra effort to protect yourself from spoilers.”

So… how much of that fit you? Or do you think will? Honestly, it is 100% pure, unadulterated bullshit that I just made up, without referencing any kind of astrological chart at all, and it could apply to any sign because it mentions none.

Plus, Uranus “going retrograde” is nothing special anyway. From the Earth’s point of view, every planet farther out than us appears to go retrograde for a stretch every single year, for the same reason Mercury does.

Conclusion

If you’re an adult, you really shouldn’t buy into this whole astrology thing. The only way any of the planets would have any effect at all on us is if one of them suddenly slammed into the Earth. That probably happened only once, very early on, and it’s likely what created the Moon. So ultimately not a bad thing… except for anything living here at the time.

Friday Free for all #50: Weird, rude, escalation, old-fashioned

The next in an ongoing series in which I answer random questions generated by a website. Here are this week’s questions. Feel free to give your own answers in the comments.

What’s the weirdest thing about modern life that people just accept as normal?

Hands down, it has to be the tone and level of discourse on social media. Can you imagine if, say, real-life parties or bars worked like this? Well… I mean, when they open again. Sure, every bar has its occasional fight break out, but if bars were anything like social media, they would turn into constant riots.

I’d imagine that conversations would go something like this. One friend says to another, “I really didn’t like that last movie starring X,” and her friend agrees. A passing stranger walks up and says, “You’re full of shit and don’t know what you’re talking about.”

Suddenly, a bunch of friends (and strangers) are coming up, most to defend her, some to attack her, and some to support the stranger. When someone else outside the group starts making random comments attacking people, comments that are rude, racist, sexist, homophobic, transphobic, or any combination of the above, that’s when the fists start flying.

Of course, some of this crap has spilled out into real life, seen most recently with the failed insurrection of January 6. That, everybody, was an example of a typical internet chat thread bursting out into real life — except, of course, that the conversation was mostly one-sided and completely stupid.

Speaking of which, this will post the day after all of those folks’ fantasies about March 4 absolutely fail to come true. I wonder what they’ll all do then. Ideally, just slink back home to their caves and shut up for good.

Except that I don’t doubt for a second that they’re going to pick another date to conspire about in anticipation instead.

If animals could talk, which would be the rudest?

Absolutely no question, I think it would be cats. They’d probably be very opinionated, sarcastic, and demanding. They’d probably also have very foul mouths.

What escalated very quickly?

January 6, 2021. I don’t think I need to explain why.

What’s something you like to do the old-fashioned way?

Nothing. It’s called the “old-fashioned way” for a reason, and that’s because it’s old-fashioned. I prefer to take advantage of whatever technology can offer me, shady sides of social media included.

I can’t even imagine trying to write things on a typewriter, or all the crap that goes with that — correction fluid or tape, carbon paper, only having a single physical copy of the first draft until you go out and photocopy it at great cost.

Or phones. A phone that’s physically wired into a wall? No thanks. That’s so last century. So is a wireless phone — that connects to a cradle that’s physically wired into a wall. Not to mention phone calls. Who does those anymore?

Except maybe for business, and only if you’re dealing with a company that’s too behind the times to have a useful web presence, but damn is that annoying.

I can’t think of the last time I’ve mailed anything with a stamp on it, or handwritten a letter, and that’s fine with me. And speaking of handwritten, are we done with cursive yet? That shit should have gone out with the first word processors.

I’ve given up on broadcast TV — not that there are that many channels left, even via HD — and only get my programming through streaming. I will sometimes listen to the radio in the car, but only if none of the podcasts I follow has a new update.

Speaking of music, I am so glad I don’t have to deal with vinyl or record players. Not only is vinyl cumbersome, heavy, and not all that environmentally friendly, but the sound quality is not that great, unless you like pops and hisses, or the needle suddenly skipping or getting stuck. Give me digital any day.

That’s probably the big difference between modern and old-fashioned, really. To modernize is to learn to let go of the need to own tangible versions of things. Music, movies, books, photos, and more — you name it and you can digitize it, then carry it with you on your phone, stick it on your computer, or keep it in the cloud to access from anywhere.

One big advantage? You can’t lose it all in a fire if it’s not all living in one place.

Yet… I do know people who insist on doing things the old-fashioned way. My last job was totally like that, although only two of us working there were under 60, which could explain a lot. So, while we could have gone a lot more digital and modern with things, everybody else wanted to do it on paper, which I think really slowed us down.

Not to mention that the clients, who were almost all 65 and up, mostly tended to be barely technologically literate, and that made things difficult as well. I can’t tell you how many times someone would tell me, “I sent it to your email, but it came back undeliverable.”

“What email did you send it to?”

“www.yourcompanyname.com.”

“Um… that’s not an email.”

But it’s not just Boomers who have the issue, either. I know people my age and younger who don’t do computers, some of whom even use typewriters or do everything on paper, and I just don’t get it.

Why, in this day and age, when you can carry more computer power in your pocket than NASA had when they landed a human on the Moon, would you not avail yourself of it?

So, yeah. About the only thing I’ll do old-fashioned is a donut, and that’s only because that’s what they call the style. Otherwise, no thank you.

Wednesday Wonders: Adding depth

Sixty-eight years ago next month, the first-ever experimental broadcast of a TV show in 3D happened, via KECA-TV in Los Angeles. If those call letters don’t sound familiar to any of my Southern California audience, that’s because they only lasted for about the first four-and-a-half years of the station’s existence, at which point they became the now very familiar KABC-TV, the local ABC affiliate also known as digital and broadcast channel 7.

The program itself was a show called Space Patrol, which was originally a 15-minute program that was aimed at a juvenile audience and aired daily. But once it became a hit with adults, ABC added half-hour episodes on Saturday.

Remember, at this point in television, they were at about the same place as internet programming was in 2000.

By the way, don’t confuse this show with the far more bizarre British production of 1962 with the same name. It was done with marionettes, and judging from this promotional trailer for a DVD release of restored episodes, it was incredibly weird.

Anyway, because of its subject matter and popularity, it was a natural for this broadcast experiment. This was also during the so-called “golden age” of 3D motion pictures, and since the two media were in fierce competition back in the day, it was an obvious move.

Remember — at that time, Disney didn’t own ABC, or anything else. In fact, the studios were not allowed to own theaters, or TV stations.

The original 3D broadcast was designed to use glasses, of course, although not a lot of people had them, so for everyone else it would have been a blurry mess. Note that color TV was still a rarity at the time, so the glasses would have used polarizing lenses rather than the red/blue anaglyph approach possible in movies.

Since it took place during the 31st gathering of what was then called the National Association of Radio and Television Broadcasters (now just the NAB), it was exactly like any fancy new tech rolled out at, say, CES: not so much meant for immediate consumption, but rather to wow the organizations and companies that could afford to develop and exploit it.

Like pretty much every other modern innovation in visual arts and mass media, 3D followed the same progression through formats: still photography, motion pictures, analog video and broadcast, physical digital media, streaming digital media.

It all began with the stereoscope way back in 1838. That’s when Sir Charles Wheatstone realized that 3D perception happens because of binocular vision: each eye sees a slightly different image, and the brain combines the two to extract information about depth.

Early efforts at putting 3D images into motion were akin to later animated GIFs (hard G, please), with just a few images repeating in a loop.


While a system that projected separate images side-by-side was patented in 1890 (it was too cumbersome to be practical), the first commercial test run with an audience came in 1915, with a series of short test films using a red/green anaglyph system. That is, audience members wore glasses with one red and one green filter, and the two images, taken by two cameras spaced slightly apart and dyed in the appropriate hues, were projected on top of each other.

The filters sent each of the images to a different eye and the brain did the rest, creating the illusion of 3D, and this is how the system has worked ever since.

The first actual theatrical release in 3D premiered in Los Angeles on September 27, 1922. It was a film called The Power of Love, and it screened at the Ambassador Hotel Theater, the first of only two public showings.

You might think that 3D TV took a lot longer to develop, since TV itself had only just been invented in 1926, but, surprisingly, that’s not true. John Logie Baird first demonstrated a working 3D TV set in 1928. Granted, it was an entirely mechanical system and not very high-res, but it still worked.

Note the timing, too. TV was invented in the 1920s, but didn’t really take off with consumers until the 1950s. The Internet was created (as ARPANET) in the late 1960s, but didn’t really take off with consumers until the 1990s. You want to get rich? Invest in whatever the big but unwieldy and expensive tech of the 1990s was. (Hint, related to this topic: 3D printing.)

That 30 year repeat happens in film, too. As previously noted, the first 3D film premiered in the 1920s, but the golden age came in the 1950s. Guess when 3D came back again? If you said the 1980s, you win a prize. And, obviously, we’ve been in another return to 3D since the ‘10s. You do the math.

Oh, by the way… that 30 year thing applies to 3D printing one more generation back as well. Computer aided design (CAD), conceived in the very late 1950s, became a thing in the 1960s. It was the direct precursor to the concept of 3D printing because, well, once you’ve digitized the plans for something, you can then put that info back out in vector form and, as long as you’ve got a print-head that can move in X-Y-Z coordinates and a way to not have layers fall apart before the structure is built… ta-da!

Or, in other words, this is why developing these things takes thirty years.

Still, the tech is one step short of Star Trek replicators and true nerdvana. And I am so glad that I’m not the one who coined that term just now. But, dammit… now I want to go to Tennessee on a pilgrimage, except that I don’t think it’s going to be safe to go there for another, oh, ten years or so. Well, there’s always Jersey. Or not. Is Jersey ever safe?

I kid. I’ve been there. Parts of it are quite beautiful. Parts of it are… on a shitty reality show. Pass.

But… I’d like to add that 3D entertainment is actually far, far older than any of you can possibly imagine. It doesn’t just go back a couple of centuries. It goes back thousands of years. It also didn’t require any fancy technology to work. All it needed was an audience with a majority of members with two eyes.

That, plus performers acting out scenes or telling stories for that audience. And that’s it. There’s your 3D show right there.

Or, as I like to remind people about the oldest and greatest art form: theatre is the original 3D.

Well, nowadays, it’s the original virtual reality as well, but guess what? VR came 30 years after the ’80s wave of 3D film, and 60 years after the ’50s wave. Funny how that works, isn’t it? It’s almost as if we’re totally unaware that our grandparents invented the stuff, our parents perfected it, and we’re too cool to think either generation was ever any good at it.

So… maybe let’s look at 3D in another way or two. Don’t think of it as three dimensions. Think of it as two times three decades — how long it took the thing to go from idea to something you take for granted. Or, on a generational level, think of it roughly as three deep: me, my parents, and my grandparents.

Talk about adding depth to a viewpoint.

Image licensed by (CC BY-ND 2.0), used unaltered, Teenager wears Real 3D Glasses by Evan via flickr.

Look, up in the sky!

Throughout history, humans have been fascinated with the sky, and a lot of our myths were attempts to explain what goes on up there. In many cultures, the five planets visible to the naked eye (Mercury, Venus, Mars, Jupiter, and Saturn) were named after deities or attributes of the planets with surprising consistency.

Mercury was often named for how swiftly it moves across the sky; Venus was always associated with beauty because of its brightness; Mars’s red color led to it being named either for a violent deity or for that color; Jupiter was always associated with the chief deity, even though nobody in those times had any idea it was the largest planet; and Saturn, at least in the west, was named after Jupiter’s father.

This led to Uranus, which wasn’t discovered until the 18th century, being named after Saturn’s father, i.e. Jupiter’s grandfather. Neptune, discovered in the 19th century, and Pluto, discovered in the 20th century before being rightfully demoted from planetary status, were only named for Jupiter’s less cool brothers.

Since the planets were given attributes associated with deities, their relationships to each other must have meant something, and so the bogus art of astrology was invented. It was obviously incomplete before Uranus, Neptune, and Pluto were added, and then clearly incorrect during the entire period that Pluto counted as a planet. (Hint: That was a joke. It was incorrect the entire time.)

Since humans are also hard-wired to see patterns, the stars above led to the definition of constellations, the night-time version of the “What is that cloud shaped like?” game.

It wasn’t really until the Renaissance and the rise of science, including advances in optics (the field behind telescopes, which Newton famously improved with his reflector), that we really started to look at the skies and study them. But it is still astounding how many laypeople know so little about what’s up there that completely natural phenomena have been freaking us out since forever. Here are five examples of things in the sky that made people lose their stuff.

1. Total eclipse of the heart… er… Sun

Until science openly explained them, eclipses of any kind were scary. For one thing, nobody knew when they were coming until royal astronomers became a thing, and even then only the elite were privy to the information. So the Sun would go out, or the Moon would turn blood red, or either one would appear to lose a piece, at random and without warning. Generally, the belief was that the Moon or Sun (particularly the latter) was being consumed by some malevolent yet invisible beast that needed to be scared away.

But long after modern science explained that an eclipse was nothing more than the Moon passing in front of the Sun or the Earth passing in front of the Moon, shit went down in 1878, at least in low-information areas.

The thing about this eclipse was that it had been predicted close to a century before, had been well-publicized, and was going to put the path of totality across the entire U.S. for the first time since its founding. There’s even a book about it, American Eclipse. But there’s also a tragic part of the story. While the news had spread across most of the growing nation, it didn’t make it to Texas, and farm workers there, confronted with the sudden loss of the Sun, took it to mean all kinds of things. A lot of them thought that it was a portent of the return of Jesus, and in at least one case, a father killed his children and then himself in order to avoid the apocalypse.

2. Captain Comet!

Ah, comets. They are an incredibly rare sight in the sky and well worth traveling to see if that’s what you need to do. I remember a long trek into the darkness when I was pretty young to go see Comet Hyakutake, and yes it was worth it. It was a glorious blue-green fuzzball planted in space with a huge tail. Of course, I knew what it was. In the past, not so much.

In the ancient world, yet again, they were seen as bad omens because something in the heavens had gone wrong. The heavens, you see, were supposed to be perfect, but there was suddenly this weird… blot on them. Was it a star that grew fuzzy? Was it coming to eat the Earth? What could be done?

That may all sound silly, but as recently as 1910, people still flipped their shit over the return of one of the more predictable and periodic of “fuzzy stars.” That would be Comet Halley. And, by the way, it’s not pronounced “Hay-lee.” It’s “Hall-lee.”

And why did it happen? Simple. A French astronomer who should have known better wrote that the tail of the comet was full of gases, including hydrogen and cyanogen (a cyanide compound), and that if the Earth passed through the tail, we would either be gassed to death or blown up. Unfortunately, another French astronomer of the time actually played “got your back” with him, and that was all it took.

It was pseudoscience bullshit at its finest, propagated by the unquestioning and uninformed (when it comes to science) media, and it created a panic even though it was completely wrong.

The worst part about Halley’s 1910 appearance? It bore out Mark Twain’s statement, probably paraphrased here: “I came into the world with it, I will go out with it.” And he did. Goddammit.

3. Meteoric rise is an oxymoron

And it definitely is, because a meteor only becomes one because it’s falling. And while we’re here, let’s look at three often confused words: Meteor, meteoroid, and meteorite.

The order is this: Before it gets here, while it’s still out in space, it’s a meteoroid. Once it hits our atmosphere and starts to glow and burn up, it has become a meteor. Only the bits that actually survive to slam into the planet get to be called meteorites. –oid, –or, –ite. I suppose you could think of it as being in the vOID, coming fOR you, and then crash, goodnITE.

So the things that mostly cause panic are meteors, and quite recently, a meteor blowing up over Russia convinced people that they were under attack. It was a fireball that crashed into the atmosphere on February 15, 2013, and it actually did cause damage and injuries on the ground.

The numbers on the Chelyabinsk meteor are truly staggering, especially when you consider that they involved no high explosives, just friction and pure physics. (Hello again, Sir Isaac!) The thing was about 66 feet in diameter, which is the length of a cricket pitch, or about four feet longer than a bowling lane. It compares to a lot of things, and you can find some fun examples here.

But there was nothing fun about this asteroid. It came screaming through our atmosphere at about 41,000 miles an hour at a steep angle. The heat from the friction of our atmosphere quickly turned it into a fireball of the superbolide variety, which is one that is brighter than the sun. It exploded about 18 miles up. That explosion created a fireball of hot gas and dust a little over four miles in diameter. The kinetic energy of the event was about 30 times the force of the atom bomb dropped on Hiroshima.
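You can sanity-check that Hiroshima comparison in one cell, since kinetic energy is just one-half the mass times the velocity squared. Using rough, commonly cited figures that I’m assuming here for illustration (a mass around 12,000 metric tons, or 1.2E7 kg; an entry speed of about 1.83E4 meters per second, which is that 41,000 mph; and Hiroshima’s yield at roughly 6.3E13 joules):

=0.5 * 1.2E7 * 1.83E4^2
=0.5 * 1.2E7 * 1.83E4^2 / 6.3E13

The first formula gives about 2E15 joules; the second divides that by Hiroshima and lands right around 32, which squares with the “about 30 times” figure.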

Over 7,200 buildings were damaged and about 1,500 people were injured badly enough to need medical attention, mostly due to flying glass and other effects of the shockwave. Unlike other items on this list, these events actually can be dangerous, although this was the first time in recorded history that people were known to have been injured by a meteor. The Tunguska event, in 1908, was considerably bigger and also in Russia, but happened in a remote and sparsely populated area, with no reported human injuries. Local reindeer were not so lucky.

4. Conjunction junction, what’s your function?

A conjunction is defined as any two or more objects in space which appear to be close together or overlapping when viewed from the Earth. Every solar eclipse is a conjunction of the Sun and Moon as seen from the Earth. Oddly enough, a lunar eclipse is not a conjunction from our point of view, because it’s our planet that’s casting the shadow on the Moon.

Conjunctions shouldn’t be all that surprising for a few reasons.

First is that most of the planets pretty much orbit in the same plane, defined by the plane in which the Earth orbits because that makes the most sense from an observational standpoint.

The inclination of Earth’s orbit is zero degrees by definition and the plane we orbit in is called the ecliptic. You can probably figure out where that name came from. Out of the planets, the one with the greatest inclination is Mercury, at 7º. Counting objects in the solar system in general, the dwarf planet Pluto has an inclination of 17.2º — which is just another argument against it being a true planet. None of the planets not yet mentioned have an inclination of more than 4º, which really isn’t a whole lot.

The second reason conjunctions should not be all that surprising is that each planet has to move at one particular velocity for its distance from the Sun in order to maintain its orbit. The farther out you are, the slower you go, which means every planet has a different period. Although this is a function of gravity, an airplane analogy will show you why speed and distance are linked.

As an airplane gains speed, the velocity of air over the wings increases, generating more lift, bringing the airplane higher. In space, there’s no air to deal with, but remember that any object in orbit is essentially falling around the body it orbits, but doing it fast enough to keep missing.

If it slows down too much, it will start to fall; if it speeds up, its orbit will get bigger. This is directly analogous to ballistics, which describes the arc of a flying projectile: the faster it launches, the farther it goes and the bigger the arc it makes. Stretch that arc far enough and it closes into an ellipse, which is to say, an orbit.
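To put numbers on the “farther out means slower” rule: the speed of a circular orbit is the square root of the Sun’s gravitational parameter (about 1.327E20 in meter-and-second units) divided by the orbital distance. As a pair of spreadsheet one-liners, using Earth’s distance from the Sun (about 1.496E11 m) and Neptune’s (about 4.495E12 m):

=SQRT(1.327E20 / 1.496E11)
=SQRT(1.327E20 / 4.495E12)

Those work out to roughly 29,800 m/s for Earth and 5,400 m/s for Neptune, which is why Neptune’s year lasts almost 165 of ours.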

Since every planet is moving at exactly the speed its distance requires, things are bound to sync up occasionally. Sometimes it will only be one or two planets; on occasion, it will be most or all of them. This video is a perfect example. Each of the balls is on a string of a different length, so its natural period is different. Sometimes the pattern becomes quite chaotic, but every so often it syncs up perfectly. Note that all of them did start in sync, so it is mathematically inevitable that they will sync up again at the point where all of the different periods reach a common multiple. Our solar system is no different, since the planets all started as rings of gas and dust swirling around the Sun. There was a sync point somewhen.
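You can even put a number on the sync-up for any pair of planets: the average time between their conjunctions is 1 divided by the difference of the reciprocals of their orbital periods. For Jupiter (about 11.86 years) and Saturn (about 29.46 years):

=1 / (1/11.86 - 1/29.46)

That comes out to just under 20 years, which is exactly how often the famous Jupiter–Saturn “great conjunction” rolls around.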

So conjunctions are a completely normal phenomenon, but that doesn’t mean that people haven’t gone completely stupid over them. The first way is via astrology, which isn’t even worth debunking because it’s such a load. The Sun is 99.8% of the mass of the solar system, so it constantly has more influence, in every possible way, over everything else, hands down. What influence did the planets have upon your personality at birth? Effectively zero. The only relevant factor, really, is that people’s personalities are shaped by their relative age when they started school, which is influenced by the season they were born in, but that’s about it.

As for the modern version of people going completely stupid over conjunctions, it happened in the early 1980s, after the 1974 publication of the book The Jupiter Effect by John Gribbin and Stephen Plagemann. In short, they predicted that a conjunction of the planets on March 10, 1982 would cause a series of earthquakes that would wipe out Los Angeles.

Since you’re reading this in at least the year 2020 and I’m quite safely in Los Angeles, you know how their prediction turned out. This didn’t stop them from backtracking a month later and releasing a follow-up book called The Jupiter Effect Reconsidered (aka We Want More Money from the Gullible) in which they claimed, “Oh… we miscalculated. The date was actually in 1980, and the conjunction (that hadn’t happened yet) caused Mount St. Helens to erupt.”

Still, just like with the whole end of the world 2012 predictions, at least some people bought into it.

5. The original star wars

The last item on our list is possibly a one-off, occurring on April 14, 1561 in Nuremberg, Germany. Whether it actually even happened is debatable since only a single account of it survives in the form of a broadsheet — basically the blog post of its day. If it had been as widespread as the story makes it seem, after all, there should have been reports from all across Europe unless, of course, the point of view from Nuremberg created the exceptional event in the first place.

It was described as an aerial battle that began between 4 and 5 a.m. when “a dreadful apparition occurred on the sun.” I’ll quote the rest of the paragraph in translation in full from the article linked above: “At first there appeared in the middle of the sun two blood-red semicircular arcs, just like the moon in its last quarter. And in the sun, above and below and on both sides, the color was blood, there stood a round ball of partly dull, partly black ferrous color. Likewise there stood on both sides and as a torus about the sun such blood-red ones and other balls in large numbers, about three in a line and four in a square, also some alone.”

The unknown author goes on to describe the objects — spheres, rods, and crosses — as battling with each other for about an hour, swirling back and forth. Eventually, the objects seemed to become fatigued and fell to the Earth, where they “wasted away… with immense smoke.”

Now, what could have caused this phenomenon? The obvious answers are that it was a slow news day or that it was religious propaganda or some other wind-up. But if it were an actual phenomenon and really only remarked on in one village, then it’s quite possible that it was a meteor shower with an apparent radiant, or source, that happened to line up with the Sun.

It was a Monday, with a new Moon. The Sun rose in the east at 5:05 a.m., so the invisible Moon was somewhere around that part of the sky, too. But this also immediately calls the story into question, since the phenomenon seen coming from the Sun happened before sunrise according to the account. But if we consider that to just be human error, what we have is the Pearl Harbor effect. The attackers come in with the rising Sun behind them, making them hard to see or understand.

On top of that, if they’re coming in from that direction, they’re coming in at a very shallow angle. See the notes on the Russian meteor above. This can lead to some super-heated objects, which would glow red as reported, and anything not red hot against the Sun would appear black. If it happened to be a swarm of objects, like a bunch of small rocks and dust or a bigger piece that broke up, all flying in at once, the chaotic motion could certainly make it seem like a battle.

There is a meteor shower that happens around that time of year called the Lyrids, which is very short-lived, although I haven’t yet been able to find out whether its radiant was near the Sun in 1561. But a particularly heavy shower coming in at just the right angle could have an unusual effect in a limited area.

Or… the author just pulled it out of his ass for his own reasons. We can never know.


Photo credit: Staff Sgt. Christopher Ruano, F.E. Warren Air Force Base.

More stupid Excel tricks: a secret power of IF

The hardest part about working with data, especially in large sets, is the people who input it in the first place. The reason they make it so difficult is that they’re inconsistent, not only in their own day-to-day habits, but also from person to person when several people enter info into the same database.

When you’re creating something solely for yourself, then by all means be as inconsistent or idiosyncratic as you want. But if it’s a group project creating information that someone like me is going to have to derive useful information from at some point in the future, inconsistency can make my job infinitely more difficult.

This is the reason why things like style guides were created — and they don’t just exist for the written word. Accounting and data management have their own style guides. So does computer programming, although that field has the advantage, because the program itself won’t let you get it wrong. Excel is the same way, although it won’t always tell you how to make it right.

Little things can cause problems and cost a business money. Sally may prefer to spell out words in addresses, like Avenue or Boulevard, while Steve likes to abbreviate with Ave or Blvd. Sam is also big on abbreviations, but always with periods. Seems innocuous, doesn’t it?

It does until the only way to make sure that a massive mailing doesn’t go to the same household at the same address twice is to compare the addresses to each other. That’s because, to a computer, 1234 Main Street, 1234 Main St, and 1234 Main St. are all completely different addresses. There’s no easy way to fix that because computers don’t have a “kinda sorta look the same” function.
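The closest you can get is brute-force normalization: strip out the punctuation and force the common long forms down to standard abbreviations before comparing. Here’s a minimal sketch, assuming the address sits in cell A2 and only handling two of the many possible variants:

=SUBSTITUTE(SUBSTITUTE(SUBSTITUTE(UPPER(A2),".",""),"STREET","ST"),"BOULEVARD","BLVD")

Run every address through that, and 1234 Main Street, 1234 Main St, and 1234 Main St. all collapse into 1234 MAIN ST, so a plain comparison catches them. It’s still not a true “kinda sorta” match, though; you’d need another SUBSTITUTE for every variation Sally, Steve, and Sam can dream up.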

Garbage in, garbage out

It’s also important that a database be designed properly. For example, names should always be entered as separate units — title/prefix, first name, middle name, last name, suffix. They can be combined later when necessary. A lot of good databases do this, but it’s completely worthless if somebody enters the first and middle names in the first name field or adds the suffix to the last name. You may have heard the expression “garbage in, garbage out,” and this is a prime example of that. All of the right fields were there, but if used improperly, it doesn’t matter.
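And combining them later really is trivial. In newer versions of Excel, for instance, assuming title through suffix sit in columns A through E, one formula glues the pieces together:

=TEXTJOIN(" ",TRUE,A2:E2)

The TRUE argument tells it to skip blanks, so someone with no middle name or suffix doesn’t end up with stray double spaces. Going the other direction, pulling a crammed-together full name apart, is the part with no clean formula, which is the whole point.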

Of course, the proper fields aren’t always included. One example I’ve had to wrestle with recently is a database showing the various insurance policies people have had with the agency. Now, that is useful and necessary information, as well as something that legally needs to be maintained. And it’s all right that a person gets one row of data for each policy that they’ve had. Some people will have one or two rows, others might have a dozen or more.

So what’s the problem? This: There are no data flags to indicate “this is the policy currently in effect.” It’s doubly complicated because this is Medicare-related health insurance, so someone can have up to two active policies at a time: one covering prescription medications and the other a Medicare supplement. Or a policy may have expired after someone decided to drop an MAPD (Medicare Advantage prescription drug) plan and go back to “original” Medicare, but the only way to know that is to look for an ending or termination date — if it was ever entered.

The secret power of “IF”

This is where one of my “stupid Excel tricks” came into it. You may or may not be familiar with some of the numeric functions dealing with columns or rows of numbers, but they basically operate on a whole range. They include functions like SUM, MAX, MIN, and AVERAGE. The usual usage is to apply them to a defined range or series of cells, with no conditions attached, so you get things like:

=SUM([Range])
=MAX([Range])
=MIN([Cell1],[Cell2],[Cell3],...[Cellx])

Here’s the fun trick, though. If you add one or more “IF” statements within any of these functions, you can perform the operation on a sub-range of data defined by certain criteria. In the example I’m giving, it would look at all of the insurance effective dates for one person and determine the most recent one, which is usually a good indicator of which policy is in effect.

Generally, each item you’re evaluating is in the form of [DataRange]=[CellValue], or in actual Excel terminology, it might look like “$A$1:$A$470=A12” for the version entered in row 12. After the criteria ranges, you enter the range that you want to perform the operation on and close out the parentheses. One catch: in older versions of Excel, this is an array formula, so instead of just hitting Enter, you confirm it with Ctrl+Shift+Enter (Excel wraps it in curly braces to show it took); current versions handle arrays natively, so plain Enter will do.

So let’s say that we have last name in column B, first name in column D, and the dates we want to look at to find the latest are in column N. Our formula would look like this, assuming that the first row has the field headers and the data starts in row two:

=MAX(IF($B$1:$B$525=B2,IF($D$1:$D$525=D2,$N$1:$N$525)))

If you’ve entered it right, the formula should be displaying the right number. In effect, you’ll have created a column down the far right side in which the value opposite any particular person’s name equals the maximum date value, meaning the latest. Then you can do an advanced filter (oh, google it!) to pull out just the unique name data and date, then use that to do an INDEX and MATCH to create a dataset of just the most recent plan and effective date. (I covered those two functions in a previous post.)
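One more footnote, in case you’re on a newer version of Excel: recent releases added MAXIFS, which bakes this whole trick into a single non-array function. The first argument is the range to take the maximum of, followed by pairs of criteria ranges and criteria, so the formula above becomes:

=MAXIFS($N$2:$N$525,$B$2:$B$525,B2,$D$2:$D$525,D2)

Same result, no Ctrl+Shift+Enter required. The IF trick is still worth knowing, though, since the same nesting works inside functions that never got an *IFS version.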

Or… the original database administrator could have just put those current-plan flags in the data in the first place, set them to automatically update whenever a newer plan of the same type was added, and voilà! Every step since I wrote “This is where one of my stupid Excel tricks came into it” 396 words ago would have been unnecessary. Time and money saved, and problem solved, because there was never a problem in the first place.

The art of improv in Excel

On the other hand… solving these ridiculous problems of making large but inconsistent datasets consistent with as little need as possible to look at every individual record just lets me show off my ninja skills with Excel.

It’s really no different than improv. Badly entered data keeps throwing surprises at me, and I have to keep coming up with new and improved ways to ferret out and fix that bad data. In improv, this is a good thing, and one of our mottos is, “Get yourself in trouble,” because that creates comedy gold as things in the scene either get irredeemably worse or are suddenly resolved.

Damn, I miss doing improv, and I long for the days when we can finally return to the stage. But I do digress…

Back to the point: In real life, there’s no such easy resolution. It’s a pain in the ass to have to fix the curveballs tossed at us by other people’s laziness and lack of diligence — unless we approach it like a game and an interesting challenge. Then, real life becomes improv again in the best sense.

And I’ll find it forever amusing that the same rules can apply to both a spontaneous, unplanned, free-wheeling art form and an unyielding, rigid, unforgiving computer program. They both have their rules. Only the latter won’t allow them to be bent. Okay, some improv games have rules that are not supposed to be bent. But half the fun is in bending those rules, intentionally or not.

With Excel and data-bashing, all of the fun is in following Excel’s rules, but getting them to do things they were never intended to.

Image source: Author, sample output from a completely randomized database generator in Excel used to create completely artificial test data for practicing functions and formulae without compromising anyone’s privacy. Maybe I’ll write about this one some day, if there’s interest.