Why astrology is bunk

I know way too many otherwise intelligent adults who believe in astrology, and it really grinds my gears, especially right now, because I’m seeing a lot of “Mercury is going retrograde — SQUEEEE” posts, and they are annoying and wrong.

The effect that Mercury in retrograde will have on us: Zero.

Fact

Mercury doesn't "go retrograde." We catch up with and then pass it, so it only looks like it's moving backwards. It's an illusion, entirely a function of how the planets orbit the Sun and how things look from here. If Mars had (semi)intelligent life, its astronomers would note periods when Earth appeared to go retrograde, for the exact same reason.
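Since retrograde is pure geometry, you can see it in a few lines of Python with a toy model of two circular, coplanar orbits. (Real orbits are elliptical and inclined, so treat this as an illustration, not an ephemeris.)

```python
import math

# Toy model: circular, coplanar orbits, both planets starting aligned.
AU_EARTH, PERIOD_EARTH = 1.0, 365.25        # AU, days
AU_MERCURY, PERIOD_MERCURY = 0.387, 87.97

def heliocentric(r, period, t):
    """Position of a planet on a circular orbit at day t."""
    theta = 2 * math.pi * t / period
    return r * math.cos(theta), r * math.sin(theta)

def apparent_longitude(t):
    """Geocentric longitude of Mercury at day t, as seen from Earth."""
    xe, ye = heliocentric(AU_EARTH, PERIOD_EARTH, t)
    xm, ym = heliocentric(AU_MERCURY, PERIOD_MERCURY, t)
    return math.atan2(ym - ye, xm - xe)

def daily_motion(t):
    """Change in apparent longitude from day t to t+1, wrapped to (-pi, pi]."""
    d = apparent_longitude(t + 1) - apparent_longitude(t)
    return (d + math.pi) % (2 * math.pi) - math.pi

# Days on which Mercury appears to move backwards against the sky:
retrograde_days = [t for t in range(365) if daily_motion(t) < 0]
print(f"Apparent retrograde on {len(retrograde_days)} of 365 days")
```

Run it and you'll find Mercury "in retrograde" for several stretches a year, purely because we lap it on the inside track.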

Science

What force, exactly, would affect us? Gravity is out, because the gravitational effect of anything else in our solar system or universe is dwarfed by the Earth's. When it comes to astrology at birth, your OB/GYN has a stronger gravitational effect on you than Mars does.
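You can check this with Newton's law, F = Gm₁m₂/r². The sketch below compares gravitational accelerations; the 70 kg obstetrician standing half a metre away is my assumption, and the other figures are rough textbook values.

```python
# Back-of-the-envelope comparison of gravitational accelerations.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accel(mass_kg, distance_m):
    """Gravitational acceleration toward a body of given mass at given distance."""
    return G * mass_kg / distance_m ** 2

doctor = accel(70, 0.5)              # 70 kg obstetrician, half a metre away
mars   = accel(6.42e23, 5.6e10)      # Mars at its closest approach to Earth
sun    = accel(1.989e30, 1.496e11)   # the Sun at 1 AU

print(f"doctor: {doctor:.2e} m/s^2")
print(f"mars:   {mars:.2e} m/s^2")
print(f"sun:    {sun:.2e} m/s^2")
```

The doctor edges out Mars even at Mars's closest approach, while the Sun beats both by several orders of magnitude, which brings us to the next point.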

On top of that, the Sun holds 99.9% of the mass of our solar system, and since gravitational pull scales with mass, the Sun has by far the greatest gravitational influence on all of the planets. We only get a slight exception because of the size of our Moon and how close it is, but that's not a part of astrology, is it? (Not really. They do Moon signs, but it's not in the day-to-day.)

Some other force? We haven’t found one yet.

History

If astrology were correct, then one of two things would be true. A) It would have predicted the existence of Uranus and Neptune, and possibly Pluto, long before they were discovered, since astrology goes back to ancient times while those discoveries happened in the modern era. Or B) it would not have allowed for the addition of those three planets (and then the removal of Pluto) once discovered, since all of the rules would have been set down. And it certainly would have accounted for Ophiuchus, the 13th constellation the Sun actually passes through, which the ancients knew about but astrology never incorporated.

So…stop believing in astrology, because it’s bunk. Mercury has no effect on us whatsoever, other than when astronomers look out with telescopes and watch it transit the Sun, and use its movements to learn more about real things, like gravity.

Experiment

James Randi, fraud debunker extraordinaire, does a classroom exercise that demolishes the accuracy of those newspaper horoscopes, and here it is — apologies for the low quality video.

Yep. Those daily horoscopes you read are general enough to be true for anyone, and confirmation bias means that you'll latch onto the parts that fit you and ignore the parts that don't, although, again, they're designed to fit anyone. And no one is going to remember the generic advice or predictions sprinkled in; if anyone does, confirmation bias will kick in again, and they'll only recall the ones they think came true.

“You are an intuitive person who likes to figure things out on your own, but doesn’t mind asking for help when necessary. This is a good week to start something new, but be careful on Wednesday. You also have a coworker who is plotting to sabotage you, but another who will come to your aid. Someone with an S in their name will become suddenly important, and they may be an air sign. When you’re not working on career, focus on home life, although right now your Jupiter is indicating that you need to do more organizing than cleaning. There’s some conflict with Mars, which says that you may have to deal with an issue you’ve been having with a neighbor. Saturn in your third house indicates stability, so a good time to keep on binge watching your favorite show, but Uranus retrograde indicates that you’ll have to take extra effort to protect yourself from spoilers.”

So… how much of that fit you? Or how much do you think will? Honestly, it is 100% pure, unadulterated bullshit that I just made up, without referencing any kind of astrological chart at all, and it could apply to any sign because it mentions none.

Conclusion

If you’re an adult, you really shouldn’t buy into this whole astrology thing. The only way any of the planets would have any effect at all on us is if one of them suddenly slammed into the Earth. That has probably happened only once, in the giant impact thought to have created the Moon. So, ultimately, probably not a bad thing… except for anything living here at the time.

5 things space exploration brought back down to Earth

Recently, I wrote about how a thing as terrible as World War I still gave us some actual benefits, like improvements in plastic surgery, along with influencing art in the 20th century. Now, I’d like to cover something much more positive: five of the tangible, down-to-earth benefits that NASA’s space programs, including the Apollo program to the Moon, have given us.

I’m doing so because I happened across another one of those ignorant comments on the internet along the lines of, “What did going to the Moon ever really get us except a couple of bags of rocks?” That’s kind of like asking, “What did Columbus sailing to America ever really get us?” The answer to that should be obvious, although NASA did it with a lot fewer deaths and exactly zero genocide.

All of those Apollo-era deaths came with the first crewed attempt, Apollo 1, which was destroyed by a cabin fire during a pad test on January 27, 1967, a month before its scheduled launch date, killing all three astronauts aboard. As a consequence, the next few missions were uncrewed: the designations Apollo 2 and 3 were never used, and Apollo 4 through 6 tested the hardware. Apollo 7, the first crewed flight, proved out the Command and Service Modules in Earth orbit, including rendezvous maneuvers, and Apollo 8 was the first to achieve lunar orbit, circling our satellite ten times before returning to Earth. Apollo 9 tested the crucial Lunar Module, responsible for getting the first humans onto and off of the Moon, and Apollo 10 was a “dress rehearsal,” which went through all of the steps except the actual landing.

Apollo 11, of course, was the famous “one small step” mission, and after that we flew only six more missions to the Moon, all meant to do the same as 11. The only other one most people remember, Apollo 13, is famous for failing to make it there.

I think the most remarkable part is that we managed to land on the Moon only two and a half years after that disastrous first effort, and then carried out five more successful landings in the three and a half years after that. What’s probably less well known is that three more missions were cancelled around the time of Apollo 13 and 14, but they kept the higher numbers 18 through 20 because their original launch dates were not until about two years later.

Yes, why they didn’t just renumber 14 through 17 so that the program ended neatly at 20 is a mystery.

Anyway, the point is that getting to the Moon involved a lot of really intelligent people solving a lot of tricky problems in a very short time, and a ton of beneficial tech came out of it. Some of it fed into or came from Apollo directly, while other tech was created or refined in successive programs, like Skylab and the Space Shuttle.

Here are my five favorites of the more than 6,300 technologies NASA has advanced on our journeys off of our home planet.

CAT scanner: Not actually an invention of NASA’s per se — that credit goes to British engineer Godfrey Hounsfield and South African-born physicist Allan Cormack. However, the device did rely on digital imaging technology that JPL had developed for NASA in order to enhance images of the Moon. Since neither CAT scanners nor MRIs use visible light to capture images, the data they collect needs to be processed somehow, and this is where digital imaging comes in.

A CAT scanner basically uses a revolving X-ray tube to repeatedly circle the patient and build a profile of data taken at various depths and angles, and this is what the computer puts together. The MRI is far safer (as long as you don’t get metal too close to it).

This is because, instead of X-rays, an MRI machine uses a magnetic field to make the protons in the water molecules in your body align, then pulses a radio frequency through, which knocks the protons out of alignment. When the radio pulse is turned off, the protons realign. The detectors sense how long protons in various places take to do this, which reveals what kind of tissue they’re in. Once again, that old NASA technology takes all of this data and turns it into images that can be understood by looking at them. Pretty nifty, huh?

Invisible braces: You may remember this iconic moment from Star Trek IV: The One with the Whales, in which Scotty shares the secret of “transparent aluminum” with humans of 1986.

However, NASA actually developed transparent polycrystalline alumina long before that film came out and, although TPA is not a metal, but a ceramic, it contributed to advances in creating nearly invisible braces. (Note that modern invisible braces, like Invisalign, are not made of ceramic.)

But the important point to note is that NASA managed to take a normally opaque substance and allow it to transmit light while still maintaining its properties. And why did NASA need transparent ceramic? Easy. That stuff is really heat-resistant, and if you have sensors that need to see light while you’re dumping a spacecraft back into the atmosphere, well, there you go. Un-melting windows and antennae, and so on. This was also a spin-off of heat-seeking missile technology.

Joystick: You can be forgiven for thinking that computer joysticks were invented in the early 1980s by Atari or (if you really know your gaming history) by Atari in the early 1970s. (Even Pong wasn’t the first video game; Tennis for Two was being played on an oscilloscope back in 1958.) But the humble joystick itself goes back as far as aviation does, since that’s been the term for the controller on airplanes since before World War I. Why is it called a “joystick?” We really don’t know, despite attempts at creating folk etymology after the fact.

However, those early aircraft joysticks were strictly mechanical — they were connected physically to the flaps and rudders that they controlled. The first big innovation came in 1926, when joysticks went electric, in a version patented by C. B. Mirick at the U.S. Naval Research Laboratory. Its purpose was also controlling airplanes.

So this is yet another instance of something that NASA didn’t invent, but boy howdy did they improve upon it — an absolute necessity when you think about it. For NASA, joysticks were used to land craft on the Moon and dock them with each other in orbit, so precision was essential, especially when trying to touch down on a rocky satellite after descending, with no atmosphere to slow you, from orbital speed, which is in the vicinity of 3,600 mph (about 5,900 km/h) a hundred or so kilometers above the Moon. They aren’t much to look at by modern design standards, but one of them sold at auction a few years back for over half a million dollars.

It gets even trickier when you need to dock two craft moving at similar speed, and in the modern day, we’re doing it in Earth orbit. The International Space Station is zipping along at a brisk 17,150 mph, or 27,600 km/h. That’s fast.
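That 17,150 mph figure is easy to sanity-check from the physics: circular orbital speed is v = √(GM/r). A quick sketch, assuming a typical ISS altitude of about 408 km:

```python
import math

# Circular orbital speed: v = sqrt(GM / r), with standard values for Earth.
GM_EARTH = 3.986e14        # gravitational parameter of Earth, m^3/s^2
R_EARTH = 6.371e6          # mean radius of Earth, m
altitude = 408e3           # assumed ISS altitude, m

v = math.sqrt(GM_EARTH / (R_EARTH + altitude))   # orbital speed, m/s
print(f"{v * 2.23694:,.0f} mph, {v * 3.6:,.0f} km/h")
```

That lands right on the numbers quoted above, and it's why docking is done as a slow, creeping approach between two craft already matched in speed.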

The early NASA innovations involved adding rotational control on top of the usual X and Y axes, and later on they went digital and all kinds of crazy in refining the devices to have lots of buttons and be more like the controllers we know and love today. So next time you’re shredding your favorite PC or Xbox game with your $160 Razer Wolverine Ultimate Chroma controller, thank the rocket scientists at NASA. Sure, it doesn’t have a joystick in the traditional sense, but this is the future that space built, so we don’t need one!

Smoke detector: This is another device that NASA didn’t invent, but which they certainly refined and improved. While their predecessors, automatic fire alarms, date back to the 19th century, the first models relied on heat detection only. The problem with this is that you don’t get heat until the fire is already burning, and the main cause of death in house fires isn’t the flames. It’s smoke inhalation. The version patented by George Andrew Darby in England in the 1890s did account for some smoke, but it wasn’t until the 1930s that the concept of using ionization to detect smoke came along. Still, these devices were incredibly expensive, so they were only really available to corporations and governments. But isn’t that how all technological progress goes?

It wasn’t until NASA teamed with Honeywell (a frequent partner) in the 1970s that the size and cost of these devices came down, along with making them battery-operated. More recent experiments on the ISS have helped scientists refine the sensitivity of smoke detectors, so that one doesn’t go off when your teenage boy goes crazy with the AXE body spray, or when there’s a little fat splashing back onto the metal roaster from the meat you’re cooking in the oven. Both are annoying, but at least the latter has a positive outcome.

Water filter: It turns out that water is common in space, with comets being lousy with the stuff in the form of ice, water-ice confirmed on the Moon, subsurface liquid water on Mars, and countless other places. But we don’t have easy access to any of it, so until we establish water mining operations off-Earth, we need to bring it with us. Here’s the trick, though: water is heavy. A liter weighs a kilogram, and a gallon weighs a little over eight pounds. There’s no single agreed-upon recommendation for how much water a person should drink in a day, but if we allow two liters per day per person, then with a seven-person crew on the ISS, that’s fourteen kilos, or about 31 pounds, of extra weight per day. At current SpaceX launch rates, that can range from $23,000 to $38,000 per daily supply of water, and given a realistic launch schedule of every six weeks, that works out to around $1 to $1.5 million per launch just for the water. That six-week supply also eats up 588 kilos of payload.
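The arithmetic is easy to reproduce. Here's a sketch using the assumptions above (2 L per person per day, 7 crew, six-week resupply); the per-kilogram launch costs are simply the range implied by the dollar figures quoted, not official SpaceX pricing:

```python
# Water resupply bookkeeping for an ISS-like station.
crew, litres_per_day = 7, 2.0
kg_per_day = crew * litres_per_day      # 1 L of water ~= 1 kg
resupply_days = 42                      # a launch every six weeks
payload_kg = kg_per_day * resupply_days

# Rough $/kg range implied by the article's own figures:
for cost_per_kg in (1650, 2700):
    print(f"${cost_per_kg}/kg -> ${kg_per_day * cost_per_kg:,.0f} per day, "
          f"${payload_kg * cost_per_kg:,.0f} per resupply")
```

That prints the 588 kg payload figure and a per-resupply water bill in the $1 million to $1.6 million range, consistent with the numbers above.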

And remember: This is just for a station that’s in Earth orbit. For longer missions, the cost of getting water to the crew is going to get ridiculously expensive fast — and remember, too, that SpaceX costs are relatively recent. In 1981, the Space Shuttle’s cost per kilogram was $85,216, although its cargo capacity was slightly more than a Falcon 9’s.

So what’s the solution? Originally, it was just making sure all of the water was purified, leading to the Microbial Check Valve, which eventually filtered out (pun intended) to municipal water systems and dental offices. But to really solve the water problem, NASA is moving to recycling everything. And why not? Our bodies tend to excrete a lot of the water we drink when we’re done with it. Although it’s a myth that urine is sterile, it is possible to purify it and reclaim the water in it, and NASA has done just that. However, they really shouldn’t use the method shown in the satirical WWII film Catch-22.

So it’s absolutely not true that the space program has given us nothing, and this list of five items barely scratches the surface. Once what we learn up there comes back down to Earth, it can improve all of our lives, from people living in the poorest remote villages on the planet to those living in splendor in the richest cities.

If you don’t believe that, here’s a question. How many articles of clothing that are NASA spin-offs are you wearing now, or do you wear on a regular basis? You’d be surprised.

Power up

You could say that May 16 can be an electrifying day in history. Or at least a very energetic one. On this day in 1888, Nikola Tesla described what equipment would be needed to transmit alternating current over long distances. Remember, at this time, he was engaged in the “War of the Currents” with that douche, Edison, who was a backer of DC. The big problem with DC (the kind of energy you get out of batteries) is that, at the voltages Edison used, you need a generating station every mile or so. With Tesla’s version, you can send power a long way down the wires before it needs any boost.
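The reason AC won at distance is resistive loss: a line dissipates I²R, and for a fixed power delivered, current falls as voltage rises, which is exactly what AC transformers make easy. A small sketch with made-up but illustrative numbers:

```python
# Line loss for a fixed delivered power at different transmission voltages.
# Loss = I^2 * R, and I = P / V, so raising V slashes the loss.
def line_loss(power_w, volts, line_resistance_ohms):
    current = power_w / volts
    return current ** 2 * line_resistance_ohms

P = 100_000      # 100 kW to deliver
R = 1.0          # resistance of a long run of wire (illustrative)

for v in (500, 5_000, 50_000):
    loss = line_loss(P, v, R)
    print(f"{v:>6} V: {loss / P:.3%} of the delivered power lost in the line")
```

Each tenfold step up in voltage cuts the loss a hundredfold, which is why grid transmission runs at tens or hundreds of kilovolts and gets stepped down near your house.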

Of course, it might help to understand in the first place what electric charge is. Here’s Nick Lucid from Science Asylum to explain:

But if you think that electric current flows through a wire like water flows through a pipe, you’re wrong, and there’s a really big and interesting difference between the two, as well as between AC and DC. DC, meaning “direct current,” only “flows” in one direction, from higher to lower energy states. This is why it drains your batteries, actually — all of the energy potential contained therein sails along its merry way, powers your device, and then dumps off in the lower-energy part of the battery, where it isn’t inclined to move again.

A simplification, to be sure, but the point is that any direct current, by definition, loses energy as it moves. Although here’s the funny thing about it, which Nick explains in this next video: neither current moves through that wire like it would in a pipe.

Although the energy in direct current moves from point A to point B at the speed of light, the actual electrons wrapped up in the electromagnetic field do not, and their progress is actually rather slow. If you think about it for a minute, this makes sense. Since your battery is drained when all of the negatively charged electrons move down to their low energy state, if they all moved at the speed of light, your battery would drain in nanoseconds. Rather, it’s the field that moves, while the electrons take their own sweet time moving down the crowded center of the wire — although move they do. It just takes them a lot of time because they’re bouncing around chaotically.
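To put a number on just how slowly those electrons crawl, here's the standard drift-velocity estimate, v = I/(nAq). The 1 A current and 1 mm² copper wire are assumptions picked for illustration (roughly lamp-cord territory):

```python
# Electron drift speed in a copper wire: v = I / (n * A * q).
n = 8.5e28       # free electrons per cubic metre in copper
A = 1.0e-6       # wire cross-section: 1 mm^2, in m^2
q = 1.602e-19    # charge of one electron, coulombs
I = 1.0          # current, amperes (assumed)

v_drift = I / (n * A * q)
print(f"drift speed: {v_drift * 1000:.3f} mm/s")
```

That's a small fraction of a millimetre per second: the field races down the wire at nearly light speed, but any individual electron would take hours to cross your living room.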

As for alternating current, since its thing is to let the field oscillate back and forth from source to destination, it doesn’t lose energy, but it also keeps its electrons on edge, literally, and they tend to sneak down the inside edges of the wire. However, since they’re just as likely to be on any edge around those 360 degrees, they have an equally slow trip. Even more so, what’s really guiding them isn’t so much their own momentum forward as it is the combination of electricity and magnetism. In AC, it’s a dance between the electric field in the wire and the magnetic field outside of it, which is exactly why the current seems to wind up in a standing wave between points A and B without losing energy.

I think you’re ready for part three:

By the way, as mentioned in that last video, Ben Franklin blew it when he defined positive and negative, but science blew it in not changing the nomenclature, so that the particle that carries electrical charge, the electron, is “negative,” while we think of energy as flowing from the positive terminal of batteries.

It doesn’t. It flows backwards into the “positive” terminals, but that’s never going to get fixed, is it?

But all of that was a long-winded intro to what the Germans did on this same day three years later, in 1891, at the International Electrotechnical Exhibition, where they proved Edison dead wrong about which form of energy transmission was more efficient and safer. Not only did they use magnetism to create and sustain the energy flow, they used three-phase electric power, an idea Tesla championed. If you’ve ever seen the chunky multi-prong outlets that feed industrial machinery, you know all about it. (The three holes on an ordinary household outlet, despite their unintended smiley face arrangement, are just live, neutral, and ground on a single phase.)

A dozen years later, Edison’s film company would film the electrocution of an elephant in order to “prove” the danger of AC, but he was fighting a losing battle by that point. Plus, he was a colossal douche.

Obviously, the power of AC gave us nationwide electricity. Our earliest telegraph systems, in effect the great-grandparent of the internet, actually predate the AC grid and ran on battery-supplied DC. Later on, things went hybrid, with the external power for landlines coming from the AC mains, but getting stepped down and converted to DC to operate the internal electronics.

In fact, that’s the only reason that Edison’s version wound up sticking around: the rise of electronics, transistors, microchips, and so on. Powering cities and neighborhoods and so on requires the oomph of AC, but dealing with microcircuits requires the “directionality” of DC.

It does make sense though, if we go back to the water-through-a-pipe analogy, wrong as it is. Computer logic runs on transistors, which are essentially one-way switches — input, input, compare, output. This is where computers and electricity really link up nicely. Computers work in binary: 1 or 0, on or off. So does DC: positive voltage or no voltage. Alternating current would just give you a fog of constant overlapping 1s and 0s, while direct current can cleanly be either-or. And that’s why computers convert one to the other before the power gets to any of the logic circuits.
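As a sketch of that "input, input, compare, output" idea, here's how every logic operation a computer needs can be stacked up from one transistor-friendly gate, NAND, with 1 and 0 standing in for voltage and no voltage:

```python
# NAND is easy to build from a pair of transistors, and every other
# gate can be built from NAND alone.
def NAND(a, b):
    return 0 if (a and b) else 1

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

# A half adder: the first rung of binary arithmetic.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Chain enough of these together and you have an arithmetic unit, which is why clean DC levels matter so much at this scale.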

There’s one other really interesting power-related connection to today, and it’s this: on May 16, 1960, Theodore Maiman fired up the first optical LASER in Malibu, California, which he is credited with creating. Now… what does this have to do with everything before it? Well… everything.

LASER, which should only properly ever be spelled like that, is an acronym for the expression Light Amplification by Stimulated Emission of Radiation.

But that’s it. It was basically applying the fundamentals of electromagnetism (see above) to electrons and photons — the optical version of electrical amplification, really. But here’s the interesting thing. Once science got a handle on how LASERs worked, they realized that they could use them to send the same information that they could via electricity.

So… all those telegraphs and telephone calls that used to get shot down copper wires over great distances in analog form? Yeah, well… here was a medium that could do it through much cheaper strands called fiber optics, transmit the same data much more quickly, and do it with little energy loss over the same distances.

And, ironically, it really involved the same dance of particles that Tesla realized in figuring out how AC worked way back in the day, nearly a century before that first LASER.

All of these innovations popped up on the same day, May 16, in 1888, 1891, and 1960. I think we’re a bit overdue for the next big breakthrough to happen on this day. See you in 2020?

What is your favorite science innovation involving energy? Tell us in the comments!

Forces of nature

If you want to truly be amazed by the wonders of the universe, the quickest way to do so is to learn about the science behind it.

And pardon the split infinitive in that paragraph, but it’s really not wrong in English, since it became a “rule” only after a very pedantic 19th century grammarian, John Comly, declared that it was wrong to do so — although neither he nor his contemporaries ever called it that. Unfortunately, he based this on the grammar and structure of Latin, to which that of English bears little resemblance.

That may seem like a digression, but it brings us back to one of the most famous modern split infinitives that still resonates throughout pop culture today: “To boldly go where no one has gone before,” and this brings us gracefully back to science and space.

That’s where we find the answer to the question “Where did we come from?” But what would you say exactly is the ultimate force that wound up directly creating each one of us?

One quick and easy answer is the Big Bang. This is the idea, derived from the observation that everything in the universe seems to be moving away from everything else, that at one time everything must have been in the same place. That is, what became the entire universe was concentrated into a single point that then somehow expanded outward into, well, everything.

But the Big Bang itself did not instantly create stars and planets and galaxies. It was way too energetic for that. So energetic, in fact, that matter couldn’t even form in the immediate aftermath. Instead, everything that existed was an incredibly hot quantum foam of unbound quarks. Don’t let the words daunt you. The simple version is that elements are made up of atoms, and an atom is the smallest unit of any particular element — an atom of hydrogen, helium, carbon, iron, etc. Once you move down to the subatomic particles that make up the atom, you lose the properties that make the element unique, most of which have to do with its atomic number and the arrangement of electrons wrapped around it.

Those atoms in turn are made up of electrons that are sort of smeared out in a statistical cloud around a nucleus made up of at least one proton (hydrogen), and then working their way up through larger collections of protons (positively charged), an often but not always equal number of neutrons (no charge), and a number of electrons (negatively charged) that may or may not equal the number of protons.

Note that despite what you might have learned in school, an atom does not resemble a mini solar system in any particular way at all, with the electron “planets” neatly orbiting the “star” that is the nucleus. Instead, the electrons live in what are called orbitals and shells, but they have a lot more to do with energy levels and probable locations than they do with literal placement of discrete dots of energy.

Things get weird on this level, but they get weirder if you go one step down and look inside of the protons and neutrons. These particles themselves are made up of smaller particles that were named quarks by Nobel Prize winner Murray Gell-Mann as a direct homage to James Joyce. The word comes from a line in Joyce’s book Finnegans Wake, which itself is about as weird and wonderful as the world of subatomic science: “Three quarks for Muster Mark…”

The only difference between a proton and a neutron is the configuration of quarks inside. I won’t get into it here except to say that if we call the quarks arbitrarily U and D, a proton has two U’s and one D, while a neutron has two D’s and one U.
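The bookkeeping behind that U-and-D configuration is simple enough to check: an up quark carries +2/3 of the elementary charge and a down quark carries −1/3, so the combinations above give exactly the charges we observe.

```python
from fractions import Fraction

# Quark charges, in units of the elementary charge.
U = Fraction(2, 3)    # up quark
D = Fraction(-1, 3)   # down quark

proton  = 2 * U + D   # uud
neutron = U + 2 * D   # udd

print(f"proton charge:  {proton}")    # +1
print(f"neutron charge: {neutron}")   # 0
```

Two ups and a down sum to +1, one up and two downs sum to exactly zero, which is why the proton is charged and the neutron isn't.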

And for the first tiny fraction of a second after the Big Bang, the universe was an incredibly hot soup of all these U’s and D’s flying around, unable to connect to each other because the particles that tie them together, gluons, couldn’t get a grip. The universe was also incredibly dark, because photons couldn’t move through it.

Eventually, as things started to cool down, the quarks and gluons started to come together, creating protons and neutrons, and within minutes some protons and neutrons fused into helium nuclei. (Free neutrons on their own, not so much, since when unbound they tend not to last long.) It took a few hundred thousand more years of cooling before those nuclei could hold onto free electrons, creating stable atoms of hydrogen and helium. This is also when the universe became transparent, because now the photons could move through it freely.

But we still haven’t quite gotten to the force that created all of us just yet. It’s not the attractive force that pulled quarks and gluons together, nor is it the forces that bound electrons and protons. That’s because, given just those forces, the subatomic particles and atoms really wouldn’t have done much else. But once they reached the stage of matter — once there were elements with some appreciable (though tiny) mass to toss around, things changed.

Vast clouds of gas slowly started to fall into an inexorable dance as atoms of hydrogen found themselves pulled together, closer and closer, and tighter and tighter. The bigger the cloud became, the stronger the attraction until, eventually, a big enough cloud of hydrogen would suddenly collapse into itself so rapidly that the hydrogen atoms in the middle would slam together with such force that it would overcome the natural repulsion of the like-charged electron shells and push hard enough to force the nuclei together. And then you’d get… more helium, along with a gigantic release of energy.
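How gigantic is that release of energy? Here's a back-of-the-envelope E = mc² check using standard atomic masses. (The real solar chain involves intermediate steps, positrons, and neutrinos, so treat this as the net bookkeeping only.)

```python
# Mass defect of hydrogen fusion: four hydrogens in, one helium-4 out.
U_TO_KG = 1.66054e-27      # one atomic mass unit, in kg
C = 2.99792458e8           # speed of light, m/s
MEV_PER_J = 1.0 / 1.602e-13

mass_4h  = 4 * 1.007825    # four hydrogen atoms, in u
mass_he4 = 4.002602        # one helium-4 atom, in u
defect = mass_4h - mass_he4

energy_j = defect * U_TO_KG * C ** 2
print(f"mass converted to energy: {defect / mass_4h:.2%}")
print(f"energy per helium atom made: {energy_j * MEV_PER_J:.1f} MeV")
```

Only about 0.7% of the mass disappears per fusion, but multiplied by the ~10⁵⁷ hydrogen atoms in a star, that fraction of a percent is what makes the whole thing shine for billions of years.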

And so, a star is born. A bunch of stars. A ton of stars, everywhere, in great abundance and with great energy. These were the first generation of stars in the universe and, to quote Blade Runner, “The light that burns twice as bright burns half as long.” These early stars were so energetic that they didn’t last long, and they managed to really squish things together. You see, after you turn hydrogen into helium, the same process turns helium into heavier elements, like lithium, carbon, neon, oxygen, and silicon. And then, once a star starts to fuse atoms into iron, a funny thing happens. Suddenly, the process stops producing energy, the star collapses into itself, and then it goes boom, scattering those elements back out into the universe.

This process will happen to stars that don’t burn as brightly, too. It will just take longer. The first stars lasted a few hundred million years. A star like our Sun is probably good for about ten billion, and we’re only halfway along.

But… have you figured out yet which force made these stars create elements and then explode and then create us, because that was the question: “What would you say exactly is the ultimate force that wound up directly creating each one of us?”

It’s the same force that pulled those hydrogen atoms together to create heavier elements and then made stars explode, blasting those elements back out into the universe to create new stars and planets and us. It’s the same reason we have not yet mastered nuclear fusion: we cannot control this force and don’t really know yet what creates it. And it’s the same force that is keeping your butt in your chair this very moment.

It’s called gravity. Once the universe cooled down enough for matter to form — and hence mass — this most basic of laws took over, and anything that did have mass started to attract everything else with mass. That’s just how it works. And once enough mass got pulled together, it came together tightly enough to overcome any other forces in the universe.  Remember: atoms fused because the repulsive force of the negative charge of electrons was nowhere near strong enough to resist gravity, and neither was the nuclear force between protons and neutrons.

Let gravity grow strong enough, in fact, and it can mash matter so hard that the protons and electrons in a star’s core merge into neutrons, leaving an object made of little but neutrons packed shoulder to shoulder: a neutron star. Squash it even harder, and you get a black hole, a very misunderstood (by lay people) object, a supermassive example of which nonetheless sits at the center of most galaxies.

Fun fact, though. If our sun suddenly turned into a black hole (unlikely because it’s not massive enough) the only effect on the Earth would be… nothing for about eight minutes, and then it would get very dark and cold, although we might also be fried to death by a burst of gamma radiation. But the one thing that would not happen is any of the planets suddenly getting sucked into it.
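That "nothing for about eight minutes" is just the light travel time from the Sun, and since changes in gravity also propagate at c, we wouldn't even feel the switch any sooner:

```python
# Travel time for light (and for changes in gravity) from the Sun to Earth.
AU = 1.496e11          # mean Earth-Sun distance, m
C = 2.998e8            # speed of light, m/s

seconds = AU / C
print(f"{seconds / 60:.1f} minutes")
```

For those eight-ish minutes, Earth would keep orbiting the point where the Sun used to be, blissfully unaware.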

Funny thing about black holes. When a star collapses and becomes one, its radius may change drastically, like from Sun-sized to New York-sized, but its gravity at a distance doesn’t change at all.

But I do digress. Or maybe not. Circle back to the point of this story: The universal force that we still understand the least also happens to be the same damn force that created every single atom in every one of our bodies. Whether it has its own particle or vector, or whether it’s just an emergent property of space and time, is still anybody’s guess. But whichever turns out to be true, if you know some science, then the power of gravity is actually quite impressive.

Rewind

If you could go back in time to your younger self — say right out of high school or college — what one bit of advice would you give? I think, in my case, it would be this: “Dude, you only think you’re an introvert, but you’re really not. You just need to learn now what it took me years to understand. No one else is really judging you because they’re too busy worrying about how they come off.”

But that worry about what other people thought turned me into a shy introvert for way too long. At parties, I wouldn’t talk to strangers. I’d hang in the corners and observe, or hope that I already knew one or two people there, and then I’d stick to them like your insurance agent’s calendar magnet sticks to your fridge. Sneak in late, leave early, not really have any fun.

It certainly didn’t help on dates, especially of the first kind. “Hi, (your name). How’s it going?” Talk talk talk, question to me… awkward silence, stare at the menu, or at my plate if we’d already ordered.

Now this is not to imply that I had any problem going straight to close encounters of the third kind way too often, but those only happened when someone else hit on me first. Also, I had a really bad habit of not being able to say “No” when someone did show interest. I guess I should have noticed the contradiction: Can someone really be an introvert and a slut at the same time?

What I also didn’t notice was that the times I was a total extrovert all happened via art. When I wrote or acted, all the inhibitions went away. Why? Because I was plausibly not being myself. The characters I created or the characters I played were other people. They were insulation. They gave me permission to just go out there without excuse. (Okay, the same thing happened during sex, but by that point I don’t think introversion is even possible.)

However… the characters did not cross over into my real life. I was awkward with strangers. I was okay with friends, but only after ample time to get to know them.

And so it went until I wound up in the hospital, almost died, came out the other side alive — and then a funny thing happened. I suddenly started initiating conversations with strangers. And enjoying them. And realized that I could play myself as a character in real life and have a lot of fun doing it. And started to not really care what anyone else thought about me because I was more interested in just connecting with people and having fun.

The most important realization, though, was that I had been lying to myself about what I was for years. The “being an introvert” shtick was just an excuse. What I’d never really admitted was that I was extroverted as hell. The “almost dying” part gave the big nudge, but the “doing improv” part sealed it. Here’s the thing. Our lives, day to day and moment to moment, are performance. Most muggles never realize that. So they get stage fright, don’t know what to do or say or how to react.

But, honestly, every conversation you’ll ever have with someone else is just something you both make up on the spot, which is what improv is. The only difference is that with improv you’re making up the who, what (or want) and where, whereas in real life, you’re playing it live, so those things are already there.

Ooh, what’s that? Real life is easier than performing on stage?

One other thing that yanked me out of my “I’m an introvert” mindset, though, was an indirect result of doing improv. I’ve been working box office for ComedySportz for almost a year now — long story on how and why that happened — but I’m basically the first public face that patrons see, I’ve gotten to know a lot of our regulars, and I honestly enjoy interacting with the public, whether via walk-ups to the ticket counter or phone calls. Young me would have absolutely hated doing this, which is another reason for my intended message to that callow twat.

And so… if you’re reading this and think that you’re an introvert, do me a favor. Find something that drags you out of your comfort zone. Remind yourself that no one else is really judging you because they’re too busy worrying about themselves, then smile and tell way too much to the wait-staff or checker or usher or whomever — and then don’t give a squishy nickel over what they might think about it.

(Note: “squishy nickel” was a fifth level choice on the improv game of “New Choice” in my head just now. Which is how we do…)

5 Things that are older than you think

A lot of our current technology seems surprisingly new. The iPhone is only twelve years old, for example, although the first Blackberry, a more primitive form of smart phone, came out in 1999. The first actual smart phone, IBM’s Simon Personal Communicator, was introduced in 1992 but not available to consumers until 1994. That was also the year that the internet started to really take off with people outside of universities or the government, although public connections to it had been available as early as 1989 (remember CompuServe, anyone?), and the first experimental internet nodes were connected in 1969.

Of course, to go from room-sized computers communicating via acoustic modems along wires to handheld supercomputers sending their signals wirelessly via satellite took some evolution and development of existing technology. Your microwave oven has a lot more computing power than the system that helped us land on the moon, for example. But the roots of many of our modern inventions go back a lot further than you might think. Here are five examples.

Alarm clock

As a concept, alarm clocks go back to the ancient Greeks, frequently involving water clocks. These were designed to wake people up before dawn; in Plato’s case, to make it on time to class, which started at daybreak. Later versions woke monks so they could pray before sunrise.

From the late Middle Ages, church towers served as town alarm clocks, with the bells set to strike at one particular hour per day, and personal alarm clocks first appeared in 15th-century Europe. The first American alarm clock was made by Levi Hutchins in 1787, but he made it only for himself since, like Plato, he got up before dawn. Antoine Redier of France was the first to patent a mechanical alarm clock, in 1847. Production stopped during WWII because metal and machine shops were appropriated for the war effort, and since older clocks kept breaking down in the meantime, alarm clocks became one of the first consumer items returned to mass production just before the war ended. Atlas Obscura has a fascinating history of alarm clocks that’s worth a look.

Fax machine

Although it’s pretty much a dead technology now, the fax machine was the height of office high tech in the ’80s and ’90s; nowadays you’d be hard-pressed to find one that isn’t part of the built-in hardware of a multi-purpose networked printer, and that’s only because it’s such a cheap legacy feature to include. But it might surprise you to know that the prototypical fax machine, originally an “Electric Printing Telegraph,” dates back to 1843. Basically, as soon as humans figured out how to send signals down telegraph wires, they started figuring out how to encode images, and you can bet that the second image ever sent that way was a dirty picture. Or a cat photo. Still, it took until 1964 for Xerox to finally figure out how to use this technology over phone lines and create the Xerox LDX. The scanner/printer combo rented for $800 a month, the equivalent of around $6,500 today, and it could transmit pages at a blazing eight per minute. The second-generation fax machine weighed only 46 lbs and could send a letter-sized document in only six minutes, or ten pages per hour. Whoot, progress! You can actually see one of the Electric Printing Telegraphs in action in the 1948 movie Call Northside 777, in which it plays a pivotal role in sending a photograph cross-country in order to exonerate an accused man.

In case you’re wondering, the title of the film refers to a telephone number from back in the days before what was originally called “all-digit dialing.” Up until then, telephone exchanges (what we now call prefixes) were identified by the first two letters of a word, followed by another digit or two or three. (Once upon a time, in some areas of the US, phone numbers had only five digits.) So NOrthside 777 would resolve to 66-777, with 66 being the exchange. This system started to be phased out in 1958, and a lot of people didn’t like that.
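The resolution is just the letter-to-digit map still printed on phone keypads today (2 = ABC, 3 = DEF, and so on). A small sketch in Python, using the modern keypad layout; the helper name is mine, and note that vintage rotary dials differed slightly (no Q or Z, with PRS rather than PQRS on the 7):

```python
# Build {letter: digit} from the standard keypad groupings.
KEYPAD = {
    letter: digit
    for digit, letters in {
        "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
        "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
    }.items()
    for letter in letters
}

def exchange_to_digits(exchange_name, rest, letters=2):
    """Convert a named exchange plus the remaining digits to a dial string.

    Only the first `letters` characters of the name are significant;
    the rest of the word was just a memory aid.
    """
    prefix = "".join(KEYPAD[c] for c in exchange_name.upper()[:letters])
    return prefix + rest

print(exchange_to_digits("Northside", "777"))      # -> 66777
print(exchange_to_digits("PEnnsylvania", "65000")) # -> 7365000
```

The second call is the famous PEnnsylvania 6-5000, which resolves to 736-5000 and, as it happens, still rings a Manhattan hotel.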

Of course, with the advent of cell phones, prefixes and even area codes have become pretty meaningless, since people tend to keep the number from their home town regardless of where they move, and a “long distance call” is mostly a dead concept now as well, which is probably a good thing.

CGI

When do you suppose the first computer animation appeared on film? You may have heard that the first 2D computer-generated imagery (CGI) used in a movie was in 1973, in the original film Westworld, inspiration for the recent TV series. Using very primitive equipment, the visual effects designers simulated pixelation of actual footage in order to show us the POV of the robotic gunslinger played by Yul Brynner. It turned out to be a revolutionary effort.

The first 3D CGI happened to be in that film’s sequel, Futureworld, in 1976, where the effect was used to create the image of a rotating 3D robot head. However, the first-ever CGI sequence was actually made in… 1961. Called Rendering of a planned highway, it was created by the Swedish Royal Institute of Technology on what was then the fastest computer in the world, the vacuum-tube-driven BESK. It’s an interesting effort for the time, but the results are rather disappointing.

Microwave oven

If you’re a Millennial, then microwave ovens have pretty much always been a standard accessory in your kitchen, but home versions don’t predate your birth by much. Sales began in the late 1960s. By 1972 Litton had introduced microwave ovens as kitchen appliances. They cost the equivalent of about $2,400 today. As demand went up, prices fell. Nowadays, you can get a small, basic microwave for under $50.

But would it surprise you to learn that the first microwave ovens were created just after World War II? In fact, they were the direct result of it, due to a sudden lack of demand for magnetrons, the devices used by the military to generate radar in the microwave range. Not wanting to lose the market, their manufacturers began to look for new uses for the tubes. The idea of using radio waves to cook food went back to 1933, but those devices were never developed.

Around 1946, engineers accidentally discovered that the microwaves coming from these devices could cook food, and voilà! In 1947, the technology was developed, although only for commercial use, since the devices were taller than an average man, weighed 750 lbs, and cost the equivalent of $56,000 today. It took 20 years for the first home model, the Radarange, to be introduced, for the mere sum of $12,000 in today’s dollars.

Music video

Conventional wisdom says that the first music video ever aired went out on August 1, 1981 on MTV, and it was “Video Killed the Radio Star” by The Buggles. As is often the case, conventional wisdom is wrong. It was the first to air on MTV, but the concept of putting visuals to rock music as a marketing tool goes back a lot farther than that.

Artists and labels were making promotional films for their songs from almost the beginning of the 1960s, with the Beatles a prominent example. Before those came the Scopitone, a film jukebox popular from the late 1950s to the mid-1960s that played short films in sync with music, and its predecessor was the Panoram, a similar concept popular in the 1940s that played short programs called Soundies. These programs ran on a continuous loop, though, so you couldn’t choose your song. Soundies were produced until 1946, which brings us to the real predecessor of music videos: Vitaphone shorts, produced by Warner Bros. as sound began to come to film. Some of these featured musical acts and were essentially miniature musicals themselves. They weren’t shot on video, but they introduced the concept all the same. Here, you can watch a particularly fun example from 1935 in three-strip Technicolor that also features cameos by various stars of the era in a very loose story.

Do you know of any things that are actually a lot older than people think? Let us know in the comments!

Photo credit: Jake von Slatt