Power up

You could say that May 16 is an electrifying day in history. Or at least a very energetic one. On this day in 1888, Nikola Tesla described what equipment would be needed to transmit alternating current over long distances. Remember, at this time, he was engaged in the “War of the Currents” with that douche, Edison, who was a backer of DC. The big problem with DC (the kind of energy you get out of batteries) is that you need power stations every mile or so to keep the voltage up. With Tesla’s version, you can send that power a long way down the wires before it needs any bump up in energy.

Of course, it might help to understand in the first place what electric charge is. Here’s Nick Lucid from The Science Asylum to explain:

But if you think that electric current flows through a wire like water flows through a pipe, you’re wrong, and the difference between the two is really interesting, as is the difference between AC and DC. DC, meaning “direct current,” only “flows” in one direction, from higher to lower energy states. This is why it drains your batteries, actually — all of the energy potential contained therein sails along its merry way, powers your device, and then dumps off in the lower-energy part of the battery, where it isn’t inclined to move again.

A simplification, to be sure, but the point is that any direct current, by definition, loses energy as it moves. Although here’s the funny thing about it, which Nick explains in this next video: neither current moves through that wire like it would in a pipe.

Although the energy in direct current moves from point A to point B at nearly the speed of light, the actual electrons wrapped up in the electromagnetic field do not; their progress is remarkably slow. If you think about it for a minute, this makes sense. Since your battery is drained when all of the negatively charged electrons move down to their low energy state, if they all moved at the speed of light, your battery would drain in nanoseconds. Rather, it’s the field that moves, while the electrons take their own sweet time moving down the crowded center of the wire — although move they do. It just takes them a long time because they’re bouncing around chaotically.
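
To put a number on “their own sweet time”: the drift velocity of the electrons is just the current divided by how much charge passes through the wire’s cross-section. Here’s a minimal sketch, with illustrative values for a copper wire carrying one amp (the numbers are assumptions for the example, not from the videos):

```python
import math

# Electron drift velocity in a wire: v = I / (n * A * q)

I = 1.0             # current in amperes (assumed household-scale load)
n = 8.5e28          # free electrons per cubic meter of copper
q = 1.602e-19       # charge of one electron, in coulombs
radius = 0.001      # wire radius in meters (a 2 mm diameter wire)

A = math.pi * radius ** 2    # cross-sectional area, m^2
v = I / (n * A * q)          # drift velocity, m/s

print(f"drift velocity: {v:.2e} m/s")                    # about 2e-5 m/s
print(f"hours to crawl one meter: {1 / v / 3600:.0f}")   # about 12 hours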

As for alternating current, its thing is to let the field oscillate back and forth between source and destination, so it loses far less energy along the way. It also keeps its electrons on edge, literally: they tend to crowd toward the outer skin of the wire, a phenomenon called the skin effect. But since they’re just as likely to be anywhere around those 360 degrees of surface, their trip is every bit as slow. And what’s really guiding them isn’t so much their own forward momentum as the combination of electricity and magnetism. In AC, it’s a dance between the electric field in the wire and the magnetic field around it, which is why the current winds up behaving like a standing wave between points A and B while losing very little energy.
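
That crowding toward the skin has a tidy standard formula: the current density falls off over a depth of roughly the square root of resistivity over (pi times frequency times permeability). A quick sketch for copper at the US grid frequency (the values here are assumptions for illustration):

```python
import math

# Skin depth: how far into a conductor an alternating current
# effectively penetrates. delta = sqrt(rho / (pi * f * mu))

rho = 1.68e-8            # resistivity of copper, ohm-meters
mu = 4 * math.pi * 1e-7  # magnetic permeability (~free space, fine for copper)
f = 60.0                 # AC frequency in hertz (US grid)

delta = math.sqrt(rho / (math.pi * f * mu))
print(f"skin depth at {f:.0f} Hz: {delta * 1000:.1f} mm")  # ~8.5 mm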

I think you’re ready for part three:

By the way, as mentioned in that last video, Ben Franklin blew it when he defined positive and negative, but science blew it in not changing the nomenclature, so that the particle that actually carries charge through a wire, the electron, is “negative,” while we think of current as flowing from the positive terminal of a battery.

It doesn’t. The electrons actually flow the other way, into the “positive” terminal, but that’s never going to get fixed, is it?

But all of that was a long-winded intro to what the Germans did on this same day three years later, in 1891, at the International Electrotechnical Exhibition, where they proved Edison dead wrong about which form of energy transmission was more efficient and safer. Not only did they use magnetism to create and sustain the energy flow, they used Tesla’s polyphase idea in the form of three-phase electric power. (The three prongs on your home outlets, by the way, frequently in an unintended smiley face arrangement, are not the three phases; they’re hot, neutral, and safety ground. Three-phase is what hums along the big high-voltage transmission lines.)

Eleven years later, Edison would film the electrocution of an elephant in order to “prove” the danger of AC, but he was fighting a losing battle by that point. Plus, he was a colossal douche.

Obviously, the power of AC gave us nationwide electricity. Our earliest telegraph systems, in effect the great-grandparent of the internet, actually ran on battery-supplied DC. Later on, things sort of went hybrid, with the external power for landlines coming from the AC grid, but getting stepped down and converted to DC to operate the internal electronics.

In fact, that’s the only reason that Edison’s version wound up sticking around: the rise of electronics, transistors, microchips, and so on. Powering cities and neighborhoods requires the oomph of AC, but dealing with microcircuits requires the “directionality” of DC.

It does make sense though, if we go back to the water-through-a-pipe analogy, wrong as it is. Computer logic runs on transistors, which are essentially one-way logic gates: input, input, compare, output. This is where computers and electricity really link up nicely. Computers work in binary: 1 or 0, on or off. So does electricity: 1 or 0, positive voltage or no voltage. Alternating current is just going to give you a fog of constant overlapping 1s and 0s. Direct current can be either/or. And that’s why computers convert AC to DC before the power gets to any of the logic circuits.
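
Here’s a toy sketch of that “input, input, compare, output” idea (hypothetical Python functions, not any real chip’s interface). Real hardware builds everything out of a universal gate like NAND, and the whole scheme only works because DC gives each input an unambiguous on or off:

```python
# Toy model of transistor-style logic gates: two inputs in, one
# compared output out. NAND is "universal," so the rest derive from it.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

# Truth table over every combination of "voltage" (1) and "no voltage" (0)
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND:", and_(a, b), " OR:", or_(a, b))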

There’s one other really interesting power-related connection to today, and it’s this: on May 16, 1960, Theodore Maiman fired up the first optical LASER, which he is credited with creating, in Malibu, California. Now… what does this have to do with everything before it? Well… everything.

LASER, which should only properly ever be spelled like that, is an acronym for the expression Light Amplification by Stimulated Emission of Radiation.

But that’s it. It was basically applying the fundamentals of electromagnetism (see above) to electrons and photons. The optical version of electrical amplification, really. But here’s the interesting thing about it. Once science got a handle on how LASERs worked, researchers realized that they could use them to send the same information that they could via electricity.

So… all those telegraphs and telephone calls that used to get shot down copper wires over great distances in analog form? Yeah, well… here was a medium that could do it through much cheaper strands called fiber optics, transmit the same data much more quickly, and do it with little energy loss over the same distances.

And, ironically, it involved the same dance of particles and fields that Tesla grasped when he figured out how AC worked way back in the day, nearly a century before that first LASER.

All of these innovations popped up on the same day, May 16, in 1888, 1891, and 1960. I think we’re a bit overdue for the next big breakthrough to happen on this day. See you in 2020?

What is your favorite science innovation involving energy? Tell us in the comments!

Forces of nature

If you want to truly be amazed by the wonders of the universe, the quickest way to do so is to learn about the science behind it.

And pardon the split infinitive in that paragraph, but it’s really not wrong in English, since it became a “rule” only after a very pedantic 19th-century grammarian, John Comly, declared that it was wrong to do so — although neither he nor his contemporaries ever used the term “split infinitive.” Unfortunately, he based this on the grammar and structure of Latin, to which that of English bears little resemblance.

That may seem like a digression, but it brings us back to one of the most famous modern split infinitives that still resonates throughout pop culture today: “To boldly go where no one has gone before,” and this brings us gracefully back to science and space.

That’s where we find the answer to the question “Where did we come from?” But what would you say exactly is the ultimate force that wound up directly creating each one of us?

One quick and easy answer is the Big Bang. This is the idea, derived from the observation that everything in the universe seems to be moving away from everything else, that at one time everything must have been in the same place. That is, what became the entire universe was concentrated into a single point that then somehow exploded outward into, well, everything.

But the Big Bang itself did not instantly create stars and planets and galaxies. It was way too energetic for that. So energetic, in fact, that matter couldn’t even form in the immediate aftermath. Instead, everything that existed was an incredibly hot quantum foam of unbound quarks. Don’t let the words daunt you. The simple version is that elements are made up of atoms, and an atom is the smallest unit of any particular element — an atom of hydrogen, helium, carbon, iron, etc. Once you move to the subatomic particles that make up the atom, you lose any of the properties that make the element unique, most of which have to do with its atomic weight and the arrangement of electrons wrapped around it.

Those atoms in turn are made up of electrons that are sort of smeared out in a statistical cloud around a nucleus made up of at least one proton (hydrogen), and then working their way up through larger collections of protons (positively charged), an often but not always equal number of neutrons (no charge), and a number of electrons (negatively charged) that may or may not equal the number of protons.

Note that despite what you might have learned in school, an atom does not resemble a mini solar system in any particular way at all, with the electron “planets” neatly orbiting the “star” that is the nucleus. Instead, the electrons live in what are called orbitals and shells, but they have a lot more to do with energy levels and probable locations than they do with literal placement of discrete dots of energy.

Things get weird on this level, but they get weirder if you go one step down and look inside of the protons and neutrons. These particles themselves are made up of smaller particles that were named quarks by Nobel Prize winner Murray Gell-Mann as a direct homage to James Joyce. The word comes from a line in Joyce’s Finnegans Wake, which itself is about as weird and wonderful as the world of subatomic science: “Three quarks for Muster Mark…”

The only difference between a proton and a neutron is the configuration of quarks inside. I won’t get into it here except to say that if we call the quarks arbitrarily U and D, a proton has two U’s and one D, while a neutron has two D’s and one U.
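
You can check the bookkeeping yourself: the U (up) quark carries a charge of +2/3 and the D (down) quark carries −1/3, so two U’s and a D sum to the proton’s +1, while two D’s and a U sum to the neutron’s 0. A quick sketch:

```python
from fractions import Fraction

# Electric charge of up (U) and down (D) quarks, in units of
# the elementary charge e.
charge = {"U": Fraction(2, 3), "D": Fraction(-1, 3)}

proton = ["U", "U", "D"]   # two ups, one down
neutron = ["D", "D", "U"]  # two downs, one up

print("proton: ", sum(charge[q] for q in proton))   # 1
print("neutron:", sum(charge[q] for q in neutron))  # 0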

And for the first few milliseconds after the Big Bang, the universe was an incredibly hot soup of all these U’s and D’s flying around, unable to connect to each other because the particles that could have tied them together, gluons, couldn’t get a grip. The universe was also incredibly dark because photons couldn’t move through it.

Eventually, as things started to cool down, the quarks and gluons started to come together, creating protons and neutrons. The protons, in turn, started to hook up with free electrons to create hydrogen. (The neutrons, not so much at first, since when unbound they tend to not last a long time.) Eventually, the protons and neutrons did start to hook up and lure in electrons, creating helium. This is also when the universe became transparent, because now the photons could move through it freely.

But we still haven’t quite gotten to the force that created all of us just yet. It’s not the attractive force that pulled quarks and gluons together, nor is it the forces that bound electrons and protons. That’s because, given just those forces, the subatomic particles and atoms really wouldn’t have done much else. But once they reached the stage of matter — once there were elements with some appreciable (though tiny) mass to toss around — things changed.

Vast clouds of gas slowly started to fall into an inexorable dance as atoms of hydrogen found themselves pulled together, closer and closer, tighter and tighter. The bigger a cloud became, the stronger the attraction, until eventually it would collapse into itself so rapidly that the hydrogen atoms in the middle slammed together with enough force to overcome the natural repulsion of their like-charged electron shells and push the nuclei together. And then you’d get… more helium, along with a gigantic release of energy.

And so, a star is born. A bunch of stars. A ton of stars, everywhere, in great abundance and with great energy. This was the first generation of stars in the universe and, to quote Blade Runner, “The light that burns twice as bright burns half as long.” These early stars were so energetic that they didn’t last long, and they managed to really squish things together. You see, after you turn hydrogen into helium, the same process turns helium into heavier elements, like lithium, carbon, neon, oxygen, and silicon. And then, once a star starts to fuse atoms into iron, a funny thing happens. Suddenly, the process stops producing energy, the star collapses into itself, and then it goes boom, scattering those elements back out into the universe.

A slower version of this process happens to stars that don’t burn as brightly, too, although the smallest ones never fuse all the way to iron and end far less violently. The first stars lasted a few hundred million years. A star like our sun is probably good for about ten billion, and we’re only halfway along.

But… have you figured out yet which force made these stars create elements and then explode and then create us, because that was the question: “What would you say exactly is the ultimate force that wound up directly creating each one of us?”

It’s the same force that pulled those hydrogen atoms together in order to create heavier elements and then made stars explode in order to blast those elements back out into the universe to create new stars and planets and us. It’s the same reason we haven’t yet mastered nuclear fusion: we can’t control this force and don’t really know yet what creates it. And it’s the same force that is keeping your butt in your chair this very moment.

It’s called gravity. Once the universe cooled down enough for matter to form — and hence mass — this most basic of laws took over, and anything that had mass started to attract everything else with mass. That’s just how it works. And once enough mass got pulled together, it came together tightly enough to overcome any other force in the universe. Remember: atoms fused because the repulsive force of the negative charge of electrons was nowhere near strong enough to resist gravity, and neither was the electric repulsion between the protons in their nuclei.
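
For a sense of just how weak gravity is particle to particle, and why it takes a cloud with a star’s worth of mass before it wins, compare the two forces between a pair of lone protons. A sketch with standard textbook constants:

```python
# Gravity vs. electrostatic repulsion between two protons.
G = 6.674e-11   # gravitational constant, N m^2 / kg^2
k = 8.988e9     # Coulomb constant, N m^2 / C^2
m = 1.673e-27   # proton mass, kg
q = 1.602e-19   # proton charge, C

# Both forces fall off as 1/r^2, so the ratio is distance-independent.
ratio = (k * q**2) / (G * m**2)
print(f"electric repulsion / gravity: {ratio:.2e}")  # ~1.2e36
```

Gravity loses by about 36 orders of magnitude between individual particles; it only wins in bulk, because mass just keeps adding up while positive and negative charges cancel out.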

Let gravity grow strong enough, in fact, and it can mash matter so hard that it squeezes the protons and electrons of a star together into neutrons, leaving a city-sized ball of them called a neutron star. Squash it even harder, and you get a black hole, a very misunderstood (by lay people) object that nonetheless seems to actually be the anchor (or one of many) that holds most galaxies together.

Fun fact, though. If our sun suddenly turned into a black hole (unlikely because it’s not massive enough) the only effect on the Earth would be… nothing for about eight minutes, and then it would get very dark and cold, although we might also be fried to death by a burst of gamma radiation. But the one thing that would not happen is any of the planets suddenly getting sucked into it.
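
That “about eight minutes” is just the light-travel time from the sun to the Earth, and changes in gravity propagate at the same speed. It’s easy to verify:

```python
# Light-travel time from the sun to the Earth.
AU = 1.496e11   # mean Earth-sun distance, meters
c = 2.998e8     # speed of light, m/s

seconds = AU / c
print(f"{seconds:.0f} s = {seconds / 60:.1f} minutes")  # ~8.3 minutes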

Funny thing about black holes. When a star collapses like that and becomes one, its radius may change drastically, say from sun-sized to New York-sized, but its gravity, felt from a distance, doesn’t change at all.
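
The standard yardstick here is the Schwarzschild radius, r = 2GM/c²: squeeze a mass inside that radius and it becomes a black hole, but from far away its pull is exactly what it was before. For the sun that works out to about 3 km (a sketch, using standard constants):

```python
# Schwarzschild radius: r = 2 * G * M / c^2
G = 6.674e-11   # gravitational constant, N m^2 / kg^2
M = 1.989e30    # mass of the sun, kg
c = 2.998e8     # speed of light, m/s

r = 2 * G * M / c**2
print(f"Schwarzschild radius of the sun: {r / 1000:.1f} km")  # ~3 km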

But I do digress. Or maybe not. Circling back to the point of this story: the universal force that we still understand the least also happens to be the same damn force that created every single atom in every one of our bodies. Whether it has its own carrier particle, or whether it’s just an emergent property of space and time, is still anybody’s guess. But whichever turns out to be true, if you know some science, then the power of gravity is actually quite impressive.

Rewind

If you could go back in time to your younger self — say right out of high school or college — what one bit of advice would you give? I think, in my case, it would be this: “Dude, you only think you’re an introvert, but you’re really not. You just need to learn now what it took me years to understand. No one else is really judging you because they’re too busy worrying about how they come off.”

But that worry about what other people thought turned me into a shy introvert for way too long a time. At parties, I wouldn’t talk to strangers. I’d hang in the corners and observe, or hope that I knew one or two people there already, so I could stick to them like your insurance agent’s calendar magnet on your fridge. Sneak in late, leave early, not really have any fun.

It certainly didn’t help on dates, especially of the first kind. “Hi, (your name). How’s it going?” Talk talk talk, question to me… awkward silence, stare at menu, or plate if order already placed.

Now this is not to imply that I had any problem going straight to close encounters of the third kind way too often, but those only happened when someone else hit on me first. Also, I had a really bad habit of not being able to say “No” when someone did show interest. I guess I should have noticed the contradiction: Can someone really be an introvert and a slut at the same time?

What I also didn’t notice was that the times I was a total extrovert all happened via art. When I wrote or acted, all the inhibitions went away. Why? Because I was plausibly not being myself. The characters I created or the characters I played were other people. They were insulation. They gave me permission to just go out there without excuse. (Okay, the same thing happened during sex, but by that point, I don’t think that introversion is even possible or very likely.)

However… the characters did not cross over into my real life. I was awkward with strangers. I was okay with friends, but only after ample time to get to know them.

And so it went until I wound up in the hospital, almost died, came out the other side alive — and then a funny thing happened. I suddenly started initiating conversations with strangers. And enjoying them. And realized that I could play myself as a character in real life and have a lot of fun doing it. And started to not really care what anyone else thought about me because I was more interested in just connecting with people and having fun.

The most important realization, though, was that I had been lying to myself about what I was for years. The “being an introvert” shtick was just an excuse. What I’d never really admitted was that I was extroverted as hell. The “almost dying” part gave the big nudge, but the “doing improv” part sealed it. Here’s the thing. Our lives, day to day and moment to moment, are performance. Most muggles never realize that. So they get stage fright, don’t know what to do or say or how to react.

But, honestly, every conversation you’ll ever have with someone else is just something you both make up on the spot, which is what improv is. The only difference is that with improv you’re making up the who, what (or want) and where, whereas in real life, you’re playing it live, so those things are already there.

Ooh, what’s that? Real life is easier than performing on stage?

One other thing that yanked me out of my “I’m an introvert” mindset, though, was an indirect result of doing improv. I’ve been working box office for ComedySportz for almost a year now (long story on how and why that happened), and I’m basically the first public face that patrons see. I’ve gotten to know a lot of our regulars, and I honestly enjoy interacting with the public, whether via walk-ups to the ticket counter or phone calls. Young me would have absolutely hated doing this, which is another reason for my intended message to that callow twat.

And so… if you’re reading this and think that you’re an introvert, do me a favor. Find something that drags you out of your comfort zone. Remind yourself that no one else is really judging you because they’re too busy worrying about themselves, then smile and tell way too much to the wait-staff or checker or usher or whomever — and then don’t give a squishy nickel over what they might think about it.

(Note: “squishy nickel” was a fifth level choice on the improv game of “New Choice” in my head just now. Which is how we do…)

5 Things that are older than you think

A lot of our current technology seems surprisingly new. The iPhone is only twelve years old, for example, although the first BlackBerry, a more primitive form of smartphone, came out in 1999. The first actual smartphone, IBM’s Simon Personal Communicator, was introduced in 1992 but not available to consumers until 1994. That was also the year that the internet started to really take off with people outside of universities or the government, although public connections to it had been available as early as 1989 (remember CompuServe, anyone?), and the first experimental internet nodes were connected in 1969.

Of course, to go from room-sized computers communicating via acoustic modems along wires to handheld supercomputers sending their signals wirelessly via satellite took some evolution and development of existing technology. Your microwave oven has a lot more computing power than the system that helped us land on the moon, for example. But the roots of many of our modern inventions go back a lot further than you might think. Here are five examples.

Alarm clock

As a concept, alarm clocks go back to the ancient Greeks, frequently involving water clocks. These were designed to wake people before dawn: in Plato’s case, so he could make it on time to class, which began at daybreak; later, they woke monks in order to pray before sunrise.

From the late Middle Ages, church towers became town alarm clocks, with the bells set to strike at one particular hour per day, and personal alarm clocks first appeared in 15th-century Europe. The first American alarm clock was made by Levi Hutchins in 1787, but he only made it for himself since, like Plato, he got up before dawn. Antoine Redier of France was the first to patent a mechanical alarm clock, in 1847. Production all but stopped during WWII, when metal and machine shops were appropriated for the war effort, and since older clocks kept breaking down in the meantime, alarm clocks became one of the first consumer items to go back into mass production even before the war ended. Atlas Obscura has a fascinating history of alarm clocks that’s worth a look.

Fax machine

It’s pretty much a dead technology now, but the fax machine was the height of high tech in offices in the ’80s and ’90s. Today you’d be hard-pressed to find one that isn’t part of the built-in hardware of a multi-purpose networked printer, and that’s only because it’s such a cheap legacy feature to include. But it might surprise you to know that the prototypical fax machine, originally an “Electric Printing Telegraph,” dates back to 1843. Basically, as soon as humans figured out how to send signals down telegraph wires, they started to figure out how to encode images — and you can bet that the second image ever sent in that way was a dirty picture. Or a cat photo. Still, it took until 1964 for Xerox to finally figure out how to use this technology over phone lines and create the Xerox LDX. The scanner/printer combo was available to rent for $800 a month — the equivalent of around $6,500 today — and it could transmit pages at a blazing eight per minute. The second-generation fax machine weighed only 46 lbs and could send a letter-sized document in only six minutes, or ten pages per hour. Whoot — progress! You can actually see one of the Electric Printing Telegraphs in action in the 1948 movie Call Northside 777, in which it plays a pivotal role in sending a photograph cross-country in order to exonerate an accused man.

In case you’re wondering, the title of the film refers to a telephone number from back in the days before what was originally called “all digit dialing.” Up until then, telephone exchanges (what we now call prefixes) were identified by the first two letters of a word, and then another digit or two or three. (Once upon a time, in some areas of the US, phone numbers only had five digits.) So NOrthside 777 would resolve itself to 667-77, with 667 being the prefix. This system started to end in 1958, and a lot of people didn’t like that.
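
If you want to play with that letter-to-digit scheme yourself, the classic North American dial layout (which had no Q or Z) makes it a few lines of code. A sketch, just for illustration:

```python
# Map old-style exchange names to digits using the classic
# North American rotary dial layout (no Q, no Z).
dial = {"2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
        "6": "MNO", "7": "PRS", "8": "TUV", "9": "WXY"}
letter_to_digit = {l: d for d, letters in dial.items() for l in letters}

def to_digits(name: str) -> str:
    # Letters become their dial digit; digits pass through unchanged.
    return "".join(letter_to_digit.get(ch.upper(), ch)
                   for ch in name if ch.isalnum())

print(to_digits("NO7") + "-77")   # NOrthside 777 -> 667-77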

Of course, with the advent of cell phones, prefixes and even area codes have become pretty meaningless, since people tend to keep the number they had in their home town regardless of where they move, and a “long distance call” is mostly a dead concept now as well, which is probably a good thing.

CGI

When do you suppose the first computer animation appeared on film? You may have heard that the first 2D computer-generated imagery (CGI) used in a movie was in 1973, in the original film Westworld, inspiration for the recent TV series. Using very primitive equipment, the visual effects designers simulated pixelated versions of actual footage in order to show us the POV of the robotic gunslinger played by Yul Brynner. It turned out to be a revolutionary effort.

The first 3D CGI happened to be in that film’s sequel, Futureworld, in 1976, where the effect was used to create the image of a rotating 3D robot head. However, the first CGI sequence ever was actually made in… 1961. Called Rendering of a planned highway, it was created at the Swedish Royal Institute of Technology on the BESK, a vacuum-tube machine that had once been the fastest computer in the world. It’s an interesting effort for the time, but the results are rather disappointing.

Microwave oven

If you’re a Millennial, then microwave ovens have pretty much always been a standard accessory in your kitchen, but home versions don’t predate your birth by much. Sales began in the late 1960s. By 1972 Litton had introduced microwave ovens as kitchen appliances. They cost the equivalent of about $2,400 today. As demand went up, prices fell. Nowadays, you can get a small, basic microwave for under $50.

But would it surprise you to learn that the first microwave ovens were created just after World War II? In fact, they were the direct result of it, due to a sudden lack of demand for magnetrons, the tubes the military used to generate microwave-band radar signals. Not wanting to lose the market, their manufacturers began to look for new uses for them. The idea of using radio waves to cook food went back to 1933, but those early devices were never developed.

Around 1946, engineers accidentally realized that the microwaves coming from these devices could cook food, and voilà! In 1947, the technology was developed, although only for commercial use, since the first units were taller than an average man, weighed 750 lbs, and cost the equivalent of $56,000 today. It took 20 years for the first home model, the Radarange, to be introduced for the mere sum of $12,000 in today’s dollars.

Music video

Conventional wisdom says that the first music video ever aired went out on August 1, 1981 on MTV, and it was “Video Killed the Radio Star” by The Buggles. As is often the case, conventional wisdom is wrong. It was the first to air on MTV, but the concept of putting visuals to rock music as a marketing tool goes back a lot farther than that. Artists and labels were making promotional films for their songs from almost the beginning of the 1960s, with the Beatles a prominent example. Before these, though, was the Scopitone, a jukebox that could play films in sync with music, popular from the late 1950s to the mid-1960s, and its predecessor was the Panoram, a similar concept popular in the 1940s which played short programs called Soundies. However, these programs played on a continuous loop, so you couldn’t choose your song.

Soundies were produced until 1946, which brings us to the real predecessor of music videos: Vitaphone Shorts, produced by Warner Bros. as sound began to come to film. Some of these featured musical acts and were essentially miniature musicals themselves. They weren’t shot on video, but they introduced the concept all the same. Here, you can watch a particularly fun example from 1935 in three-strip Technicolor that also features cameos by various stars of the era in a very loose story.

Do you know of any things that are actually a lot older than people think? Let us know in the comments!

Photo credit: Jake von Slatt