Momentous Monday: Relativity

A reminder that, while we can test our DNA or trace our family trees, we all still come from the same place.

Almost 470 years ago, in 1553, a man named John Lyly (or Lilly, Lylie, or Lylly) was born in Kent, England, the grandson of a Swedish immigrant. A contemporary of Shakespeare’s, he was kind of a big deal in his day. He was an author and playwright of some renown, and while he failed in his attempt to be appointed the Queen’s Master of the Revels, he did serve in Parliament and served Queen Elizabeth I for many years.

Around two hundred and eighty years after that, somewhere in Massachusetts, a child was born. He grew up to become a man, and he moved west. It was the era of Manifest Destiny in America, a dark time in our history. That child was John Lyly’s seventh great-grandson.

At least we weren’t using the term “Great White Hope.” Yet. To be honest, we should have used the term “Great White Death.” But, among the death there was still hope, and that child born in Massachusetts who grew up to be a man put his ideals into action.

Along with a great wave of German immigrants to America, all of whom despised slavery, this man went west, crossed the Missouri River and landed in Kansas. For me, the movie How the West Was Won is a family documentary.

When he arrived in Kansas, he helped found the town of Burlington, was one of its two attorneys (and also one of its two blacksmiths, the other of whom was the other attorney), served as its mayor at one point, and was a proud member of the Republican Party.

Yeah… quite the opposite of my politics now, or so you’d think. Except that, before the Civil War and up until FDR, the Republicans were the liberal party in America, and the Democrats were regressive. (Woodrow Wilson was a major racist, by the way.)

That child who grew up to be a great man moved west in order to help bring Kansas into the union as a free (i.e., non-slave) state. And that child, who grew up to be a great man, was my great-great-grandfather, Silas Fearl.

Since he was Lyly’s seventh great-grandson, that makes me Lyly’s eleventh. (It doesn’t seem to add up, but don’t forget that I have to add in the three generations between me and Silas, plus myself.)

Fast-forward to nearly two hundred years after Silas was born, add the evolution of the internet, and I am in touch with people who share my ancestry with him. It makes us very distant relatives, to be sure, but it means that we have a very definite connection, some by blood and some by marriage.

And this is the reason for this post. One of those third or fourth cousins, via Silas Fearl by blood, posted some pictures of her kids, and when I looked at them, the thing that most struck me was this: “Wow. This person and I have an ancestor in common.” And, in fact, looking at these faces, I could see certain elements of my own face, of my dad’s, and of my grandpa’s, and of the great uncles I managed to meet, and of the people in a family portrait taken when my father’s father was an infant.

Even so many steps apart on the branches of humanity’s family tree, I could see some of me and my immediate family in them… and across the distance of Facebook and of never having met, my first reaction was an enormous empathy. “This is a bit of me, and I want to protect it from everything bad forever.”

And, in a lot of ways, I have to suspect that this is just an illusion, an effect created by the empirical proof I have seen that means “You and I are related to each other.” That, and the evolutionary and biological forces that make us most protective of those who share our DNA.

Except that… I’ve felt this same way toward people who are absolutely not related, but I’ve still seen myself in them… and this is when I realize the harm that intellect can do to our species.

Intellect relies on so-called facts that it has been told. So, “Hey, you and this person are related” is a fact that ropes emotions into relating to the news. So… subject, object, emotion, bond.

In reality, anybody whose picture I see online is related, it’s just not as straightforward as “You and this person have the same great-great-grandfather.” I can trace part of my ancestry back to King Henry II of England and his wife, Eleanor of Aquitaine — The Lion in Winter is, for me, another unintended family documentary.

By that connection, I’m related to most of the population of England and the eastern US. Now, go back through them to another common ancestor, Charlemagne, and I’m related to most western Europeans and Americans — if you expand the definition of “America” to include all countries on both continents, north and south.

And, if you go back far enough to the last point in humanity’s evolutionary history at which the family tree’s branches split, then you could honestly say that everybody you have ever met is related to you and shares your DNA and your blood to some degree.

You should be able to recognize your features in them no matter their race, gender identity, sexual orientation, or religion. You should be able to see their humanity, and yours, in their faces.

And, go back far enough, and we are related to all animal life on this planet. Go back a little farther, and we are related to all life not only on this planet, but in the universe. Go back far enough and follow the laws of physics, and all of us, everyone, everywhere, were once the exact same bit of incredibly condensed matter.

The universe is the mother of us all, and all divisions are illusory.

I’m reminded of some old Beatles lyrics at the moment. “I am he as you are he as you are me and we are all together.” (And I had to look that up. It’s from I Am the Walrus and not Come Together.) Anyway, that’s a pretty good summation of my realization.

Once we put human history on a cosmic scale, our differences and squabbles become absolutely meaningless. All of us were born from the stars. All of us are in this together. Let’s act like it…

Image: The author’s great-grandparents and their four sons, including the author’s paternal grandfather.

Talky Tuesday: Punctuation

The fascinating origins of five common punctuation marks.

One of the side-effects of people texting and posting online — particularly if they do the latter with their phones — is that punctuation and, often, capitalization go by the wayside. I can understand this if you are using a phone, because the keyboard can be tiny, even on our modern oversized smart phones.

Generally, messages and posts done this way are short enough that missing punctuation, as well as regular paragraphing to indicate changes in thought, can’t hinder the meaning from getting through, at least not that much. Everyone is going to know what you mean in a short text, right?

But the longer you go and the more you write, the more you really do need to punctuate and paragraph your text. For example:

one of the side effects of people texting and posting online particularly if they do the latter with their phones is that punctuation and often capitalization go by the wayside i can understand this if you are using a phone because the keyboard can be tiny even on our modern oversized smart phones generally messages and posts done this way are short enough that missing punctuation as well as regular paragraphing to indicate changes in thought cant hinder the meaning from getting through at least not that much everyone is going to know what you mean in a short text right

How much harder was that paragraph to read than the two that opened the article? Same text exactly, just without any punctuation marks or paragraph breaks, so no road map. Which one would you rather be handed to read out loud with no preparation?

That’s pretty much the raison d’être of punctuation in any language — to clarify meaning, and especially to facilitate reading the words, whether out loud or in one’s head. But did you ever wonder where those punctuation marks came from?

Today, I’m going to focus on English, so we won’t be dealing with things like the cedilla, which you see in the word façade, or the tilde, which is common in Spanish words like mañana. I’ll even pass on the French accent seen above in the italicized expression, which just means “purpose” — literally, “reason for being.”

Depending upon the source, there are either fourteen or fifteen punctuation marks in English, but I’ll be focusing on fewer. I don’t agree with the fifteenth item on the latter list, which is the bullet point. I consider it more of a formatting tool than a punctuation mark. In a numbered list, while the numbers may or may not have a period after them, nobody thinks of the numbers as punctuation, right?

I’ll also be skipping brackets and curly braces because they really aren’t in common use. And, finally, lists of more than five items tend to get cumbersome, so I’m going to stick with the most common ones and take a look at where they came from.

By the way, missing from both of the above lists: our friend the ampersand (&), which I definitely consider a punctuation mark, but which actually used to be the 27th letter of the alphabet. In fact, under its original name, you can’t spell alphabet without it, but those two letters eventually morphed into the pretzel (or, as I see it, the panda sitting down to eat bamboo) that we all know and love today. And yes, you’ll never un-see that one.

Here are the origin stories of five heroic punctuation marks.

  1. Period: While the period, known in British English as the “full stop,” is probably the most common punctuation mark in European languages, it came from the same forge as all of the other “dot” punctuations, including the comma, colon, semicolon, and ellipsis. The concept of the period was originally created by a Greek librarian and grammarian, Aristophanes of Byzantium, who had grown tired of the published works of the time having no breaks between words, making the scrolls very hard to read.

Originally, his system involved placing dots either low, in the middle, or high relative to the heights of the letters, and the position indicated the length of the pause, much as a period, comma, and colon indicate different lengths of pauses nowadays. However, his system did not pass directly to us. The Romans were not big fans of punctuation, and a lot of their works were copied down in so-called scriptio continua, or continuous writing.

IFYOUREWONDERINGCONTINUOUSWRITINGINLATINLOOKEDJUSTLIKETHIS◦EXHAUSTINGISNTIT

Ironically, punctuation didn’t come back into it until Christianity began to take hold in the crumbling Roman Empire. Monks tasked with copying manuscripts by hand brought back the marks they knew from the classical Greek of Aristophanes’ era, largely to preserve the meaning of the texts, frequently biblical, that they were copying.

And, again, if they were working to translate the Old Testament, which was largely written in Hebrew, they were going from a language that lacked punctuation, word spacing, and vowels, with the added bonus of only being written in the present tense. Yeah, that must have been a hair-puller. And, no doubt, the New Testament stuff they were working with probably had many of the same issues, since it was written in the Greek, Latin, Hebrew, and Aramaic of the late 1st century.

These were the people instrumental in writing down the first official version of that bible in the early 4th century, starting with the Council of Nicaea, and over the next 1,100 years, they also kind of invented emojis of a sort. What? They were bored college-aged dudes who weren’t allowed to get laid. What else could they do?

So things proceeded on the punctuation front without a lot happening until that dude Gutenberg got to printing in the 15th century. And that was when all of the existing punctuation got locked down because it had to be. That’s what standardization via mass manufacturing does, after all. Not necessarily a bad thing by any means.

  2. Question mark: This was another punctuation mark created by a person, Alcuin of York, an English poet and scholar who was invited to join the court of Charlemagne, who was first King of the Franks, then King of the Lombards, and finally Emperor of the Romans from the late 8th to early 9th centuries. If you have any western European blood in you, he is your ancestor — and that’s not a “probably.”

Alcuin was a prolific author and very familiar with the old dot system of the Greeks, but he sought to improve it, so he created the punctus interrogativus, which is pretty much the Latin version of what we call it now, although his probably looked more like this: .~.

That kind of resembles a confused emoji standing on its head.

And while you may think that the question and exclamation marks are connected, with the latter just being the unsquiggled version of the former, you’d be wrong. In fact, no one is really sure where the exclamation mark came from, and it didn’t even appear on typewriter keyboards until the relatively late date of 1970.

  3. Hyphen: In the present day, hyphens pretty much exist only to join words that haven’t quite become full-on compounds. But once upon a time, before computers had this wonderful ability to justify text and avoid breaking one word across two lines, hyphens did exactly that. They told you whether a word had been broken and to look for more of it on the next line. In practice, it would look something like this:

 He contemplated the scene, not sure what he was going to find, but fully ex-

pecting it to be something dangerous; something he’d rather not have to con-

front on his own.

Yeah. Messy and awkward, isn’t it? And yet, if you read any published material from earlier than about the late 80s, this is what you get and, honestly, it’s as annoying as hell.

The hyphen itself goes back, again, to ancient Greece, where it was a sort of arc drawn below the letters of the words to be joined. It was still common enough when Gutenberg got around to creating his moveable type that it was adopted. However, since he couldn’t include punctuation below the baseline of his letters for very practical reasons, he moved the hyphen to the medial position we all know today.

  4. Parentheses: These most useful of marks were a product of the 14th century, and also brought to us by the creativity of monks copying manuscripts. And, again, I’ll remind you that these geniuses happened to be a part of their era’s version of what we’re currently calling Gen Z. You know. The ones after the Millennials that you should be paying attention to.

Anyway… in their wisdom, these monks decided to draw half circles around certain parts of the text (mostly to indicate that it was connected to but not part of the main idea) in order to set it off from the rest. In a lot of ways, parentheticals became a mental aside for the reader — hear this in a different voice.

And, like tits and testicles, parentheses are intended to always travel in pairs. (Yes, I know that not everyone has two of either, but note the “intended” part. Nature tries. Sometimes, she fucks up.)

  5. Quotation marks: These are yet another thing that the Greeks created, the Romans ignored, and medieval monks brought back. Originally, Greeks in the second century B.C. used a sort of arrow to indicate that a line was a quote, and they stuck them in the margins. This form of quotation mark is still visible in modern languages, for example in the Spanish «quotation marks», which are pairs of little arrows.

When we got to the sixteenth century, they became a pair of commas before a line and outside of the margins, and indeed to this day, you’ll see this in ,,German quotes‘‘, which have two commas before and two open single quotes after. Nowadays, you can’t say he said, she said without quotation marks.

So there you go. The origins of five-ish common punctuation marks. Which one is your favorite, and why? Tell us in the comments!

 

Momentous Monday: Backwards and in high heels

In honor of the Webb Space Telescope, let’s look at some forgotten pioneers of science.

The James Webb Space Telescope is about to fire up for first light and soon send back images of a star in the Big Dipper, HD84406. Although this star will be too bright and close to focus on once the Webb is fully operational, it was chosen as an ideal target for calibrating and focusing everything.

But as this next giant leap in human knowledge is about to happen, let’s take a look at some great discoveries of the past that may not have been made by the people you think made them.

The famous astronomer Herschel was responsible for a lot of accomplishments, including expanding and organizing the catalog of nebulae and star clusters, discovering eight comets, polishing and mounting mirrors and telescopes to optimize their light-gathering powers, and keeping meticulous notes on everything.

By being awarded a Gold Medal of the Royal Astronomical Society and being named an honorary member thereof, and by holding a government position and receiving a salary as a scientist, Herschel became the first woman to do all of those things.

What? Did you think I was talking about the other one? You know — the only one most of you had heard of previously because he discovered Uranus. Oh, and he had that whole penis thing going on.

Caroline Lucretia Herschel, who was William’s younger sister by eleven years and was born in 1750, did not have a penis, and so was ignored by history. Despite the honors she received, one of her great works, the aforementioned expansion of the New General Catalogue (NGC), was published with her brother’s name on it.

If you’re into astronomy at all, you know that the NGC is a big deal and has been constantly updated ever since.

While she lacked William’s junk, she shared his intellectual curiosity, especially when it came to space and studying the skies. It must have been genetic — William’s son John Herschel was also an astronomer of some repute — and it was his Aunt Caroline, not Dad, who gave him a huge boost.

She arranged all of the objects then in the NGC so they were grouped by similar polar coordinates — that is, at around the same number of degrees away from the celestial poles. This enabled her nephew to systematically resurvey them, add more data about them, and discover new objects.

Caroline was not the first woman in science to be swept under history’s rug by the men. The neverending story of the erasure of women told in Hidden Figures was ancient by the time the movie came out, never mind the time that it actually happened. Caroline was in good company.

Maria Winckelmann Kirch, for example, was also an astronomer, born 80 years before Caroline and most likely the first woman to actually discover a comet. But of course history gave that honor to her husband, Gottfried Kirch, who was thirty years her senior. However, Herr Kirch himself confirms in his own notes that she was the one who found it:

“Early in the morning (about 2:00 AM) the sky was clear and starry. Some nights before, I had observed a variable star and my wife (as I slept) wanted to find and see it for herself. In so doing, she found a comet in the sky. At which time she woke me, and I found that it was indeed a comet… I was surprised that I had not seen it the night before”. [Source]

Maria’s interest and abilities in science came from a person we might think of as unlikely nowadays: a Lutheran minister, who happened to be her father. Why did he teach her? Because he believed that his daughter deserved the same education any boy at the time did, so he home-schooled her. This ended when Maria lost both of her parents when she was 13, but a neighbor and self-taught astronomer, Christoph Arnold, took her on as an apprentice and took her in as part of the family.

Getting back to Hidden Figures, though, one of the earliest “computers,” as these women of astronomy were known, was Henrietta Leavitt. Given what was considered the boring and onerous task of studying a class of stars known as Cepheid variables, she actually discovered something very important.

The length of time it takes a Cepheid to go through its brightest-to-darkest sequence is directly related to its luminosity. This means that if you know the timing of that sequence, you know how intrinsically bright the star is. Once you know that, you can look at how bright it appears to be from Earth and, ta-da! Using very basic laws of optics, you can then determine how far away the star is.
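To make that chain of reasoning concrete, here’s a tiny Python sketch going from period to luminosity to distance. The coefficients in the period-luminosity relation are a rough modern calibration I’m assuming purely for illustration, and the function name is mine; the shape of the logic, though, is exactly what Leavitt discovered.

```python
import math

def cepheid_distance_parsecs(period_days, apparent_mag):
    """Estimate distance to a Cepheid from its pulsation period and apparent brightness.

    The period-luminosity coefficients below are approximate (calibrations vary),
    so treat the result as illustrative rather than precise.
    """
    # Leavitt law (approximate V-band calibration): longer period = intrinsically brighter
    absolute_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 * log10(d) - 5, with d in parsecs
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Example: a Cepheid that pulses every 10 days and appears at magnitude 12
print(f"{cepheid_distance_parsecs(10, 12.0):,.0f} parsecs")  # roughly 16,000
```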

It’s for this reason that Cepheids are known as a “standard candle.” They are the yardsticks of the universe that allow us to measure the unmeasurable. And her boss at the time took all the credit, so I’m not even going to mention his name.

And this is why we have The Leavitt Constant and the Leavitt Telescope today.

No, just kidding. Her (male) boss, who shall still remain nameless here because, “Shame, shame,” took all of the credit for work he didn’t do, and then some dude named Edwin Hubble took that work and used it to figure out how far away various stars actually were, and so determined that the universe was A) oh so very big, and B) expanding. He got a constant and telescope named after him. Ms. Leavitt… not so much.

There are way too many examples of women as scientific discoverers being erased, with the credit being given to men, and in every scientific field. You probably wouldn’t be on the internet reading this now if no one had ever come up with the founding concepts of computer programming, aka “how to teach machines to calculate stuff for us.”

For that, you’d have to look to a woman who was basically the daughter of the David Bowie of her era, although he wasn’t a very dutiful dad. He would be Lord Byron. She would be Ada Lovelace, who was pretty much the first coder ever — and this was back in the days when computers were strictly mechanical, in the form of Charles Babbage’s difference and analytical engines.

The former was pretty much just an adding machine, and literally one that could only do that. So, for example, if you gave it the problem “What is two times 27,” it would find the solution by just starting with two, and then adding two to it 26 times.
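In code, that “adding machine” approach to multiplication looks something like this. It’s a toy sketch of the idea as described above, not a model of how Babbage’s actual gears worked:

```python
def multiply_by_repeated_addition(a, b):
    """Multiply two positive integers by starting with a and adding a, b - 1 more times."""
    total = a
    for _ in range(b - 1):
        total += a
    return total

print(multiply_by_repeated_addition(2, 27))  # 54, after 26 additions
```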

The latter, the analytical engine, was much more like a computer, with complex programming. Based on the French Jacquard loom concept, which used punched cards to control weaving, it anticipated all of the common parts of a modern computer as well as programming logic.

Basically, a computer does what it does by working with data in various places. There’s the slot through which you enter the data; the spot that holds the working data; the one that will pull bits out of that info, do operations on it, and put it back in other slots with the working data; and the place where it winds up, which is the user-readable output.

The analytical engine could also do all four math operations: addition, subtraction, multiplication, and division.

An analog version of this would be a clerk in a hotel lobby with a bunch of pigeonhole mail boxes behind them, some with mail, some not. Guests come to the desk and ask (input), “Any mail for me?” The clerk goes to the boxes, finds the right one based on input (guest room number, most likely), then looks at the box (quaintly called PEEK in programming terms).

If the box is empty (IF(MAIL)=FALSE), the clerk returns the answer “No.” But if it’s not empty (IF(MAIL)=TRUE), the clerk retrieves that data and gives it to the guest. Of course, the guest is picky, so they tell the clerk, “No junk mail and no bills.”

So, before handing it over, the clerk goes through every piece, rejecting anything that matches (IF(OR(“Junk”,”Bill”),FALSE,TRUE)), while everything else is kept by the same formula. The rejected data is tossed in the recycle bin, while the rest is given to the guest — output.

Repeat the process for every guest who comes to ask.
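If you’d like to see that hotel clerk as an actual program, here’s a rough Python sketch of the same flow. The mailbox contents and the “no junk, no bills” filter are just my stand-ins for the analogy above, not anything Babbage or Lovelace ever wrote:

```python
# The hotel lobby as a toy computer: mailboxes are memory, the guest's request is
# input, the clerk's filtering is the processing, and what gets handed over is output.
mailboxes = {
    "101": ["postcard from Kansas", "junk: timeshare offer"],
    "102": [],
    "103": ["bill: electric company", "letter from Aunt Caroline"],
}

def clerk(room_number):
    """Check a guest's box (PEEK), filter out junk and bills, return the rest."""
    mail = mailboxes.get(room_number, [])   # look in the pigeonhole
    if not mail:                            # IF(MAIL) = FALSE
        return "No mail for you."
    keep = [m for m in mail if not m.startswith(("junk:", "bill:"))]
    return keep or "Nothing but junk and bills, sorry."

# Repeat the process for every guest who comes to ask.
for room in ("101", "102", "103"):
    print(room, "->", clerk(room))
```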

Now, Babbage was great at creating the hardware and figuring out all of that stuff. But when it came to the software, he couldn’t quite get it, and this is where Ada Lovelace came in. She created the algorithms that made the magic happen — and then was forgotten.

By the way, Bruce Sterling and William Gibson have a wonderfully steampunk alternate history novel, The Difference Engine, that revolves around the idea that Babbage and Lovelace basically launched the home computer revolution a couple of centuries early, with the British computer industry becoming the PC to France’s Mac. It’s worth a read.

Three final quick examples: Nettie Maria Stevens discovered the concept of biological sex being passed through chromosomes long before anyone else; it was Lise Meitner, not Otto Hahn, who discovered nuclear fission; and, in the ultimate erasure, it was Rosalind Franklin, and neither Watson nor Crick, who determined the double helix structure of DNA.

This erasure is so pronounced and obvious throughout history that it even has a name: The Matilda Effect, named by the historian Margaret Rossiter for the suffragist Matilda Joslyn Gage.

Finally, a note on the title of this piece. It comes from a 1982 comic strip called Frank and Ernest, and it pretty much sums up the plight of women trying to compete in any male-dominated field. They have to work harder at it and are constantly getting pushed away from advancement anyway.

So to all of the women in this article, and all women who are shattering glass ceilings, I salute you. I can’t help but think that the planet would be a better place with a matriarchy.

For all of the above histories and more, it’s plain to see why finally having a female Vice President of the United States (and a person of color at that) is a truly momentous and significant moment in the history of the country and the world. Here’s hoping that we get the same for the SCOTUS very soon.

Image: James Webb Space Telescope, (CC BY 2.0), via Wikimedia Commons 

Theatre Thursday: Fact and fiction

About six hundred and nine years ago, Henry V was crowned king of England. You probably know him as that king from the movie with Kenneth Branagh, or the BBC series aired under the title The Hollow Crown.

Either way, you know him because of Shakespeare. He was the king who grew up in Shakespeare’s second tetralogy. Yes, that’s a set of four plays and since it was his second, Shakespeare sort of did the Star Wars thing first: he wrote eight plays on the subject of the English Civil war.

And, much like Lucas, he wrote the original tetralogy first, then went back and did the prequels. Richard II, Henry IV, Part 1, Henry IV, Part 2 and Henry V were written after but happened before Henry VI, Part 1, Henry VI, Part 2, Henry VI, Part 3 and Richard III.

Incidentally, Henry VI, Part 1, is famous for having Joan of Arc (aka Joan la Pucelle in the play) as one of the antagonists. Funny thing is, that name wasn’t slander on Shakespeare’s part. That’s what she preferred to call herself.

Meanwhile, Richard III, of course, is the Emperor Palpatine of the series, although we never did get a Richard IV, mainly because he never existed in history. Well, not officially. Richard III’s successor was Henry VII, and Shakespeare never wrote about him, either, although he did gush all over Henry VIII, mainly because he was the father of the Bard’s patron, Elizabeth I. CYA.

If you’ve ever seen the film My Own Private Idaho, directed by Gus Van Sant and starring River Phoenix and Keanu Reeves, then you’ve seen a modern retelling of the two parts of Henry IV.

Now when it comes to adapting true stories to any dramatic medium, you’re going to run into the issue of dramatic license. A documentary shouldn’t have this problem and shouldn’t play with the truth, although it happens. Sometimes, it can even prove fatal.

But when it comes to a dramatic retelling, it is often necessary to fudge things, sometimes a little and sometimes a lot. It’s not at all uncommon for several characters to be combined into a composite just to make for a cleaner plot. After all, is it that big of a difference if, say, King Flagbarp IX in real life was warned about a plot against him in November by his chamberlain Norgelglap, but the person who told him the assassin’s name in February was his chambermaid Hegrezelda?

Maybe, maybe not, but depending on what part either of those characters plays in the rest of the story, as well as the writer’s angle, they may both be combined as Norgelglap or as Hegrezelda, or become a third, completely fictionalized character, Vlanostorf.

Time frames can also change, and a lot of this lands right back in Aristotle’s lap. He created the rules of drama long before hacks like the late Syd Field tried (and failed), and Ari put it succinctly. Every dramatic work has a beginning, a middle, and an end, and should have unity of place, unity of time, and unity of action.

A summary of the last three is this, although remember that Aristotle was writing about the stage. For film and video, your mileage will vary slightly.

The story takes place in one particular location, although that location can be a bit broad. It can be the king’s castle, or it can be the protagonist’s country.

It should take place over a fairly short period of time. Aristotle liked to keep it to a day, but there’s leeway, and we’ve certainly seen works that have taken place over an entire lifetime — although that is certainly a form of both unity of time and unity of place, if you consider the protagonist to be the location as well.

Unity of action is a little abstract, but in a nutshell it’s this: Your plot is about one thing. There’s a single line that goes from A to Z: What your protagonist wants, and how they get it.

Now, my own twist on the beginning, middle, and end thing is that this is a three-act structure that gives us twenty-seven units. (The five-act structure Shakespeare used came later, via Horace and Roman drama, and it has long since fallen out of fashion anyway.)

Anyway, to me, we have Act I, II, and III. Beginning, middle, and end. But each of those has its own beginning, middle and end. So now we’re up to nine: I: BME; II: BME; III: BME.

Guess what? Each of those subunits also has a beginning, middle, and end. I’m not going to break that one down further than this. The beginning of the beginning, Act I: B, has its own BME, repeat eight more times.

The end result is 3 x 3 x 3, or twenty-seven.

And that’s my entire secret to structure. You’re welcome.
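For what it’s worth, here’s a quick way to see all twenty-seven units laid out at once. It’s a throwaway sketch, and the labels are mine, not any industry standard:

```python
# Enumerate the 27 structural units: 3 acts x 3 parts x 3 sub-parts.
parts = ["beginning", "middle", "end"]
units = [
    f"Act {act}: {part} ({sub})"
    for act in ("I", "II", "III")
    for part in parts
    for sub in parts
]
print(len(units))   # 27
print(units[0])     # Act I: beginning (beginning)
print(units[-1])    # Act III: end (end)
```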

But because of these little constraints, and because history is messy, it’s necessary to switch things up to turn a true story into a “based on true events” work. Real life doesn’t necessarily have neat beginnings, middles, and endings. It also doesn’t necessarily take place in one spot, or in a short period of time.

So it becomes the artist’s job to tell that story in a way that is as true to reality as possible without being married to the facts.

Although it is also possible to go right off the rails with it, and this is one of the reasons I totally soured on Quentin Tarantino films. It’s one thing to fudge facts a little bit, but when he totally rewrites history in Inglourious Basterds, ignores historical reality in Django Unchained, and then curb stomps reality and pisses on its corpse in Once Upon a Time in Hollywood, I’m done.

Inglorious Misspelling is a particularly egregious example because the industry does a great disservice in selling false history to young people who, unfortunately, aren’t getting the best educations right now.

Anecdotal moment: A few years back, an Oscar-winning friend of mine had a play produced that told the story of the 442nd Infantry Regiment. It was a unit composed almost entirely of second-generation Japanese Americans during WW II, and joining it was the alternative given to going to an internment camp.

Of course, being racists, the U.S. government couldn’t send them to the Pacific Theatre to fight, so they sent them to Europe, and a lot of the play takes place in Italy, where the regiment was stationed. And, at intermission, my playwright friend heard two 20-something audience members talking to each other. One of them asked, “What was the U.S. even doing in Italy in World War II?” and the other just shrugged and said, “Dunno.”

So, yeah. If you’re going to go so far as to claim that Hitler was killed in a burning movie theater before the end of the war, just stop right there before you shoot a frame. Likewise with claiming that the Manson murders never happened because a couple of yahoos ran into the killers first.

Yeah, Quentin, you were old, you were there, you remember. Don’t stuff younger heads with shit.

But I do digress.

In Shakespeare’s case, he was pretty accurate in Henry V, although in both parts of Henry IV, he created a character who was both one of his most memorable and one of his more fictional: Sir John Falstaff. In fact, the character was so popular that, at the Queen’s command, Shakespeare gave him his own spinoff, The Merry Wives of Windsor. Hm. Shades of Solo in the Star Wars universe?

Falstaff never existed in real life, but was used as a way to tell the story of the young and immature Henry (not yet V) of Monmouth, aka Prince Hal.

Where Shakespeare may have played more fast and loose was in Richard III. In fact, the Bard vilified him when it wasn’t really deserved. Why? Simple. He was kissing up to Elizabeth I. She was a Tudor, daughter of Henry VIII who, as mentioned previously, was the son of Henry VII, the king who took over when Richard III lost the Wars of the Roses.

The other time that Shakespeare didn’t treat a king so well in a play? King John — which I personally take umbrage to, because I’m directly descended from him. No, really. But the idea when Willie Shakes did that was to draw a direct contrast to how Good Queen Bess did so much better in dealing with Papal interference. (TL;DR: He said, “Okay,” she said, “Eff off.”)

Since most of my stage plays have been based on true stories, I’ve experienced this directly many times, although one of the more interesting came with the production of my play Bill & Joan, because I actually accidentally got something right.

When I first wrote the play, the names of the cops in Mexico who interrogated William S. Burroughs were not included in the official biography, so I made up two fictional characters, Enrique and Tito. And so they stayed like that right into pre-production in 2013.

Lo and behold, a new version of the biography of Burroughs I had originally used for research came out, and I discovered two amazing things.

First… I’d always known that Burroughs’ birthday was the day before mine, but I suddenly found out that his doomed wife actually shared my birthday. And the show happened to run during both dates.

Second… the names of the cops who interrogated him were finally included, and one of them was named… Tito.

Of course, I also compressed time, moved shit around, made up more than a few characters, and so forth. But the ultimate goal was to tell the truth of the story, which was: Troubled couple who probably shouldn’t have ever gotten together deals with their issues in the most violent and tragic way possible, and one of them goes on to become famous. The other one dies.

So yes, if you’re writing fiction it can be necessary to make stuff up, but the fine line is to not make too much stuff up. A little nip or tuck here and there is fine. But, outright lies? Nah. Let’s not do that.

Two more Nights at the Museum

It looks like Disney+ is going to be bringing back the Night at the Museum franchise. Here’s why the first three are so much fun.

I reviewed the first Night at the Museum film back in November. I tell the story of how I came to watch it in that article, but at the time none of the other films were on Disney+. That changed at some point and I found them this week, so I watched the two sequels.

The first is 2009’s Night at the Museum: Battle of the Smithsonian and the third, and so-far last, is 2014’s Night at the Museum: Secret of the Tomb.

I say “so-far last” because it looks like Disney is trying to revive the IP in some way, shape, or form. But of course. I guess that’s fitting, because it’s in keeping with the exhibits themselves in whichever museum winds up with the Golden Tablet of Pharaoh Ahkmenrah — they just won’t die.

The first film caught me off-guard and pulled me in because it sets up (and sets off) the premise so economically and quickly, and Ben Stiller has a great knack for playing that Everyperson character we can all relate to.

Each of the three films follows the same general outline, naturally — Stiller’s Larry Daley winds up getting drawn into (or back to) the (night) lives of his beloved museum exhibits, managing to survive the challenge and learn things along the way. There’s a very strong father-son element built into the trilogy, set up in the first film as Larry only takes the museum job so he can keep his apartment and visitation rights with his son.

It all pays off with a nice parallel story in the third film.

Over the course of the films, the producers and writers do what so many long-running franchise films have done, and I was reminded in many ways of both the Indiana Jones and James Bond franchises.

That is, you’re working with basically the same group of heroes/supporting staff, so you need to change up the locations and villains with each outing. The Indiana Jones movies did it by moving through both time and space, as well as changing the McGuffin each time.

With James Bond, each film was set in whatever present day it was made in even though time was visibly passing with each new film. However, other than home base in England, the principal action of the films took place all over the world, with some of the installments covering multiple countries.

Night at the Museum doesn’t get quite that elaborate, but it does have a nice, logical progression. The first film takes place in New York City, the second in Washington, D.C., and the third in London, England, with a prologue set in 1938 in Egypt.

Night at the Museum: Battle of the Smithsonian pretty much dives right into the action, with a number of our familiar characters/exhibits from the first film being crated up for delivery to the National Archives in D.C. under the opening credits. Meanwhile, we learn that Larry Daley (Stiller) left his museum job a few years earlier and did go on to be a successful entrepreneur, selling his own line of inventions via infomercials and now living in a very upscale place and being a much better provider for his son.

For some reason, Daley drops by the museum that evening, only to learn from the curator, Dr. McPhee (Ricky Gervais), that “shipped to the National Archives” really means stowed away forever — and the Tablet of Ahkmenrah is not scheduled to make the trip, meaning that Daley’s friends from his museum days will never wake up again.

However, the night before they make the trip, they do wake up and bring along the tablet, which leads to an emergency phone call from the gang to Daley — they seem to be in a bit of a pickle.

Daley heads to D.C. only to find out that the National Archives are not open to the public and are also located deep below ground under the entire complex of 19 museums the Smithsonian comprised at the time. After very cleverly stealing the ID of a local guard (Jonah Hill), Daley coordinates with his son by phone to get down to the archives, only to quickly learn that cell phone reception only works about a floor and a half down.

From there, it charges into non-stop action as we learn that Ahkmenrah’s older (but snubbed) brother Kahmunrah (Hank Azaria) has learned of the existence of the tablet and wants it for himself. He’s been holding the New York exhibits captive.

Oh — Daley also meets up with Amelia Earhart (Amy Adams), who seems to take an immediate shine to him. It’s all good, silly fun from this point to the climax, with most of it held up by the fun dialogue and situations, but especially the performances.

Stiller, as with the first film, grounds everything, and American actor Owen Wilson and British thespian Steve Coogan continue to provide their “odd couple” pairing as two museum miniatures brought to life — the former an American cowboy (Jedediah) and the latter a Roman soldier (Octavius).

Normally, I can’t stand Owen Wilson, but his characterization works for me in all three films, and Coogan is a perfect foil for him — or vice versa. (It wouldn’t surprise me at all if Disney+ spun these two characters off into some animated series, like “Miniatures of the Museum” or something like that.)

Rounding out the cast, Azaria plays his villainous pharaoh to perfection, wisely opting to use a voice that has strong hints of Boris Karloff — who, besides Frankenstein’s monster, was also famous for playing the Mummy. Azaria brings his usual single-minded focus to the role, making it perhaps greater than the sum of its lines.

He manages to be by turns menacing and ridiculous and every shade in-between, which is exactly the tone that a villain in these films needs to have.

It’s probably not a huge spoiler to say that Daley and his museum pals save the day and Daley learns another life lesson, leaving everything set up for the third film but, refreshingly, without any annoying “Wait for the sequel!” flags hung in place. We do end with Daley going back to work at the museum, extending evening hours, and letting the exhibits interact with visitors — who, of course, assume that the exhibits are either actors or elaborate special effects, and business is good.

The series could have ended there and been perfectly satisfying, but the next film took everything a bit farther and a step further.

Night at the Museum: Secret of the Tomb starts out with no credits, just the aforementioned prologue set in 1938, which is when a joint U.S.-British-Egyptian expedition discovers the tomb of Merenkahre, Ahkmenrah’s father and original creator of the golden tablet. Despite warnings that disturbing the tomb means “the end will come,” the expedition proceeds to load everything up.

One of the members of that expedition is 12-year-old C.J. Fredericks. If you’ve been paying attention, you’ve seen that name before, and somebody that age in 1938 could conceivably still be alive in 2014. (In fact, the actor cast in that grown-up role was born about the same time as the character and is still alive now, knock wood.)

Again, this film dives right into the action after the opening, as we learn that Daley has gone back to his job as night guard, and is overseeing the re-opening of the Hayden Planetarium, to also be hosted by the re-animated exhibits. Everything seems to be going well until it suddenly all starts to glitch out. Ahkmenrah explains to Daley that the tablet is starting to corrode, and the magic may end soon.

Daley runs across a photo of C.J. Fredericks while researching the tablet, and after a museum librarian mentions that C.J. worked there as a night guard for years, Daley puts it together. He tracks down Fredericks (Dick Van Dyke) as well as his other former workmates (Mickey Rooney as Gus and Bill Cobbs as Reginald) in a rest home.

Fredericks denies everything at first, but Daley uses the photo to get him to spill the beans. The ones with the answer are Ahkmenrah’s parents, but they’re at the British Museum. Daley convinces his boss, McPhee, to let him take Ahkmenrah and the tablet to the British Museum. McPhee reluctantly agrees.

Of course, Daley and his son have stowaways on the journey, and most of the core group wind up in the British Museum, night guard Tilly (Rebel Wilson) none the wiser. After dark, the tablet does its magic and brings the exhibits to life, and our gang has to find the exhibit with Ahkmenrah’s parents’ tomb in it in order to learn the secrets of the tablet and save it.

The first hitch in their plans comes when they are rescued from a triceratops skeleton by a wax statue of Sir Lancelot come to life, but he can’t just let them all waltz off. He’s a Knight of the Round Table, after all, and is sworn to protect those on quests.

While he seeks the holy grail, the others seek the secret of the tablet — and also the whereabouts of Jedediah and Octavius, who were sucked down an air vent in the floor. While Dexter the capuchin monkey heads off through the ducts to locate the miniatures, the rest head off to try to find the Egyptology section of the collection. Once there, they find Ahkmenrah’s parents, and his father explains that the only way to save the tablet is to charge it by full exposure to moonlight — it’s been inside for too long.

However, someone else has other plans, taking the tablet and running off. Will our heroes be able to stop them in time and save all of the living exhibits?

Given the franchise so far, the answer to that question is probably obvious, but the one nice bit about it is that rather than have it be a “Hero physically defeats villain” moment, it happens because the villain suddenly realizes what’s actually at stake for the hero in this whole thing. It isn’t the tablet itself, but it definitely requires the tablet in order to happen.

Back home, Daley quits the museum again, this time having no idea what comes next, but he and his son have grown a lot closer. An epilogue three years later takes place when a touring exhibit from the British Museum drops in on the New York Museum and it’s party time, a light and fitting end to the entire series.

So — are they among the greatest film trilogies ever made? Not really. But will they keep you and your family entertained while introducing a bit of (mostly accurate although with tropes played for laughs) history? Most certainly.

The cast carries the show here, with Stiller’s Daley, Robin Williams’ Teddy Roosevelt, Rami Malek’s Ahkmenrah, and Wilson and Coogan’s Jedediah and Octavius doing the heavy lifting.

Other stand-outs include Mizuo Peck as Sacagawea, Patrick Gallagher as Attila the Hun, and Ricky Gervais as Dr. McPhee, all three of whom appear in all three movies.

Dick Van Dyke gets a lot to do in the first film, doesn’t appear in the second, and has a cameo in the third. Mickey Rooney has the same pattern of appearances, but the writers never knew what to do with his character, other than make him a belligerent little man who threatens to punch out Daley from the get-go and who never changes.

That part is kind of sad, because in the third film, which was shot two months before he died, Rooney had clearly suffered some physical decline, with his character in a wheelchair and the right side of his face kept mostly away from the camera.

Robin Williams, meanwhile, took his own life in August of that year, so when the film was released in December 2014, it carried memorial notices for both actors. Still, that shouldn’t dampen any of the humor and adventure in the films. The three together and individually have some great lessons to teach, both of the historical variety and of the emotional variety.

Grab your family or friends, and have a little Museum marathon.

Wonderous Wednesday: 5 Things that are older than you think

A lot of our current technology seems surprisingly new. The iPhone is only about fourteen years old, for example, although the first BlackBerry, a more primitive form of smart phone, came out in 1999. The first actual smart phone, IBM’s Simon Personal Communicator, was introduced in 1992 but not available to consumers until 1994. That was also the year that the internet started to really take off with people outside of universities or the government, although public connections to it had been available as early as 1989 (remember CompuServe, anyone?), and the first experimental internet nodes were connected in 1969.

Of course, to go from room-sized computers communicating via acoustic modems along wires to handheld supercomputers sending their signals wirelessly via satellite took some evolution and development of existing technology. Your microwave oven has a lot more computing power than the system that helped us land on the moon, for example. But the roots of many of our modern inventions go back a lot further than you might think. Here are five examples.

Alarm clock

As a concept, alarm clocks go back to the ancient Greeks, frequently involving water clocks. These were designed to wake people up before dawn, in Plato’s case to make it to class on time, which started at daybreak; later, they woke monks in order to pray before sunrise.

From the late middle ages, church towers became town alarm clocks, with the bells set to strike at one particular hour per day, and personal alarm clocks first appeared in 15th-century Europe. The first American alarm clock was made by Levi Hutchins in 1787, but he only made it for himself since, like Plato, he got up before dawn. Antoine Redier of France was the first to patent a mechanical alarm clock, in 1847. Because of a lack of production during WWII due to the appropriation of metal and machine shops to the war effort (and the breakdown of older clocks during the war), they became one of the first consumer items to be mass-produced just before the war ended. Atlas Obscura has a fascinating history of alarm clocks that’s worth a look.

Fax machine

Although it’s pretty much a dead technology now, it was the height of high tech in offices in the 80s and 90s, but you’d be hard pressed to find a fax machine that isn’t part of the built-in hardware of a multi-purpose networked printer nowadays, and that’s only because it’s such a cheap legacy to include. But it might surprise you to know that the prototypical fax machine, originally an “Electric Printing Telegraph,” dates back to 1843.

Basically, as soon as humans figured out how to send signals down telegraph wires, they started to figure out how to encode images — and you can bet that the second image ever sent in that way was a dirty picture. Or a cat photo.

Still, it took until 1964 for Xerox to finally figure out how to use this technology over phone lines and create the Xerox LDX. The scanner/printer combo was available to rent for $800 a month — the equivalent of around $6,500 today — and it could transmit pages at a blazing 8 per minute. The second generation fax machine only weighed 46 lbs and could send a letter-sized document in only six minutes, or ten pages per hour. Whoot — progress!

You can actually see one of the Electric Printing Telegraphs in action in the 1948 movie Call Northside 777, in which it plays a pivotal role in sending a photograph cross-country in order to exonerate an accused man.

In case you’re wondering, the title of the film refers to a telephone number from back in the days before what was originally called “all digit dialing.” Up until then, telephone exchanges (what we now call prefixes) were identified by the first two letters of a word, and then another digit or two or three. (Once upon a time, in some areas of the US, phone numbers only had five digits.) So NOrthside 777 would resolve itself to 667-77, with 667 being the prefix. This system started to end in 1958, and a lot of people didn’t like that.
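If you want to play with the letters-to-digits conversion yourself, here’s a quick Python sketch using the standard dial layout. The function name and the two-letter grouping are my own choices for illustration:

```python
# Letters-to-digits mapping from the old exchange-name system (Q and Z were
# absent from rotary dials, which doesn't matter for this example).
LETTER_TO_DIGIT = {
    **dict.fromkeys("ABC", "2"), **dict.fromkeys("DEF", "3"),
    **dict.fromkeys("GHI", "4"), **dict.fromkeys("JKL", "5"),
    **dict.fromkeys("MNO", "6"), **dict.fromkeys("PRS", "7"),
    **dict.fromkeys("TUV", "8"), **dict.fromkeys("WXY", "9"),
}

def exchange_to_digits(name, rest, letters=2):
    """Convert an exchange-name number like 'NOrthside 777' to what you'd dial."""
    dialed = "".join(LETTER_TO_DIGIT[c] for c in name[:letters].upper())
    return dialed + rest

print(exchange_to_digits("Northside", "777"))   # 66777, i.e. 667-77
```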

Of course, with the advent of cell phones, prefixes and even area codes have become pretty meaningless, since people tend to keep the number they had in their home town regardless of where they move to, and a “long distance call” is mostly a dead concept now as well, which is probably a good thing.

CGI

When do you suppose the first computer animation appeared on film? You may have heard that the original 2D computer generated imagery (CGI) used in a movie was in 1973 in the original film Westworld, inspiration for the recent TV series. Using very primitive equipment, the visual effects designers simulated pixelation of actual footage in order to show us the POV of the robotic gunslinger played by Yul Brynner. It turned out to be a revolutionary effort.

The first 3D CGI happened to be in this film’s sequel, Futureworld in 1976, where the effect was used to create the image of a rotating 3D robot head. However, the first ever CGI sequence was actually made in… 1961. Called Rendering of a planned highway, it was created by the Swedish Royal Institute of Technology on what was then the fastest computer in the world, the BESK, driven by vacuum tubes. It’s an interesting effort for the time, but the results are rather disappointing.

Microwave oven

If you’re a Millennial, then microwave ovens have pretty much always been a standard accessory in your kitchen, but home versions don’t predate your birth by much. Sales began in the late 1960s. By 1972 Litton had introduced microwave ovens as kitchen appliances. They cost the equivalent of about $2,400 today. As demand went up, prices fell. Nowadays, you can get a small, basic microwave for under $50.

But would it surprise you to learn that the first microwave ovens were created just after World War II? In fact, they were the direct result of it, due to a sudden lack of demand for magnetrons, the devices used by the military to generate radar in the microwave range. Not wanting to lose the market, their manufacturers began to look for new uses for the tubes. The idea of using radio waves to cook food went back to 1933, but those devices were never developed.

Around 1946, engineers accidentally realized that the microwaves coming from these devices could cook food, and voilà! In 1947, the technology was developed, although only for commercial use, since the devices were taller than an average man, weighed 750 lbs and cost the equivalent of $56,000 today. It took 20 years for the first home model, the Radarange, to be introduced for the mere sum of $12,000 of today’s dollars.

Music video

Conventional wisdom says that the first music video to ever air went out on August 1, 1981 on MTV, and it was “Video Killed the Radio Star” by The Buggles. As is often the case, conventional wisdom is wrong. It was the first to air on MTV, but the concept of putting visuals to rock music as a marketing tool goes back a lot farther than that.

Artists and labels were making promotional films for their songs back at almost the beginning of the 1960s, with the Beatles a prominent example. Before these, though, was the Scopitone, a jukebox that could play films in sync with music popular from the late 1950s to mid-1960s, and their predecessor was the Panoram, a similar concept popular in the 1940s which played short programs called Soundies.

However, these programs played on a continuous loop, so you couldn’t choose your song. Soundies were produced until 1946, which brings us to the real predecessor of music videos: Vitaphone Shorts, produced by Warner Bros. as sound began to come to film. Some of these featured musical acts and were essentially miniature musicals themselves. They weren’t shot on video, but they introduced the concept all the same. Here, you can watch a particularly fun example from 1935 in 3-strip Technicolor that also features cameos by various stars of the era in a very loose story.

Do you know of any things that are actually a lot older than people think? Let us know in the comments!

Photo credit: Jake von Slatt

Momentous Monday: Tricky Questions

Here are five tricky questions to test how much you know about what you think you know.

  1. When did the United States become its own country?

If you’re an American, you probably wanted to say July 4, 1776, didn’t you? You could, but you’d be wrong. We had to win the war that was started when we declared independence, and that took a while.

The U.S.A. wasn’t officially that until March 4, 1789, when the Constitution went into effect — George Washington was inaugurated as the first President a few weeks later. Why we don’t celebrate this as the birth of our nation is beyond me, but March 4 was the date of the presidential inauguration right up until 1933, when it was moved to its current January 20 date by Constitutional Amendment — number 20, to be exact, or XX if you prefer.

  2. How much gravity, in g, do astronauts on the ISS experience?

You’re probably thinking Zero, aren’t you? Nope. The gravity up there is a net downward force — as in toward the center of the Earth — of 0.89g, or almost what you’d experience on the surface of the Earth itself.

“But they’re floating around up there!” you may say.

Yes, they are, sort of, but they’re not really floating. They’re falling in the same way that you fall when you’re in a rollercoaster or other thrill ride that makes a sudden and very steep drop. It feels like you’re floating, but that’s because you and the seat are accelerating downward together, so the seat no longer has to push up against you to counteract the downward pull of gravity.

Drop faster than 1g, and you’ll rise out of your seat — but you’re still in Earth’s gravity. The only real difference on the ISS is that it’s also moving sideways at around 17,500 mph, so it keeps falling around the Earth instead of into it.
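If you want to check that 0.89 figure yourself, it falls right out of the inverse-square law. Here’s a quick sketch; the radius and altitude are typical round numbers I’m assuming, not official values:

```python
# Quick check of that 0.89g figure: Newtonian gravity falls off with the square
# of the distance from Earth's center.
EARTH_RADIUS_KM = 6371        # mean radius of Earth (approximate)
ISS_ALTITUDE_KM = 400         # typical ISS orbital altitude (approximate)

g_fraction = (EARTH_RADIUS_KM / (EARTH_RADIUS_KM + ISS_ALTITUDE_KM)) ** 2
print(f"Gravity at ISS altitude: {g_fraction:.2f} g")   # ~0.89 g
```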

  3. When there’s a Full Moon in the sky, how much of the Moon can we actually see?

Did you say “All of it?” Nice answer, but wrong. We’re only seeing half of it, of course, and that’s the near side. We never see the far side, but we actually do see more than just half of the Moon over time.

In fact, over time we can see up to about 60% of the Moon’s surface thanks to libration, an apparent tilt and wobble of the Moon as seen from Earth. It wobbles east to west over the course of each orbit, between perigee and apogee.

The former is when the Moon is closest to Earth during its orbit, and the latter is when it’s at its farthest. Because the Moon’s rotation rate is steady while its orbital speed isn’t, between these points it appears to turn a bit farther, about 8 degrees in either direction, showing a bit of its backside. Cheeky!

Likewise, the Moon “nods” north and south. This happens for much the same reason that the Earth has seasons: tilt. The Moon’s orbital plane is tilted about 5 degrees relative to Earth’s, and the Moon’s equator is tilted another 1.5 degrees relative to the plane of the ecliptic, which is defined by Earth’s orbit around the Sun (which is also why Earth’s own inclination to that plane is zero by definition).

These lunar tilts add up to about 6.5 degrees, which is roughly how much of the far side we can peek over to the north and south, depending on where the Moon is in its orbit.

So add it all up — 2 x 8 degrees plus 2 x 6.5 degrees, or 16 plus 13 degrees, and we get 29 degrees, more or less. Add that to the 180 we already see to get 209, divide by 360, and that’s about 58% of the surface we can see over time, give or take.
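
And if you’d rather let a script do that arithmetic, here’s the same back-of-the-envelope math, using the rounded figures from above (8 degrees of east-west peek and 6.5 degrees of north-south peek on each side):

```python
# The libration arithmetic from the text, spelled out.
east_west_deg = 8.0     # extra degrees we peek around the east and west limbs
north_south_deg = 6.5   # extra degrees we peek over the north and south poles

extra_deg = 2 * east_west_deg + 2 * north_south_deg  # 16 + 13 = 29 degrees
visible_fraction = (180 + extra_deg) / 360           # near side plus the extras
print(f"About {visible_fraction:.0%} of the lunar surface, over time")
```

Which prints about 58%, matching the give-or-take estimate above.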

  4. So how much of the Moon do we see when the phase is a Half Moon?

You’re probably thinking “Half of the half we see, so one quarter.” Well, that’s the part we can see that’s lit — but have you ever realized that you can still see the whole near side of the Moon no matter what the phase, even if it’s a New Moon?

This is because the Earth itself has an albedo of 30 to 35%, varying due to cloud cover. This number indicates how much of the Sun’s light it reflects.

Under most circumstances, there’s enough light bouncing off the Earth (an effect called earthshine) to make the unlit part of the Moon faintly visible as a dim disk against the night sky. It’s even more obvious against a very starry background, because there will be a “hole” in the stars where the rest of the Moon is.

If you live anywhere near the eastern shore of the Pacific, this effect is particularly pronounced, since there will be a good amount of sunlight reflecting off of the water whether it’s under cloud cover or not.

The Moon’s albedo is only about 12%, but it’s getting hit by a lot of light from the Sun, and between sunlight and earthshine you can often make out the Moon’s whole near side, even during the day. Sure, it’s fairly pale, but it’s there. Just look up in the right part of the sky and ta-da!

  5. One last question for the Americans: What is the official language of the United States?

Yep. Contrary to what way too many people think, the official language of the United States is not English. In fact, it’s… nothing. We as a country do not have an official language. Some states have adopted official languages of their own, while others have not.

Not counting territories, we have 19 states with no official language, although some languages do have special status, like Spanish in New Mexico and French in Louisiana. The District of Columbia, for its part, provides for equal access for everyone, whether they speak English or not.

Twenty states have declared for English only, with two states (Arizona and Massachusetts) subsequently passing new English-only laws after previous laws were declared unconstitutional. My home state, California, passed an English-only initiative in 1986, when the state was much more conservative. However, for all practical purposes this isn’t really enforced, at least not in any government agency.

There are three states that have English as an official language alongside others: Hawaii, with Hawaiian; Alaska, with more than 20 recognized indigenous languages; and South Dakota, with Sioux. Okay, and although it’s a territory, I’ll include Puerto Rico, with English and Spanish.

By the way, when the Colonies declared their independence from England, they also considered a full linguistic split as well, and there were many proponents of making Hebrew the official language of the United States.

How did you do, and how many other tricky questions or errors in “common knowledge” do you know of? Let me know in the comments!

Wednesday Wonders: How the world almost ended once

I happen to firmly believe that climate change is real, it is happening, and humans are contributing to and largely responsible for it, but don’t worry — this isn’t going to be a political story. And I’ll admit that I can completely understand some of the deniers’ arguments. No, not the ones that say that “global warming” is a hoax made up so that “evil liberals” in government can tax everyone even more. The understandable arguments are the ones that say, “How could mere humans have such a big effect on the world’s climate?” and “Climate change is cyclic and will happen with or without us.”

That second argument is actually true, but it doesn’t change the fact that our industrialization has had a direct and measurable impact in terms of more greenhouse gases emitted and the planet heating up. Also note: Just because you’re freezing your ass off under the polar vortex doesn’t mean that Earth isn’t getting hotter. Heat just means that there’s more energy in the system and with more energy comes more chaos. Hot places will be hotter. Cold places will be colder. Weather in general will become more violent.

As for the first argument, that a single species, like humans, really can’t have all that great an effect on this big, giant planet, I’d like to tell you a story that will demonstrate how wrong that idea is, and it begins nearly 2.5 billion years ago with the Great Oxygenation Event.

Prior to that point in time, the Earth was mostly populated by anaerobic organisms — that is, organisms that do not use oxygen in their metabolism. In fact, oxygen is toxic to them. The oceans were full of bacteria of this variety. The atmosphere at the time was about 30% carbon dioxide and close to 70% nitrogen, with perhaps a hint of methane, but no oxygen at all. Compare this to the atmosphere of Mars today, which is 95% carbon dioxide, 2.7% nitrogen, and less than 2% other gases. Side note: This makes the movie Mars Attacks! very wrong, because a major plot point was that the Martians could only breathe nitrogen, which is currently 78% of our atmosphere but almost absent in theirs. Oops!

But back to those anaerobic days and what changed them: cyanobacteria, often called blue-green algae even though they’re actually bacteria, figured out the trick to photosynthesis — that is, producing energy not from food, but from sunlight and a few neat chemical processes. (Incidentally, this was also the first step on the evolutionary path to eyes.) Basically, these microscopic organisms would take in water and carbon dioxide, use the power of photons to break some bonds, and then release the oxygen from both of those molecules while keeping the carbon and hydrogen for themselves.
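
For the chemistry-minded, the net result those cyanobacteria stumbled onto is the same overall reaction plants and algae use today. This is a simplified summary that glosses over the many intermediate light-dependent and light-independent steps:

```latex
% Net reaction for oxygenic photosynthesis (simplified; intermediate steps omitted)
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{light}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

The sugar they keep; the oxygen is, from their point of view, just exhaust.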

At first, things were okay, because oxygen tended to be trapped by organic matter (any carbon-containing compound) or iron (this is how rust is made), and there was plenty of both floating around to do the job, so both kinds of bacteria got along fine. But there eventually came a point when there were not enough oxygen traps left, and things started to go off the rails. Instead of being safely sequestered, the oxygen escaped into the atmosphere, with several devastating results.

First, of course, this element was toxic to the anaerobic bacteria, and it started to kill them off big time. They just couldn’t deal with it, so they either died or retreated to new ecological niches in low-oxygen environments, like the bottom of the sea. Second, though, and more impactful: all of this oxygen wound up taking out whatever atmospheric methane was left and converting it into carbon dioxide. Methane is the more powerful greenhouse gas, and it had been keeping the planet warm; carbon dioxide was and still is less effective at that. The end result of the change was a sudden and very long ice age known as the Huronian glaciation, which lasted for 300 million years — the oldest and longest ice age to date. In the process, most of the cyanobacteria died off as well.

So there you have it. A microscopic organism, much smaller than any of us and without any kind of technology or even intelligence to speak of, almost managed to wipe out all life forms on the planet and completely alter the climate for tens of millions of years, and they may have tipped the balance in as little as a million years.

We are much, much bigger than bacteria — about a million times, actually — and so our impact on the world is proportionally larger, even though those ancient microbes vastly outnumbered our current population of around 7.5 billion. And yet those tiny, mindless organisms managed to wipe out most of the life on Earth at the time and change the climate for far longer than humans have even existed.

Don’t kid yourself by thinking that humanity cannot and is not doing the same thing right now. Whether we’ll manage to turn the planet into Venus or Pluto is still up for debate. Maybe we’ll get a little of both. But to try to hand-wave it away by claiming we really can’t have that much of an impact is the road to perdition. If single-celled organisms could destroy the entire ecosystem, imagine how much worse we can do with our roughly 30 to 40 trillion cells, and then do your best to not contribute to that destruction.

Theatre Thursday: Fact and fiction

About six hundred and eight years ago, Henry V was crowned king of England. You probably know him as that king from the movie with Kenneth Branagh, or the BBC series aired under the title The Hollow Crown.

Either way, you know him because of Shakespeare. He was the king who grew up over the course of Shakespeare’s second tetralogy. Yes, that’s a set of four plays, and since it was his second, Shakespeare sort of did the Star Wars thing first: he wrote eight plays on the dynastic struggles for the English crown.

And, much like Lucas, he wrote the original tetralogy first, then went back and did the prequels. Richard II, Henry IV, Part 1, Henry IV, Part 2 and Henry V were written after but happened before Henry VI, Part 1, Henry VI, Part 2, Henry VI, Part 3 and Richard III.

Incidentally, Henry VI, Part 1, is famous for having Joan of Arc (aka Joan la Pucelle in the play) as one of the antagonists. Funny thing is, that name wasn’t slander on Shakespeare’s part. That’s what she preferred to call herself.

Meanwhile, Richard III, of course, is the Emperor Palpatine of the series, although we never did get a Richard IV, mainly because he never existed in history. Well, not officially. Richard III’s successor was Henry VII, and Shakespeare never wrote about him, either, although he did gush all over Henry VIII, mainly because he was the father of the Bard’s patron, Elizabeth I. CYA.

If you’ve ever seen the film My Own Private Idaho, directed by Gus Van Sant and starring River Phoenix and Keanu Reeves, then you’ve seen a modern retelling of the two parts of Henry IV.

Now when it comes to adapting true stories to any dramatic medium, you’re going to run into the issue of dramatic license. A documentary shouldn’t have this problem and shouldn’t play with the truth, although it happens. Sometimes, it can even prove fatal.

But when it comes to a dramatic retelling, it is often necessary to fudge things, sometimes a little and sometimes a lot. It’s not at all uncommon for several characters to be combined into a composite just to make for a cleaner plot. After all, is it that big of a difference if, say, King Flagbarp IX in real life was warned about a plot against him in November by his chamberlain Norgelglap, but the person who told him the assassin’s name in February was his chambermaid Hegrezelda?

Maybe, maybe not, but depending on what part either of those characters plays in the rest of the story, as well as the writer’s angle, they may both be combined as Norgelglap or as Hegrezelda, or become a third, completely fictionalized character, Vlanostorf.

Time frames can also change, and a lot of this lands right back in Aristotle’s lap. He created the rules of drama long before hacks like the late Syd Field tried (and failed), and Ari put it succinctly. Every dramatic work has a beginning, a middle, and an end, and should have unity of place, unity of time, and unity of action.

A summary of the last three is this, although remember that Aristotle was writing about the stage. For film and video, your mileage will vary slightly.

The story takes place in one particular location, although that location can be a bit broad. It can be the king’s castle, or it can be the protagonist’s country.

It should take place over a fairly short period of time. Aristotle liked to keep it to a day, but there’s leeway, and we’ve certainly seen works that have taken place over an entire lifetime — although that is certainly a form of both unity of time and unity of place, if you consider the protagonist to be the location as well.

Unity of action is a little abstract, but in a nutshell it’s this: Your plot is about one thing. There’s a single line that goes from A to Z: What your protagonist wants, and how they get it.

Now, my own twist on the beginning, middle, and end thing is that this is a three-act structure that gives us twenty-seven units. (The five-act structure Shakespeare used, a later classical convention rather than something Aristotle actually prescribed, has long since fallen out of fashion.)

Anyway, to me, we have Act I, II, and III. Beginning, middle, and end. But each of those has its own beginning, middle and end. So now we’re up to nine: I: BME; II: BME; III: BME.

Guess what? Each of those subunits also has a beginning, middle, and end. I’m not going to break that one down further than this. The beginning of the beginning, Act I: B, has its own BME, repeat eight more times.

The end result is 3 x 3 x 3, or twenty-seven.
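
If it helps to see all twenty-seven on one screen, here’s a throwaway bit of Python that enumerates them. The labels are purely illustrative, nothing canonical:

```python
# Enumerate the 3 x 3 x 3 structure: three acts, each with a beginning,
# middle, and end, each of which has its own beginning, middle, and end.
from itertools import product

acts = ["I", "II", "III"]
parts = ["beginning", "middle", "end"]

units = [f"Act {act}: {outer} / {inner}"
         for act, outer, inner in product(acts, parts, parts)]

print(len(units))  # 27
print(units[0])    # Act I: beginning / beginning
print(units[-1])   # Act III: end / end
```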

And that’s my entire secret to structure. You’re welcome.

But because of these little constraints, and because history is messy, it’s necessary to switch things up to turn a true story into a “based on true events” work. Real life doesn’t necessarily have neat beginnings, middles, and endings. It also doesn’t necessarily take place in one spot, or in a short period of time.

So it becomes the artist’s job to tell that story in a way that is as true to reality as possible without being married to the facts.

Although it is also possible to go right off the rails with it, and this is one of the reasons I totally soured on Quentin Tarantino films. It’s one thing to fudge facts a little bit, but when he totally rewrites history in Inglourious Basterds, ignores historical reality in Django Unchained, and then curb-stomps reality and pisses on its corpse in Once Upon a Time in Hollywood, I’m done.

Inglorious Misspelling is a particularly egregious example, because the industry does a great disservice in selling false history to young people who, unfortunately, aren’t getting the best educations right now.

Anecdotal moment: A few years back, an Oscar-winning friend of mine had a play produced that told the story of the 442nd Infantry Regiment, a unit composed almost entirely of second-generation Japanese Americans during World War II. Joining it was the alternative offered to going to an internment camp.

Of course, being racist, the U.S. government couldn’t send them to the Pacific Theatre to fight, so it sent them to Europe, and a lot of the play takes place in Italy, where the regiment was stationed. And, at intermission, my playwright friend heard two 20-something audience members talking to each other. One of them asked, “What was the U.S. even doing in Italy in World War II?” and the other just shrugged and said, “Dunno.”

So, yeah. If you’re going to go so far as to claim that Hitler was killed in a burning movie theater before the end of the war, just stop right there before you shoot a frame. Likewise with claiming that the Manson murders never happened because a couple of yahoos ran into the killers first.

Yeah, Quentin, you were old, you were there, you remember. Don’t stuff younger heads with shit.

But I do digress.

In Shakespeare’s case, he was pretty accurate in Henry V, although in both parts of Henry IV, he created a character who was both one of his most memorable and one of his more fictional: Sir John Falstaff. In fact, the character was so popular that, at the Queen’s command, Shakespeare gave him his own spinoff, The Merry Wives of Windsor. Hm. Shades of Solo in the Star Wars universe?

Falstaff never existed in real life, but was used as a way to tell the story of the young and immature Henry (not yet V) of Monmouth, aka Prince Hal.

Where Shakespeare may have played more fast and loose was in Richard III. In fact, the Bard vilified him when it wasn’t really deserved. Why? Simple. He was kissing up to Elizabeth I. She was a Tudor, daughter of Henry VIII who, as mentioned previously, was the son of Henry VII, the king who took over when Richard III lost the Wars of the Roses.

The other time that Shakespeare didn’t treat a king so well in a play? King John — which I personally take umbrage at, because I’m directly descended from him. No, really. But the idea when Willie Shakes did that was to draw a direct contrast with how Good Queen Bess did so much better in dealing with Papal interference. (TL;DR: He said, “Okay,” she said, “Eff off.”)

Since most of my stage plays have been based on true stories, I’ve experienced this directly many times, although one of the more interesting instances came with the production of my play Bill & Joan, because I actually, accidentally, got something right.

When I first wrote the play, the names of the cops in Mexico who interrogated William S. Burroughs were not included in the official biography, so I made up two fictional characters, Enrique and Tito. And so they stayed like that right into pre-production in 2013.

Lo and behold, a new version of the biography of Burroughs I had originally used for research came out, and I discovered two amazing things.

First… I’d always known that Burroughs’ birthday was the day before mine, but I suddenly found out that his doomed wife actually shared my birthday. And the show happened to run during both dates.

Second… the names of the cops who interrogated him were finally included, and one of them was named… Tito.

Of course, I also compressed time, moved shit around, made up more than a few characters, and so forth. But the ultimate goal was to tell the truth of the story, which was: Troubled couple who probably shouldn’t have ever gotten together deals with their issues in the most violent and tragic way possible, and one of them goes on to become famous. The other one dies.

So yes, if you’re writing fiction it can be necessary to make stuff up, but the fine line is to not make too much stuff up. A little nip or tuck here and there is fine. But, outright lies? Nah. Let’s not do that.

Momentous Monday: Relativity

Almost 470 years ago, in 1553, a man named John Lyly (or Lilly, Lylie, or Lylly) was born in Kent, England, the grandson of a Swedish immigrant. A contemporary of Shakespeare’s, he was kind of a big deal in his day. He was an author and playwright of some renown, and while he failed in his attempt to be appointed the Queens Master of the Revels, he did serve in Parliament as well as served Queen Elizabeth I for many years.

Around two hundred and eighty years after that, somewhere in Massachusetts, a child was born. He grew up to become a man, and he moved west. It was the era of Manifest Destiny in America, a dark time in our history. That child was John Lyly’s seventh great-grandson.

At least we weren’t using the term “Great White Hope.” Yet. To be honest, we should have used the term “Great White Death.” But, among the death there was still hope, and that child born in Massachusetts who grew up to be a man put his ideals into action.

Along with a great wave of German immigrants to America, all of whom despised slavery, this man went west, crossed the Missouri river and landed in Kansas. For me, the movie How the West Was Won is a family documentary.

When he arrived in Kansas, he helped found the town of Burlington, was one of two attorneys in the town (and also one of two blacksmiths, the other of whom was the other attorney), mayor of the town at one point, and a proud member of the Republican Party.

Yeah… quite the opposite of my politics now, or so you’d think. Except that, before the Civil War and up until FDR, the Republicans were the liberal party in America, and the Democrats were regressive. (Woodrow Wilson was a major racist, by the way.)

That child who grew up to be a great man moved west in order to help bring Kansas into the union as a free (i.e., non-slave) state. And that child, who grew up to be a great man, was my great-great-grandfather, Silas Fearl.

Since he was Lily’s seventh great-grandson, that makes me Lily’s eleventh. (It doesn’t seem to add up, but don’t forget that I have to add in the two generations between me and Silas., plus myself.)

Fast-forward to nearly two-hundred years after Silas was born, and the evolution of the internet, and I am in touch with people who share my ancestry with him. It makes us very distant relatives, to be sure, but it means that we have a very definite connection, some by blood and some by marriage.

And this is the reason for this post. One of those third or fourth cousins, via Silas Fearl by blood, posted some pictures of her kids, and when I looked at them the thing that most struck me was this. “Wow. This person and I have an ancestor in common.” And, in fact, looking at these faces, I could see certain elements of my own face, of my dad’s, and of my grandpa’s, and of the great uncles I managed to meet, and of the people in a family portrait taken when my father’s father was an infant.

Even so many steps apart on the branches of humanity’s family tree, I could see some of me and my immediate family in them… and across the distance of never having met and Facebook, my first reaction was an enormous empathy. “This is a bit of me, and I want to protect it from everything bad forever.”

And, in a lot of ways, I have to suspect that this is just an illusion, an effect created by the empirical proof I have seen that means “You and I are related to each other.” That, and the evolutionary and biological forces that make us most protective of those who share our DNA.

Except that… I’ve felt this same way toward people who are absolutely not related, but I’ve still seen myself in them… and this is when I realize the harm that intellect can do to our species.

Intellect relies on the so-called facts it has been told. So, “Hey, you and this person are related” is a fact that ropes your emotions into relating to the news. So… subject, object, emotion, bond.

In reality, anybody whose picture I see online is related to me; it’s just not as straightforward as “You and this person have the same great-great-grandfather.” I can trace part of my ancestry back to King Henry II of England and his wife, Eleanor of Aquitaine — The Lion in Winter is, for me, another unintended family documentary.

By that connection, I’m related to most of the population of England and the eastern US. Now, go back through them to another common ancestor, Charlemagne, and I’m related to most western Europeans and Americans — if you expand the definition of “America” to include all countries on both continents, north and south.

And, if you go back far enough to the last point in humanity’s evolutionary history at which the family tree’s branches split, then you could honestly say that everybody you have ever met is related to you and shares your DNA and your blood to some degree.

You should be able to recognize your features in them no matter their race, gender identity, sexual orientation, or religion. You should be able to see their humanity, and yours, in their faces.

And, go back far enough, and we are related to all animal life on this planet. Go back a little farther, and we are related to all life not only on this planet, but in the universe. Go back far enough and follow the laws of physics, and all of us, everyone, everywhere, were once the exact same bit of incredibly condensed matter.

The universe is the mother of us all, and all divisions are illusory.

I’m reminded of some old Beatles lyrics at the moment. “I am he as you are he as you are me and we are all together.” (And I had to look that up. It’s from I Am the Walrus and not Come Together.) Anyway, that’s a pretty good summation of my realization.

Once we put human history on a cosmic scale, our differences and squabbles become absolutely meaningless. All of us were born from the stars. All of us are in this together. Let’s act like it…

Image: The author’s great-grandparents and their four sons, including the author’s paternal grandfather.
