Wondrous Wednesday: 5 Things that are older than you think

A lot of our current technology seems surprisingly new. The iPhone is only about fourteen years old, for example, although the first BlackBerry, a more primitive form of smartphone, came out in 1999. The first actual smartphone, IBM’s Simon Personal Communicator, was introduced in 1992 but not available to consumers until 1994. That was also the year that the internet started to really take off with people outside of universities or the government, although public connections to it had been available as early as 1989 (remember CompuServe, anyone?), and the first experimental internet nodes were connected in 1969.

Of course, to go from room-sized computers communicating via acoustic modems along wires to handheld supercomputers sending their signals wirelessly via satellite took some evolution and development of existing technology. Your microwave oven has a lot more computing power than the system that helped us land on the moon, for example. But the roots of many of our modern inventions go back a lot further than you might think. Here are five examples.

Alarm clock

As a concept, alarm clocks go back to the ancient Greeks, frequently involving water clocks. These were designed to wake people up before dawn, in Plato’s case to make it on time to class, which started at daybreak; later, they woke monks in order to pray before sunrise.

From the late Middle Ages, church towers became town alarm clocks, with the bells set to strike at one particular hour per day, and personal alarm clocks first appeared in 15th-century Europe. The first American alarm clock was made by Levi Hutchins in 1787, but he only made it for himself since, like Plato, he got up before dawn. Antoine Redier of France was the first to patent a mechanical alarm clock, in 1847. Alarm clock production largely stopped during WWII, when metal and machine shops were appropriated for the war effort, and because so many older clocks broke down in the meantime, alarm clocks became one of the first consumer items to go back into mass production just before the war ended. Atlas Obscura has a fascinating history of alarm clocks that’s worth a look.

Fax machine

Although the fax machine is pretty much a dead technology now, it was the height of high tech in offices in the ’80s and ’90s. These days, you’d be hard pressed to find a fax machine that isn’t part of the built-in hardware of a multi-purpose networked printer, and that’s only because it’s such a cheap legacy feature to include. But it might surprise you to know that the prototypical fax machine, originally an “Electric Printing Telegraph,” dates back to 1843.

Basically, as soon as humans figured out how to send signals down telegraph wires, they started to figure out how to encode images — and you can bet that the second image ever sent in that way was a dirty picture. Or a cat photo.

Still, it took until 1964 for Xerox to finally figure out how to use this technology over phone lines and create the Xerox LDX. The scanner/printer combo was available to rent for $800 a month — the equivalent of around $6,500 today — and it could transmit pages at a blazing 8 per minute. The second-generation fax machine only weighed 46 lbs and could send a letter-sized document in only six minutes, or ten pages per hour. Whoot — progress!

You can actually see one of the Electric Printing Telegraphs in action in the 1948 movie Call Northside 777, in which it plays a pivotal role in sending a photograph cross-country in order to exonerate an accused man.

In case you’re wondering, the title of the film refers to a telephone number from back in the days before what was originally called “all-digit dialing.” Up until then, telephone exchanges (what we now call prefixes) were identified by the first two letters of a word, and then another digit or two or three. (Once upon a time, in some areas of the US, phone numbers only had five digits.) So NOrthside 777 would resolve itself to 667-77, with 667 being the prefix. This system started to end in 1958, and a lot of people didn’t like that.
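If you’re curious how an exchange name turns into digits, here’s a minimal sketch using the traditional letter layout on a North American dial (the function name and grouping are mine, purely for illustration):

```python
# Minimal sketch: convert an old exchange-name number to digits using the
# traditional rotary dial layout (no letters on 0 or 1, no Q or Z).
DIAL = {letter: str(digit)
        for digit, letters in {2: "ABC", 3: "DEF", 4: "GHI", 5: "JKL",
                               6: "MNO", 7: "PRS", 8: "TUV", 9: "WXY"}.items()
        for letter in letters}

def to_digits(number: str) -> str:
    """Translate the letters to dial digits and keep existing digits as-is."""
    return "".join(DIAL.get(ch.upper(), ch) for ch in number if ch.isalnum())

print(to_digits("NO777"))  # the NO of NOrthside plus 777 -> "66777", i.e. 667-77
```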

Of course, with the advent of cell phones, prefixes and even area codes have become pretty meaningless, since people tend to keep the number they had in their home town regardless of where they move to, and a “long distance call” is mostly a dead concept now as well, which is probably a good thing.

CGI

When do you suppose the first computer animation appeared on film? You may have heard that the original 2D computer-generated imagery (CGI) used in a movie was in 1973 in the original film Westworld, inspiration for the recent TV series. Using very primitive equipment, the visual effects designers simulated pixelation of actual footage in order to show us the POV of the robotic gunslinger played by Yul Brynner. It turned out to be a revolutionary effort.

The first 3D CGI happened to be in this film’s sequel, Futureworld, in 1976, where the effect was used to create the image of a rotating 3D robot head. However, the first ever CGI sequence was actually made in… 1961. Called Rendering of a planned highway, it was created at the Swedish Royal Institute of Technology on what was then the fastest computer in the world, the BESK, driven by vacuum tubes. It’s an interesting effort for the time, but the results are rather disappointing.

Microwave oven

If you’re a Millennial, then microwave ovens have pretty much always been a standard accessory in your kitchen, but home versions don’t predate your birth by much. Sales began in the late 1960s. By 1972 Litton had introduced microwave ovens as kitchen appliances. They cost the equivalent of about $2,400 today. As demand went up, prices fell. Nowadays, you can get a small, basic microwave for under $50.

But would it surprise you to learn that the first microwave ovens were created just after World War II? In fact, they were the direct result of it, due to a sudden lack of demand for magnetrons, the vacuum tubes the military had used to generate radar signals in the microwave range. Not wanting to lose the market, their manufacturers began to look for new uses for the tubes. The idea of using radio waves to cook food went back to 1933, but those devices were never developed.

Around 1946, engineers discovered by accident that the microwaves coming from these devices could cook food, and voilà! In 1947, the technology was developed, although only for commercial use, since the devices were taller than an average man, weighed 750 lbs, and cost the equivalent of $56,000 today. It took 20 years for the first home model, the Radarange, to be introduced for the mere sum of $12,000 in today’s dollars.

Music video

Conventional wisdom says that the first music video to ever air went out on August 1, 1981 on MTV, and it was “Video Killed the Radio Star” by The Buggles. As is often the case, conventional wisdom is wrong. It was the first to air on MTV, but the concept of putting visuals to rock music as a marketing tool goes back a lot farther than that.

Artists and labels were making promotional films for their songs as far back as the early 1960s, with the Beatles a prominent example. Before these, though, was the Scopitone, a film-playing jukebox popular from the late 1950s to the mid-1960s, and its predecessor was the Panoram, a similar concept popular in the 1940s that played short musical films called Soundies.

However, these programs played on a continuous loop, so you couldn’t choose your song. Soundies were produced until 1946, which brings us to the real predecessor of music videos: Vitaphone Shorts, produced by Warner Bros. as sound began to come to film. Some of these featured musical acts and were essentially miniature musicals themselves. They weren’t shot on video, but they introduced the concept all the same. Here, you can watch a particularly fun example from 1935 in 3-strip Technicolor that also features cameos by various stars of the era in a very loose story.

Do you know of any things that are actually a lot older than people think? Let us know in the comments!

Photo credit: Jake von Slatt

Momentous Monday: Tricky Questions

Here are five tricky questions to test how much you know about what you think you know.

  1. When did the United States become its own country?

If you’re an American, you probably wanted to say July 4, 1776, didn’t you? You could, but you’d be wrong. We had to win the war that was started when we declared independence, and that took a while.

The U.S.A. wasn’t officially that until March 4, 1789, when the Constitution went into effect — and George Washington became the first President. Why we don’t celebrate this as the birth of our nation is beyond me, but March 4 was the date of the presidential inauguration right up until 1933, when it was moved to its current January 20 date by Constitutional Amendment — number 20, to be exact, or XX if you prefer.

  2. How much gravity, in g, do astronauts on the ISS experience?

You’re probably thinking Zero, aren’t you? Nope. The gravitational pull up there — as in toward the center of the Earth — is about 0.89g, or almost what you’d experience on the surface of the Earth itself.

“But they’re floating around up there!” you may say.

Yes, they are, sort of, but they’re not really floating. They’re falling in the same way that you fall when you’re in a rollercoaster or other thrill ride that makes a sudden and very steep drop. It feels like you’re floating, but that’s because you and the seat are accelerating downward together, so the seat isn’t pushing up against you to counteract the pull of gravity.

Drop faster than 1g, and you’ll rise out of your seat — but you’re still in Earth’s gravity.
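If you’re curious where a figure like 0.89g comes from, here’s a rough back-of-the-envelope sketch (the altitude is my own round number, since the station’s orbit varies, and the result is approximate):

```python
# Back-of-the-envelope estimate of gravity at ISS altitude: Newtonian
# gravity falls off with the square of the distance from Earth's center.
G_SURFACE = 9.81        # m/s^2, gravitational acceleration at the surface
R_EARTH_KM = 6371       # mean radius of the Earth, km
ISS_ALTITUDE_KM = 400   # a typical ISS altitude, km (assumption)

g_iss = G_SURFACE * (R_EARTH_KM / (R_EARTH_KM + ISS_ALTITUDE_KM)) ** 2
print(f"{g_iss:.2f} m/s^2, or about {g_iss / G_SURFACE:.2f} g")  # ~8.69 m/s^2, ~0.89 g
```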

  3. When there’s a Full Moon in the sky, how much of the Moon can we actually see?

Did you say “All of it?” Nice answer, but wrong. We’re only seeing half of it, of course, and that’s the near side. We never see the far side, but we actually do see more than just half of the Moon over time.

In fact, over time we can see up to 60% of the Moon’s surface thanks to libration, which is a slight tilt and wobble in the Moon’s apparent motion. It wobbles along its east-west axis as it moves between perigee and apogee.

The former is when the Moon is closest to Earth during its orbit, and the latter is when it’s at its farthest. Between those two points, the Moon appears turned a bit farther around, about 8 degrees in either direction, showing a bit of its backside. Cheeky!

Likewise, the Moon “nods” north and south. This happens for the same reason that the Earth has seasons — the Moon’s orbital plane is tilted about 5 degrees relative to Earth’s. Also, the Moon’s equator is tilted 1.5 degrees relative to the plane of the ecliptic, which is defined as the plane containing the Sun and the Earth’s orbit, meaning that the Earth itself is inclined zero degrees to it.

These lunar tilts add up to 6.5 degrees, which is roughly how far past its northern and southern limbs we can peek, depending on where the Moon is in its orbit.

So add it all up — 2 x 8 degrees plus 2 x 6.5 degrees, or 16 plus 13 degrees, and we get 29 degrees, more or less. Add that to the 180 we already see to get 209, divide by 360, and that’s about 58% of the surface we can see over time, give or take.
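If you want to double-check that arithmetic, here’s a quick sketch using the rough figures above (they’re approximations, not precise astronomical constants):

```python
# Rough estimate of how much of the Moon's surface we can see over time,
# using the approximate libration figures quoted above.
east_west = 2 * 8       # degrees of east-west wobble, both directions
north_south = 2 * 6.5   # degrees of north-south nodding, both directions

visible_degrees = 180 + east_west + north_south   # near side plus the extra peeks
fraction = visible_degrees / 360
print(f"{visible_degrees:g} degrees -> about {fraction:.0%} of the surface")  # ~58%
```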

  4. So how much of the Moon do we see when the phase is a Half Moon?

You’re probably thinking “Half of the half we see, so one quarter.” Well, that’s the part we can see that’s lit — but have you ever realized that you can still see the whole near side of the Moon no matter what the phase, even if it’s a New Moon?

This is because the Earth itself has an albedo of 30 to 35%, varying due to cloud cover. This number indicates how much of the Sun’s light it reflects.

Under most circumstances, there’s enough light coming off of the Earth to illuminate the dark parts of the Moon at least enough that they appear as a dark shadow against the night sky. The effect is much more obvious against a very starry background, because there will be a “hole” in the stars where the rest of the Moon is.

If you live anywhere near the eastern shore of the Pacific, this effect is particularly pronounced, since there will be a good amount of sunlight reflecting off of the water whether it’s under cloud cover or not.

The Moon’s albedo is 12%, but it’s getting hit by a lot of light from the Sun — and this is why you can see the entire near side of a New Moon during the day. Sure, it’s fairly pale, but it’s there. Just look up in the sky away from the Sun and ta-da!

  5. One last question for the Americans: What is the official language of the United States?

Yep. Contrary to what way too many people think, the official language of the United States is not English. In fact, it’s… nothing. We as a country do not have an official language. Some states have declared official languages, while a number have not.

Not counting territories, we have 19 states with no official language, although some languages do have special status, like Spanish in New Mexico and French in Louisiana. The District of Columbia provides for equal access to all, whether they speak English or not.

Twenty states have declared for English only, with two states (Arizona and Massachusetts) subsequently passing new English-only laws after previous laws were declared unconstitutional. My home state, California, passed an English-only initiative in 1986, when the state was much more conservative. However, for all practical purposes this isn’t really enforced, at least not in any government agency.

There are three states that have English as an official language in addition to others: Hawaii, with Hawaiian; Alaska, with over 20 indigenous languages recognized; and South Dakota, with English and Sioux. Okay, I’ll include Puerto Rico, with English and Spanish.

By the way, when the Colonies declared their independence from England, they also considered a full linguistic split as well, and there were many proponents of making Hebrew the official language of the United States.

How did you do, and how many tricky questions or errors in “common knowledge” do you know? Let me know in the comments!

Wednesday Wonders: How the world almost ended once

I happen to firmly believe that climate change is real, it is happening, and humans are contributing to and largely responsible for it, but don’t worry — this isn’t going to be a political story. And I’ll admit that I can completely understand some of the deniers’ arguments. No, not the ones that say that “global warming” is a hoax made up so that “evil liberals” in government can tax everyone even more. The understandable arguments are the ones that say, “How could mere humans have such a big effect on the world’s climate?” and “Climate change is cyclic and will happen with or without us.”

That second argument is actually true, but it doesn’t change the fact that our industrialization has had a direct and measurable impact in terms of more greenhouse gases emitted and the planet heating up. Also note: Just because you’re freezing your ass off under the polar vortex doesn’t mean that Earth isn’t getting hotter. Heat just means that there’s more energy in the system and with more energy comes more chaos. Hot places will be hotter. Cold places will be colder. Weather in general will become more violent.

As for the first argument, that a single species, like humans, really can’t have all that great an effect on this big, giant planet, I’d like to tell you a story that will demonstrate how wrong that idea is, and it begins nearly 2.5 billion years ago with the Great Oxygenation Event.

Prior to that point in time, the Earth was mostly populated by anaerobic organisms — that is, organisms that do not use oxygen in their metabolism. In fact, oxygen is toxic to them. The oceans were full of bacteria of this variety. The atmosphere at the time was about 30% carbon dioxide and close to 70% nitrogen, with perhaps a hint of methane, but no oxygen at all. Compare this to the atmosphere of Mars today, which is 95% carbon dioxide, 2.7% nitrogen, and less than 2% other gases. Side note: This makes the movie Mars Attacks! very wrong, because a major plot point was that the Martians could only breathe nitrogen, which is currently 78% of our atmosphere but almost absent in theirs. Oops!

But back to those anaerobic days and what changed them: A type of microbe called cyanobacteria (often referred to as blue-green algae) figured out the trick to photosynthesis — that is, producing energy not from food, but from sunlight and a few neat chemical processes. (Incidentally, this was also the first step on the evolutionary path to eyes.) Basically, these microscopic organisms would take in water and carbon dioxide, use the power of photons to break some bonds, and then unleash the oxygen from both of those molecules while using the remaining carbon and hydrogen.

At first, things were okay because oxygen tended to be trapped by organic matter (any carbon-containing compound) or iron (this is how rust is made), and there were plenty of both floating around to do the job, so both forms of bacteria got along fine. But there eventually came a point when there were not enough oxygen traps, and so things started to go off the rails. Instead of being safely sequestered, the oxygen started to get out into the atmosphere, with several devastating results.

First, of course, was that this element was toxic to the anaerobic bacteria, and so it started to kill them off big time. They just couldn’t deal with it, so they either died or adapted to a new ecological niche in low-oxygen environments, like the bottom of the sea. Second, though, and more impactful: All of this oxygen wound up taking out whatever atmospheric methane was left and converting it into carbon dioxide. The former is a more powerful greenhouse gas, and it had been keeping the planet warm; the latter was and still is less effective. The end result of the change was a sudden and very long ice age known as the Huronian glaciation, which lasted for 300 million years — the oldest and longest ice age to date. The result of this was that most of the cyanobacteria died off as well.

So there you have it. A microscopic organism, much smaller than any of us and without any kind of technology or even intelligence to speak of, almost managed to wipe out all life forms on the planet and completely alter the climate for tens of millions of years, and they may have tipped the balance in as little as a million years.

We are much, much bigger than bacteria — about a million times, actually — and so our impact on the world is proportionally larger, even if they vastly outnumbered our current population of around 7.5 billion. But these tiny, mindless organisms managed to wipe out most of the life on Earth at the time and change the climate for far longer than humans have even existed.

Don’t kid yourself by thinking that humanity cannot and is not doing the same thing right now. Whether we’ll manage to turn the planet into Venus or Pluto is still up for debate. Maybe we’ll get a little of both. But to try to hand-wave it away by claiming we really can’t have that much of an impact is the road to perdition. If single-celled organisms could destroy the entire ecosystem, imagine how much worse we can do with our roughly 30 to 40 trillion cells, and then do your best to not contribute to that destruction.

Theatre Thursday: Fact and fiction

About six hundred and eight years ago, Henry V was crowned king of England. You probably know him as that king from the movie with Kenneth Branagh, or the BBC series aired under the title The Hollow Crown.

Either way, you know him because of Shakespeare. He was the king who grew up in Shakespeare’s second tetralogy. Yes, that’s a set of four plays, and since it was his second, Shakespeare sort of did the Star Wars thing first: he wrote eight plays covering the English dynastic struggles that culminated in the Wars of the Roses.

And, much like Lucas, he wrote the original tetralogy first, then went back and did the prequels. Richard II, Henry IV, Part 1, Henry IV, Part 2 and Henry V were written after but happened before Henry VI, Part 1, Henry VI, Part 2, Henry VI, Part 3 and Richard III.

Incidentally, Henry VI, Part 1, is famous for having Joan of Arc (aka Joan la Pucelle in the play) as one of the antagonists. Funny thing is, that name wasn’t slander on Shakespeare’s part. That’s what she preferred to call herself.

Meanwhile, Richard III, of course, is the Emperor Palpatine of the series, although we never did get a Richard IV, mainly because he never existed in history. Well, not officially. Richard III’s successor was Henry VII, and Shakespeare never wrote about him, either, although he did gush all over Henry VIII, mainly because he was the father of the Bard’s patron, Elizabeth I. CYA.

If you’ve ever seen the film My Own Private Idaho, directed by Gus Van Sant and starring River Phoenix and Keanu Reeves, then you’ve seen a modern retelling of the two parts of Henry IV.

Now when it comes to adapting true stories to any dramatic medium, you’re going to run into the issue of dramatic license. A documentary shouldn’t have this problem and shouldn’t play with the truth, although it happens. Sometimes, it can even prove fatal.

But when it comes to a dramatic retelling, it is often necessary to fudge things, sometimes a little and sometimes a lot. It’s not at all uncommon for several characters to be combined into a composite just to make for a cleaner plot. After all, is it that big of a difference if, say, King Flagbarp IX in real life was warned about a plot against him in November by his chamberlain Norgelglap, but the person who told him the assassin’s name in February was his chambermaid Hegrezelda?

Maybe, maybe not, but depending on what part either of those characters plays in the rest of the story, as well as the writer’s angle, they may both be combined as Norgelglap or as Hegrezelda, or become a third, completely fictionalized character, Vlanostorf.

Time frames can also change, and a lot of this lands right back in Aristotle’s lap. He created the rules of drama long before hacks like the late Syd Field tried (and failed), and Ari put it succinctly. Every dramatic work has a beginning, a middle, and an end, and should have unity of place, unity of time, and unity of action.

A summary of the last three is this, although remember that Aristotle was writing about the stage. For film and video, your mileage will vary slightly.

The story takes place in one particular location, although that location can be a bit broad. It can be the king’s castle, or it can be the protagonist’s country.

It should take place over a fairly short period of time. Aristotle liked to keep it to a day, but there’s leeway, and we’ve certainly seen works that have taken place over an entire lifetime — although that is certainly a form of both unity of time and unity of place, if you consider the protagonist to be the location as well.

Unity of action is a little abstract, but in a nutshell it’s this: Your plot is about one thing. There’s a single line that goes from A to Z: What your protagonist wants, and how they get it.

Now, my own twist on the beginning, middle, and end thing is that this is a three act structure that gives us twenty-seven units. (Aristotle was big on 5 acts, which Shakespeare used, but that’s long since fallen out of fashion.)

Anyway, to me, we have Act I, II, and III. Beginning, middle, and end. But each of those has its own beginning, middle and end. So now we’re up to nine: I: BME; II: BME; III: BME.

Guess what? Each of those subunits also has a beginning, middle, and end. I’m not going to break that one down further than this. The beginning of the beginning, Act I: B, has its own BME, repeat eight more times.

The end result is 3 x 3 x 3, or twenty-seven.

And that’s my entire secret to structure. You’re welcome.
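Purely as an illustration (the labels are my own shorthand, not any official screenwriting terminology), here’s that 3 x 3 x 3 grid spelled out:

```python
# The 27 structural units: three acts, each with a beginning, middle, and
# end, and each of those with its own beginning, middle, and end.
acts = ["Act I", "Act II", "Act III"]
parts = ["beginning", "middle", "end"]

units = [f"{act} / {outer} / {inner}"
         for act in acts
         for outer in parts
         for inner in parts]

print(len(units))   # 27
print(units[0])     # Act I / beginning / beginning
print(units[-1])    # Act III / end / end
```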

But because of these little constraints, and because history is messy, it’s necessary to switch things up to turn a true story into a “based on true events” work. Real life doesn’t necessarily have neat beginnings, middles, and endings. It also doesn’t necessarily take place in one spot, or in a short period of time.

So it becomes the artist’s job to tell that story in a way that is as true to reality as possible without being married to the facts.

It is also possible to go right off the rails with this, though, and that’s one of the reasons I totally soured on Quentin Tarantino films. It’s one thing to fudge facts a little bit, but when he totally rewrites history in Inglourious Basterds, ignores historical reality in Django Unchained, and then curb stomps reality and pisses on its corpse in Once Upon a Time in Hollywood, I’m done.

Inglorious Misspelling is a particularly egregious example because the industry does a great disservice in selling false history to young people who, unfortunately, aren’t getting the best educations right now.

Anecdotal moment: A few years back, an Oscar-winning friend of mine had a play produced that told the story of the 442nd Infantry Regiment. It was a unit composed almost entirely of second-generation Japanese Americans during WWII, and joining it was the alternative given to going to an internment camp.

Of course, the U.S. government, being racist, couldn’t send them to the Pacific Theatre to fight, so it sent them to Europe, and a lot of the play takes place in Italy, where the regiment was stationed. And, at intermission, my playwright friend heard two 20-something audience members talking to each other. One of them asked, “What was the U.S. even doing in Italy in World War II?” and the other just shrugged and said, “Dunno.”

So, yeah. If you’re going to go so far as to claim that Hitler was killed in a burning movie theater before the end of the war, just stop right there before you shoot a frame. Likewise with claiming that the Manson murders never happened because a couple of yahoos ran into the killers first.

Yeah, Quentin, you were old, you were there, you remember. Don’t stuff younger heads with shit.

But I do digress.

In Shakespeare’s case, he was pretty accurate in Henry V, although in both parts of Henry IV, he created a character who was both one of his most memorable and one of his more fictional: Sir John Falstaff. In fact, the character was so popular that, at the Queen’s command, Shakespeare gave him his own spinoff, The Merry Wives of Windsor. Hm. Shades of Solo in the Star Wars universe?

Falstaff never existed in real life, but was used as a way to tell the story of the young and immature Henry (not yet V) of Monmouth, aka Prince Hal.

Where Shakespeare may have played more fast and loose was in Richard III. In fact, the Bard vilified him when it wasn’t really deserved. Why? Simple. He was kissing up to Elizabeth I. She was a Tudor, daughter of Henry VIII who, as mentioned previously, was the son of Henry VII, the king who took over when Richard III lost the Wars of the Roses.

The other time that Shakespeare didn’t treat a king so well in a play? King John — which I personally take umbrage at, because I’m directly descended from him. No, really. But the idea when Willie Shakes did that was to draw a direct contrast to how Good Queen Bess did so much better in dealing with Papal interference. (TL;DR: He said, “Okay,” she said, “Eff off.”)

Since most of my stage plays have been based on true stories, I’ve experienced this directly many times, although one of the more interesting cases came with the production of my play Bill & Joan, because I accidentally got something right.

When I first wrote the play, the names of the cops in Mexico who interrogated William S. Burroughs were not included in the official biography, so I made up two fictional characters, Enrique and Tito. And so they stayed like that right into pre-production in 2013.

Lo and behold, a new version of the biography of Burroughs I had originally used for research came out, and I discovered two amazing things.

First… I’d always known that Burroughs’ birthday was the day before mine, but I suddenly found out that his doomed wife actually shared my birthday. And the show happened to run during both dates.

Second… the names of the cops who interrogated him were finally included, and one of them was named… Tito.

Of course, I also compressed time, moved shit around, made up more than a few characters, and so forth. But the ultimate goal was to tell the truth of the story, which was: Troubled couple who probably shouldn’t have ever gotten together deals with their issues in the most violent and tragic way possible, and one of them goes on to become famous. The other one dies.

So yes, if you’re writing fiction it can be necessary to make stuff up, but the fine line is to not make too much stuff up. A little nip or tuck here and there is fine. But, outright lies? Nah. Let’s not do that.

Momentous Monday: Relativity

Almost 470 years ago, in 1553, a man named John Lyly (or Lilly, Lylie, or Lylly) was born in Kent, England, the grandson of a Swedish immigrant. A contemporary of Shakespeare’s, he was kind of a big deal in his day. He was an author and playwright of some renown, and while he failed in his attempt to be appointed the Queen’s Master of the Revels, he did serve in Parliament as well as serve Queen Elizabeth I for many years.

Around two hundred and eighty years after that, somewhere in Massachusetts, a child was born. He grew up to become a man, and he moved west. It was the era of Manifest Destiny in America, a dark time in our history. That child was John Lyly’s seventh great-grandson.

At least we weren’t using the term “Great White Hope.” Yet. To be honest, we should have used the term “Great White Death.” But, among the death there was still hope, and that child born in Massachusetts who grew up to be a man put his ideals into action.

Along with a great wave of German immigrants to America, all of whom despised slavery, this man went west, crossed the Missouri river and landed in Kansas. For me, the movie How the West Was Won is a family documentary.

When he arrived in Kansas, he helped found the town of Burlington, was one of two attorneys in the town (and also one of two blacksmiths, the other of whom was the other attorney), mayor of the town at one point, and a proud member of the Republican Party.

Yeah… quite the opposite of my politics now, or so you’d think. Except that, before the Civil War and up until FDR, the Republicans were the liberal party in America, and the Democrats were regressive.

That child who grew up to be a great man moved west in order to help bring Kansas into the union as a free (i.e., non-slave) state. And that child, who grew up to be a great man, was my great-great-grandfather, Silas Fearl.

Since he was Lyly’s seventh great-grandson, that makes me Lyly’s eleventh. (It doesn’t seem to add up at first, but don’t forget that I have to add in the three generations between me and Silas, plus myself.)
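For anyone who wants to sanity-check that generational math, here’s a tiny sketch (the counting rule is my own phrasing of the usual genealogical convention):

```python
# An "Nth great-grandchild" sits N + 2 generations below the ancestor.
lyly_to_silas = 7 + 2   # Silas was Lyly's 7th great-grandson: 9 generations down
silas_to_me = 4         # a great-great-grandson is 4 generations down
generations_below_lyly = lyly_to_silas + silas_to_me   # 13
print(generations_below_lyly - 2)   # 11 -> an 11th great-grandson
```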

Fast-forward to nearly two-hundred years after Silas was born, and the evolution of the internet, and I am in touch with people who share my ancestry with him. It makes us very distant relatives, to be sure, but it means that we have a very definite connection, some by blood and some by marriage.

And this is the reason for this post. One of those third or fourth cousins, via Silas Fearl by blood, posted some pictures of her kids, and when I looked at them the thing that most struck me was this. “Wow. This person and I have an ancestor in common.” And, in fact, looking at these faces, I could see certain elements of my own face, of my dad’s, and of my grandpa’s, and of the great uncles I managed to meet, and of the people in a family portrait taken when my father’s father was an infant.

Even so many steps apart on the branches of humanity’s family tree, I could see some of me and my immediate family in them… and across the distance of never having met and Facebook, my first reaction was an enormous empathy. “This is a bit of me, and I want to protect it from everything bad forever.”

And, in a lot of ways, I have to suspect that this is just an illusion, an effect created by the empirical proof I have seen that means “You and I are related to each other.” That, and the evolutionary and biological forces that make us most protective of those who share our DNA.

Except that… I’ve felt this same way toward people who are absolutely not related, but I’ve still seen myself in them… and this is when I realize the harm that intellect can do to our species.

Intellect relies on so-called facts that it has been told. So, “Hey, you and this person are related” is a fact that ropes emotions into relating to the news. So… subject, object, emotion, bond.

In reality, anybody whose picture I see online is related to me; it’s just not as straightforward as “You and this person have the same great-great-grandfather.” I can trace part of my ancestry back to King Henry II of England and his wife, Eleanor of Aquitaine — The Lion in Winter is, for me, another unintended family documentary.

By that connection, I’m related to most of the population of England and the eastern US. Now, go back through them to another common ancestor, Charlemagne, and I’m related to most western Europeans and Americans — if you expand the definition of “America” to include all countries on both continents, north and south.

And, if you go back far enough to the last point in humanity’s evolutionary history at which the family tree’s branches split, then you could honestly say that everybody you have ever met is related to you and shares your DNA and your blood to some degree.

You should be able to recognize your features in them no matter their race, gender identity, sexual orientation, or religion. You should be able to see their humanity, and yours, in their faces.

And, go back far enough, and we are related to all animal life on this planet. Go back a little farther, and we are related to all life not only on this planet, but in the universe. Go back far enough and follow the laws of physics, and all of us, everyone, everywhere, were once the exact same bit of incredibly condensed matter.

The universe is the mother of us all, and all divisions are illusory.

I’m reminded of some old Beatles lyrics at the moment. “I am he as you are he as you are me and we are all together.” (And I had to look that up. It’s from I Am the Walrus and not Come Together.) Anyway, that’s a pretty good summation of my realization.

Once we put human history on a cosmic scale, our differences and squabbles become absolutely meaningless. All of us were born from the stars. All of us are in this together. Let’s act like it…

Image: The author’s great-grandparents and their four sons, including the author’s paternal grandfather.

A company town

Despite its size, Los Angeles is a company town, and that company is entertainment — film, television, and music, and to a lesser extent gaming and internet. So, growing up here, seeing film crews and running into celebrities all over the place was always quite normal. Hell, I went to school with the kids of pretty big celebrities and never thought much of it. “Your dad is who? Whatever.”

It looks like that company is finally coming back to life after fifteen months of being semi-dormant. It’s tentative, of course, and we may wind up locking down again, especially if a vaccine-resistant variant suddenly pops up. But, for the moment, movie theaters and live venues are reopening, along with the restaurants and other businesses that survived.

But here’s one thing I don’t think a lot of non-locals understand: None of the major studios are actually in Hollywood. How the city of Hollywood — which is where I was actually born — became conflated with the movies is a very interesting story. Once upon a time, there were some studios there. Charlie Chaplin built his at La Brea and Sunset in 1917. It was later owned by Herb Alpert, when it was A&M Studios and produced music. Currently, it’s the location of the Jim Henson Company. The Hollywood Hills were also a popular location for celebrities to live, and a lot of the old apartment buildings in the city were originally designed for young singles who worked in the industry.

Come to think of it, they still serve that purpose, although given the cost of rent in this town, a lot of those studio units are cramming in two tenants.

The one thing that Hollywood did have in abundance: Movie premieres, and that’s still the case to this day. The Chinese, The Egyptian, and the El Capitan are perennial landmarks, and the Boulevard itself is quite often still closed down on Wednesdays for red carpet openings. Although Broadway downtown also boasts its own movie palaces from the golden age of cinema, it was always Hollywood Boulevard that had the great grand openings. It’s also still home to the Pantages, which is the biggest live theater venue outside of downtown, although they generally only do gigantic Broadway-style musicals. (Side note on the Chinese Theater — although it’s technically called the TCL Chinese because of its current owners, nobody refers to it that way, and you’re still more likely to hear it called what it always was: Grauman’s Chinese Theater. Want to sound like a local? That’s how you do it. You’re welcome.)

There is one Hollywood tradition that does not date from the golden age of cinema, though, and it might surprise you. The Hollywood Walk of Fame wasn’t proposed until the 1950s, and construction on it didn’t begin until 1960 — long after all of the movie studios had left the area.

In case you’re wondering where those studios went, a number of them are in the oft-derided Valley: Universal in Universal City (they like to call themselves “Hollywood” but they’re not), Warner Bros. in Burbank, Disney in Burbank and Glendale, and DreamWorks Animation SKG in Glendale (across from Disney Animation!) all come to mind — and damn, I’ve worked for three out of four of them. On the other side of the hill, in L.A. proper, Sony is in Culver City, 20th Century Fox is in Century City (which was named for the studio), and Paramount is in L.A. proper, right next to RKO, which really isn’t doing much lately, both due south of Hollywood and right behind the Hollywood Forever Cemetery — which isn’t in Hollywood either, but which has a large number of dead celebrities. I think that covers most of the majors. YouTube Studios is in Playa del Rey, on the former site of the Hughes helicopter factory that also happens to be right below the university I went to for film school, Loyola Marymount.

Like I said, company town.

The other fun part about growing up here is all of the film locations that I see every day, and there are tons. Ever see Boogie Nights? Well, most of that film was basically shot within a five-mile radius of where I grew up, with only a few exceptions. Dirk Diggler’s fancy new house once he became a porn star? Yeah, my old hood. Location of the club where Burt Reynolds’s character finds Mark Wahlberg’s character? I took music lessons a few blocks away from there. Parking lot where Dirk is mistakenly gay-bashed? Pretty close to the public library where I fell in love with reading.

Remember The Brady Bunch or the movies? Well, that house is only a couple of miles away from where I live now. The OG bat cave? Let me take you to Griffith Park. If you’ve ever seen Myra Breckenridge (you should if you haven’t), the place where Myra dances in the opening is right next to where Jimmy Kimmel does his show now and two doors down from the now Disney-owned El Capitan.

The Loved One (an amazing movie) — Forest Lawn Glendale, where I happen to have at least four ancestors buried. Xanadu? The major setting was the Pan Pacific Auditorium, which was a burned-down wreck in my day, but it’s where my dad used to go on date night to roller skate. Go to the Vista Theatre? It sits on the site where D.W. Griffith built one of his biggest sets for Intolerance, his “mea culpa” for making The Birth of a Nation.

I’m not even going to get into how many times the complex I live in has been used for various epic TV shoots (which is a lot) or, likewise, how the area in NoHo I worked in is used by everybody, from YouTubers to major studios. Although, I can tell you that having to put up with film crews and their needs is always a major pain in the ass, especially when all the nearby parking vanishes. That’s right — there’s really no glamor in show biz outside of that red carpet.

But I guess that’s the price of admission for growing up and living in a company town and, honestly, I’ve never had a single adult job that wasn’t related to that company ever. (We won’t count my high school jobs as wire-puller for an electrical contractor and pizza delivery drone.)

Otherwise, though — yep. Whether it’s been TV, film, theater, or publishing, I’ve never not worked in this crazy stupid industry that my home town is host to. And I really wouldn’t have it any other way. What? Wait tables? Never. Although sharing my home town with tourists is a distinct possibility. I love this place. A lot. And you should too, whether you’re a visitor or a transplant. Welcome!

Some Flag Day birthdays of important people

In the United States, June 14 is Flag Day, which commemorates the adoption, by the Second Continental Congress on that date in 1777, of the official flag of the rebelling colonies. This is the familiar banner of 13 alternating red and white stripes, and a blue field with a circle of 13 white stars in it.

However, it’s important to remember that while it came after the Declaration of Independence, it also came before the country won its independence, so it started out as the battle flag of a rebellious territory. The only reason it finally became the first flag of the U.S. was because we won that war.

That’s an important distinction to make when it comes to flags, even if some people forget and have to be reminded. It’s also probably not true that Betsy Ross created that first flag. Rather, this was propaganda created nearly a century later to benefit the guy who created the famous painting of… Betsy Ross creating the first flag.

Hm. I wonder if Bob Ross is related? “And let’s paint a happy little rebel right here…”

But Flag Day as an official holiday was not declared until 1916, by Woodrow Wilson, U.S. President and noted racist dick. This was the year before the U.S. entered World War I, by the way, although it was still called The Great War at the time because Germany hadn’t come back to release the sequel and the special edition of the first one, which involved a lot of retconning.

Now, it’s probably just a coincidence, but quite a lot of babies born on Flag Day would have been conceived because their parents fucked on Labor Day weekend — no, really, they’re about 280 days apart — and although that’s just the average, it still gives us the image of Labor Day turning into labor day on Flag Day.
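If you want to check that back-of-the-envelope figure, here’s a quick sketch (the year is my arbitrary pick; depending on the year, the gap lands somewhere in the 280s):

```python
from datetime import date

# Days from a recent Labor Day (first Monday of September) to the
# following Flag Day (June 14). Year chosen arbitrarily for illustration.
labor_day = date(2021, 9, 6)   # Labor Day 2021
flag_day = date(2022, 6, 14)   # Flag Day 2022

print((flag_day - labor_day).days)  # 281 -- right around that ~280-day average
```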

Which brings us to the topic at hand: People born on Flag Day who have made significant contributions to the world, ordered by date of birth.

  1. Harriet Beecher Stowe (1811) Author of Uncle Tom’s Cabin (1852), a somewhat heavy-handed and patronizing work that was sympathetic toward the plight of American slaves when it was written — a terrible example of YT people missing the target now, but incredibly progressive for its time.
  2. Pierre Salinger (1925) American journalist, author, and politician, press secretary for JFK and LBJ, briefly an interim appointed senator for California, and campaign manager for RFK in 1968. Later, a reporter for ABC News. Notably, he never lied while he was press secretary.
  3. Ernesto “Che” Guevara (1928): Argentine Marxist revolutionary, poster child for generations of college students who think they’re Marxists and don’t read his story — he did a lot of good, but was not as good as his fans think. Basically, kind of like everyone else.
  4. Marla Gibbs (1931): African-American actress, made famous by her role as George Jefferson’s maid Florence in the 1975 TV series The Jeffersons. She was one of many actors in the 70s and 80s who elevated black people in American mass media, presenting them as people who were not just pimps and junkies but, rather, who were just like everyone else.
  5. Jerzy Kosiński (1933): Polish-born immigrant to America, writer. Best known for the novel Being There and the movie based on it, about a man who is so simple and who grew up so isolated from the real world that he becomes an everyman, a blank slate that people project their hopes and fears onto. While he has absolutely no real personality, empathy, education, or people skills, his fans still think he’s the greatest thing to ever happen. Hm. Sound familiar? The only difference is that Kosiński’s Chance the Gardener character was totally benign and harmless.
  6. Steny Hoyer (1939): A Democratic congressman from Maryland, former House Minority Whip and current House Majority Leader. In his last election in 2018, he defeated his Republican opponent, William Devine III, 70.3% to 27.1%.
  7. Boy George (1961): English singer, songwriter, DJ, and fashion designer who became famous for bringing gender-bending and sexual ambiguity to pop music in the early 1980s. He was largely responsible for making Boomers clutch their pearls as their Gen-X kids latched onto the music and style. OMG, Boy George wore make-up and flowing outfits that could have been gowns or muumuus and, most importantly, pissed off old people by his mere ambiguous existence.

So there are seven significant people I could think of who were born on this day. There are certainly a lot of others who may be lesser known or have done less, but I can’t think of any more important, at least not in the modern age.

Happy birthday to these seven, and happy Flag Day to my American readers.

“War is not healthy for children and other living things.” Except…

This is another one of my older posts that keeps getting new traffic over and over, and I don’t know why. I thought I’d give it another boost for new readers to discover.

The title of this article comes from an incredibly iconic poster that was created during the Vietnam War in the 1960s. Specifically, it was created by printmaker Lorraine Schneider in 1967, and was inspired by her concern that her oldest son would be drafted and die in a war that many Americans considered unnecessary.

However, the Vietnam War is a strange exception and beginning point for a tidal change in American wars. Post-Vietnam, the only benefits wars seem to have given us are more efficient (although not cheaper) ways to kill people, and that sucks. (Incidentally, the Korean War is technically not a war. It also technically never ended.)

But… as weird as it may sound, a lot of the major wars prior to Vietnam actually gave American society weird and unexpected benefits. Yeah, all of that death and killing and violence were terrible, but like dandelions breaking through urban sidewalks to bloom and thrive, sometimes, good stuff does come in the aftermath of nasty wars. Here are five examples.

The American Revolution, 1775-1783

The Benefit: The First Amendment (and the rest of the Constitution)

By the beginning of the 18th century, Europe was having big problems because monarchs and the Church were all tied up together, the state dictated religion, and so on. It came to an extreme with Britain’s Act of Settlement in 1701, which barred any Catholic from ever taking the throne. The end result of this was that the next in line turned out to be the future George I, son of Sophia. Sophia, however, was the Electress of Hanover or, in other words, German. Queen Victoria was a direct descendant of George I, and spoke both English and German. In fact her husband, Prince Albert, was German.

But the net result of all the tsuris over the whole Catholic vs. Protestant thing in Europe, on top of suppression of the press by governments, led to the Founders making sure to enshrine freedom of speech and the wall between church and state in the very first Amendment to the Constitution, before anything else. To be fair, though, England did start to push for freedom of the press and an end to censorship in the 17th century, so that’s probably where the Founders got that idea. But the British monarch was (and still is) the head of the Church of England, so the score is one up, one down.

The War of 1812, 1812-1815

The Benefit: Permanent allegiance between the U.S. and Britain

This was basically the sequel to the American Revolution, and came about because of continued tensions between the two nations. Britain had a habit of capturing American sailors and forcing them into military duty against the French, for example, via what were vernacularly called “press gangs.” They also supported Native Americans in their war against the fairly new country that had been created by invading their land. So again, one up, one down. And the second one, which is the down vote to America, is rather ironic, considering that the Brits were basically now helping out the people whose land had been stolen by… the first English settlers to get there.

And, honestly, if we’re really keeping score, the U.S. has two extra dings against it in this one: We started it by declaring war — even if there were legitimate provocations from Britain — and then we invaded Canada.

But then a funny thing happened. The U.S. won the war. By all rights it shouldn’t have. It was a new country. It really didn’t have the military to do it. It was going up against the dominant world power of the time, and one that would soon become an empire to boot.

The war technically ended with the Treaty of Ghent in 1814, but there was still the Battle of New Orleans to come after that, and it happened because news of the end of the war hadn’t gotten there yet. In that one, the U.S. kicked Britain’s ass so hard that they then basically said, “Remember all the concessions we made in that treaty? Yeah, not. LOL.”

In a lot of ways, the war was really a draw, but it did get the British to remove any military presence from the parts of North America that were not Canada, and opened the door to American expansionism across the continent. It also helped to establish the boundary between the U.S. and Canada, which is to this day the world’s longest undefended border. Finally, it cemented the relationship of the U.S. and Britain as allies and BFFs, which definitely came in handy in the 20th century during a couple of little European dust-ups that I’ll be getting to shortly.

The American Civil War, 1861-1865

The Benefit: Mass-manufactured bar soap

Now in comparison to the first two, this one may seem trivial and silly, but it actually does have ramifications that go far beyond the original product itself. And it doesn’t matter whether you’re a fan of bar soap now or go for the liquid kind (my preference), because both were really born out of the same need and process.

Once upon a time, soap-making was one of the many onerous tasks that the women of the house were expected to do, along with cleaning, cooking, sewing, canning, laundry, ironing, taking care of the menfolk (husbands and sons, or fathers and brothers), and generally being the literal embodiment of the term “drudge.” But soap-making was so arduous a task in terms of difficulty and general nastiness that it was something generally done only once or twice a year, basically making enough to last six or twelve months.

To make soap involved combining rendered fat and lye. (Remember Fight Club?) The fat came easy, since people at the time slaughtered their own animals for food, so they just ripped it off of the cow or pig or whatever meat they’d eaten. The lye came from leaching water through ashes from a fire made from hardwood, believe it or not, and since wood was pretty much all they had to make fires for cooking, ashes were abundant. Yes, I know, it’s really counter-intuitive that something so caustic could be made that way, but there you go. The secret is in the potassium content of the wood. Fun fact: the terms hard- and softwood have nothing to do with the actual wood itself, but rather with how the trees reproduce. (And I’ll let your brain make the joke so I don’t have to.)

So soap was a household necessity, but difficult to make. Now, while William Procter and James Gamble started to manufacture soap in 1838, it was still a luxury product at the time. It wasn’t until a lot of men went to war in 1861 that women had to run homesteads and farms on top of all of their other duties, and so suddenly manufactured soap started to come into demand. Especially helpful was Procter and Gamble providing soap to the Union Army, so that soldiers got used to it and wanted it once they came home.

Obviously, easier access to soap helped with hygiene but, more importantly, the industry advertised like hell, and from about the 1850s onward, selling soap was big business. There’s a reason that we call certain TV shows “soap operas,” after all, and that’s because those were the companies that sponsored the shows.

World War I, 1914-1918 (U.S. involvement, 1917-1918)

The Benefit: Women’s suffrage and the right to vote

It’s probably common knowledge — or maybe not — that two big things that happened because of World War I were an abundance of prosthetic limbs and advances in reconstructive and plastic surgery. However, neither of these were really invented because of this conflict, which “only” led to improved surgical techniques or better replacement limbs.

The real advance is sort of an echo of the rise of soap via the Civil War, in the sense that this conflict freed women from one nasty restriction: having no say in government. And, as usually happens when the boys march off to do something stupid, the women have to take up the reins at home, and sometimes this gets noticed. It certainly did in the case of WW I, and suffragettes wisely exploited the connection between women and the homefront war effort. Less than two years after the conflict officially ended, women were given the right to vote on August 26, 1920, with the passage of the 19th Amendment.

Hey! Only 144 years too late. Woohoo!

World War II, 1939-1945 (U.S. involvement, 1941-1945)

The Benefit: The rise of the American middle class

As World War II was starting to move to an end, the Servicemen’s Readjustment Act of 1944 was passed into law. It was designed to assist returning service members via things like creating the VA hospital system, providing subsidized mortgages, assisting with educational expenses, and providing unemployment benefits. It was also a direct reaction to the less-than-fantastic reception returning veterans of World War I had received.

In fact, one of FDR’s goals in creating what is commonly known as the G.I. Bill was to expand the middle class, and it succeeded. Suddenly, home ownership was within reach of people who hadn’t been able to obtain it before and, as a result, new housing construction exploded and, with it, the emergence of suburbs all across the country. With education, these veterans found better jobs and higher incomes, and that money went right back into the economy to buy things like cars, TVs, and all the other accoutrements of suburban living. They also started having children — it’s not called the Baby Boom for nothing — and those children benefited with higher education themselves. The rates of people getting at least a Bachelor’s Degree began a steady climb in the 1960s, right when this generation was starting to graduate high school. At the same time, the percentage of people who hadn’t even graduated from high school plunged.

The top marginal tax rates of all time in the U.S. happened in 1944 and 1945, when they were at 94%. They remained high — at least 91% — throughout the 1950s. Oddly, despite the top rate in the 1940s being higher, the median and average top tax rates in the 1950s were higher — about 86% for both in the 40s and 91% for both in the 50s. The economy was booming, and in addition to paying for the war, those taxes provided a lot of things for U.S. Citizens.

Even as his own party wanted to dismantle a lot of FDR’s New Deal policies, President Eisenhower forged ahead with what he called “Modern Republicanism.” He signed legislation and started programs that did things like provide government assistance to people who were unemployed, whether simply for lack of work or due to age or illness. Other programs raised the minimum wage, increased the scope of Social Security, and founded the Department of Health, Education and Welfare. In a lot of ways, it was like the G.I. Bill had been extended to everyone.

While the planet became small, the people got smaller

I love the internet because it means that I’m in regular contact with people all around the planet, and have gotten to know a lot of them quite well. I have friends on every continent except Antarctica, but I’m working on that one.

Otherwise, I’ve got Australia and all of Asia covered, from the islands off its southeastern edge to its major countries, from Japan to Russia, as well as Thailand. A tour through the Middle East and Africa brings us to Europe, then finally back to the Americas, where obviously the bulk of my friends are in my home country, the U.S., but quite a lot of them are also in Latin America, because I’ve taken the time to become bilingual enough to communicate.

The one thing that most strikes me about chatting with any of these people no matter where they are in the world, what culture they come from, or what language they speak, is that they all want the same things that I do, and that my friends from my culture do. Remove all of the surface decorations, and every human is the same as every other one.

Having been on the internet since the beginning has definitely had one major effect on me. Hell yes, I’m a globalist, but not in the “corporations take over the world” mode. Rather, my form of globalism is this: The citizens of the planet take it back from the corporations. It’s the difference between Corporate Globalism (bad) and Humanist Globalism (good).

Corporate Globalism is a falsehood. It doesn’t unite the world by eliminating barriers and borders. It does quite the opposite. Oh, sure, it pays lip service to trading partners and global commerce and all that, but how does it achieve it? By creating artificial barriers and borders.

Truth be told, the developed nations of the planet produce quite enough food to feed the underdeveloped nations, and have quite enough resources to actually pay a decent living wage to the people they currently exploit in them.

The trouble is, the corporate class has a gigantic blind spot. They don’t realize that helping the entire planet profit and prosper will, in turn, lift everyone up, themselves included. If our current billionaires stopped being so selfish for a decade or two, they would reap the rewards and become trillionaires. Give a little bit back today, collect repayment with interest tomorrow.

So that’s one of the ways people became smaller even as the world did, when they should have become bigger. The super-rich decided to keep on hogging everything for themselves, not realizing that this will leave nothing for anyone, and that when they’ve managed to kill off everyone slaving away to support their lifestyles, they will be left stranded, desolate, and with no idea how to do even the most basic things to survive.

“Sylvia, do you know which button on the stove turns it on to cook water?”

“No, Preston. I have no idea. We could ask Concepción.”

“She died last winter because she couldn’t afford medical insurance, remember?”

“Oh. Crap.”

At the same time, far too many regular people have become too small as well, because they’ve bought the lies of the super-rich, which all boil down to this: “Those people who (aren’t like you/aren’t from here/believe differently/speak another language) just want to come here and steal your stuff.”

Never was a bigger crock of shit foisted on the world than this thinking, which we have seen in many countries in many different eras — and we are definitely seeing far too much of it today.

And it’s nothing but the ultimate in projection, a specialty of the 1%. They are the ones who are afraid of everyone else coming to take their stuff, and they should rightfully be afraid of exactly that, because parts of the world are starting to catch on. Humanist Globalists want to eliminate borders, trade barriers, and the idea of separate nations. Yeah, I know that this can sound scary, but it does not mean eliminating national identities.

It’s kind of the opposite of that. In essence, countries would become the new corporate brands, with their citizens or residents as stakeholders. There wouldn’t be hard lines between them, but there would be ideas and commodities that each particular brand specialized in. It’s kind of a new form of capitalism where the capital isn’t the artificial idea of money. Rather, it’s what it always should have been: The people who work in the system, the fruits of their labor, and the outcome of their ideas. And, in turning it into a “share the wealth” model on a planet-wide basis, we really would have a rising tide that would lift all boats.

The Americas (all of them) sell popular culture, with dashes of Britain, Australia, and Japan included. Europe sells us ideas on how to do things better, especially in urban planning and social policy. Asia sells us technology. Africa sells us the raw materials to make this all happen. The Middle East buys everything because, in an ideal world, they no longer can sell their oil, but if they want to turn Saudi Arabia into the world’s biggest solar farm, let them have at it. And, in every case, the workers who make all of this happen are the real stakeholders.

This is essential in the near future on two fronts. One is in getting our act together to deal with the climate crisis we’re facing and, if we can’t stop it, at least mitigate it. There are going to be climate refugees by the end of this decade, like it or not. We may already have some fleeing Australia. It’s only by eliminating all borders that we can give these people a place to go without politics becoming the cruel boot-stomp in the face that sends them back.

The other front is in getting off of the planet, and the “space race” model born of the Cold War has got to go. Sure, the U.S. vs. USSR is what put us on the Moon first, but later Apollo/Soyuz missions proved that space could be a borderless entity. By this point, when we have multiple nations and private companies firing things into space, we’re basically in the modern version of seafaring in the early 17th Century, a point by which governments (England, Spain, Portugal, France) were financing expeditions to discover new lands, but so were private entities (The Dutch East India Company, Dutch West India Company, etc.)

This was really only a century after Columbus, and we’re a half century past the Moon landing, so the timing fits. The only difference is the players, which are now the U.S., Europe, Japan, China, Russia, Iran, Israel, India, both Koreas, Italy, France, and Ukraine. And, on top of that, add Elon Musk and Richard Branson as the modern versions of those private companies, East (Branson) and West (Musk), who will probably do a better job of it.

All of which reminds me of the opening sequence of the movie Valerian and the City of a Thousand Planets, which is going to be a cult classic one of these days. I mean, come on. Just look at this.

But I do digress. The point is that as long as we remain trapped on this tiny muddy rock stuck in orbit around a flaming nuclear ball and with lots of rocks flying around that may or may not end all human life as we know it without warning, then we are stuck with what we were stuck with. The planet isn’t making any more oil or precious metals. It is kind of making more land, but only if you rely on the very long-term volcanic upwelling of new islands, although this is more than offset by the loss of land that’s going underwater.

We do get new oxygen, for the moment, but only for as long as we maintain the planet’s lungs, which are all of the forests we seem hell-bent on chopping down.

The only thing we do get more of every second of every day is energy, from the sun, wind, and tides, all natural forces. They are limitless, at least for our purposes, driven by physics, and if we could harness even one tenth of their energy, we could change the world and save ourselves.

Why doesn’t it happen? As it’s been put in the past, there’s only one reason. Corporations haven’t figured out how to put a meter on natural processes. And this is perhaps the stupidest thinking ever. What about hydroelectric dams or nuclear plants? Hell, what about waterwheels or old-school windmills? All of those use natural sources. All of those have made money for people who controlled them.

What they don’t get is this: solar, wind, and tidal power, after the initial infrastructure investments, are far cheaper per kilowatt-hour to generate, and far more profitable even at one tenth of the price per kilowatt-hour that power companies now charge. The only reason these backwards-thinking troglodytes embrace fossil fuels is that they see a resource that is running out, and therefore one they can keep jacking the price up on as it becomes rarer and rarer.

Metaphor: This is like a butcher who has run out of meat, so starts cutting up and selling his children, until he runs out of children, so then starts cutting up himself starting at the feet, and isn’t even aware of the problem because he keeps telling himself, “I’m still selling stuff, and I’m still breathing! I’m still breathing. I’m still… oh, shit. That was a lung.”

Renewable resources, especially of the unlimited kind, are immensely more profitable than finite resources for exactly that reason: You can keep selling them forever, and if you can keep selling them at a small price, demand goes way up, so the economy of scale makes you a lot more profit than you’d get by hiking the price on a vanishing commodity and so reducing demand.

In order to save ourselves and make sure that our grandchildren and their grandchildren actually get a planet to inherit, we need to do one thing right now: Start thinking big by not being so small-minded. Tell yourself every day: There are enough resources for all of us on this planet if only everyone would share. People who don’t want to share are bad, and should be voted off of the island and/or planet. It is only by eliminating all borders and unnatural divisions that we can save this planet by making it one. No, you won’t lose your precious self-identity if this happens. If anything, it’ll just get more fun because you’ll get to tell your story to lots of people with their own stories as you all share.

There’s the key word again, and another reminder of the motto we need to start living by: “One Planet. One People. Please.”

Image: © Ad Meskens / Wikimedia Commons

Momentous Monday: I’m not really who I think I am

 
The surname Bastian is the 11,616th most common in the world — meaning it’s not all that high on the list — and is most common in Germany, which should be a no-brainer, since it is in fact a German name.

Thirty-five percent of Bastians reside in Germany, and the name has been documented in 86 other countries. Surprisingly, it is more popular in Indonesia (21% of Bastians) than in the U.S. (19% of Bastians).

And yet, a few years back, I had a little existential shock when I found out that I was not a Bastian at all. It all happened because I’d started doing genealogy years ago and then lucked out long after that: somebody researching the German village my ancestors came from saw a query I’d posted about my great-grandfather, and he sent me all the info.

But, because of that, I don’t know what the family name is really supposed to be, since Bastian only goes back to my great-great-great-grandmother, Barbara Bastian, who was born in 1801. But… that was her maiden name, and her husband’s name wasn’t recorded, so her sons Peter and Titus assumed the name Bastian. (I’m descended from Titus.)

I have the info on her Bastian ancestors going back four more generations to the 1670s, but no idea who my great-great-great-grandfather in that slot really was. The genealogist said it could be that he was a passing soldier who didn’t stick around (common at the time), or that the husband wasn’t Catholic while the family apparently was, so his info wasn’t recorded in the church records and/or the marriage (if it happened) was never recognized.

Of course, there’s a possibility that Barbara was actually the father, since there is precedent for Barbara being a man’s name and the roles just got flipped in the record at some point. After all, Marian is still a very common German name for boys. But I’m not counting on that.

So the Bastian line I know of goes: Johannes Georg Bastian and Ursula Rieger begat Johannes Lorenz Bastian; he and Catharina Melchior begat Johannes Georg Bastian; he and Anna Barbara Riger begat Matthias Bastian; he and Dorothea Bittman begat Barbara Bastian; she and some dude begat Titus Bastian; he and Catharina Seiser begat Gustav Bastian; he and Mary Fearl begat Theodore James Bastian; and he and Neva Belle Jones begat my father, who knocked up my mother and begat me.

That’s ten generations, but the last five of them aren’t really Bastians at all.

If any of those surnames sound familiar and you have family in or ancestors from Gaggenau-Michelbach in Baden, Germany, by all means say hello in the comments — we probably are related. That was another thing the genealogist told me — that there were only about nine families in the village, which was isolated, so yes, there were a lot of cousins getting married.

And before you roll your eyes over incest, cousins marrying was the norm throughout most of human history, because those were often the only people who were genetically distant enough to marry safely but physically close enough to actually meet. Also, marriages between second cousins and beyond were much more common than between first cousins.

I am fortunate, though, in that the German obsession with detail and the Catholic penchant for keeping meticulous records combined to preserve this history so that a researcher could find it centuries later.

I’m less fortunate on my mom’s side of the family, which is all Irish, because we have the same genealogical problem that a lot of European Jews do: an attempted genocide intervened to wipe out most of the records.

In my case, it happened about a century earlier, and in a much more passive-aggressive way, as England basically did nothing about a potato blight that turned into a famine and decimated the population. So… not an active genocide, I… guess…?

But they also went in and stamped out Irish culture, forcing everyone to speak English, nearly killing off Gaelic, and paying no regard to any records.

So… while I can trace that one line through my father back ten generations (and another line on his side that lucks out and hits England back thirty or so), on my mother’s side, the farthest I can get back is… four generations through every branch. It all stops in the mid-19th century, which is also about the same time that most of them arrived in the U.S.

In fact, up each branch, the trail ends with no information on the parents of each one who was the first immigrant to come here. The pattern is “Born in Ireland, died in America, parents unknown.”

It’s kind of ironic, then, that I know more about my English and Welsh ancestry through just one of my father’s 7th great grandparents than I do through my mother, especially considering that genetically I am 50% Irish.

Oh, by the way, not accounting for pedigree collapse, a person has 512 7th great-grandparents. That makes sense: they sit nine generations back (the seven “greats,” plus the grandparent and parent generations), and the number of ancestors doubles with every generation, so it’s two to the ninth power.
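If you want to sanity-check that doubling, here’s a minimal back-of-the-envelope sketch in Python. It assumes no pedigree collapse, and the function name is just something made up for illustration.

```python
# Count direct ancestors at a given level, assuming no pedigree collapse
# (i.e., no cousins marrying, so every ancestor slot is a distinct person).

def nth_great_grandparents(n: int) -> int:
    """Parents are 1 generation back, grandparents 2, great-grandparents 3,
    so nth great-grandparents sit n + 2 generations back, and the count
    doubles with every generation back."""
    return 2 ** (n + 2)

print(nth_great_grandparents(7))  # 512 seventh great-grandparents
print(nth_great_grandparents(8))  # 1,024 in the generation before that
```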

And, to put the degree of DNA in perspective: 50% of mine came from my mom directly and, while the percentage that came from my dad is the same, the bit that came down from that one ancestor of his works out to only about 0.39%.

Or, in other words, out of the 30,000 genes in my genome, about 117 came from that ancestor — only to mix ‘n match with the 117-ish other genes that came from every other person swimming in the gene pool that eventually became me at that point in the timeline.

In case you’re wondering, it wouldn’t take anything nearly as big as a swimming pool. In fact, a one liter bottle would hold all of the quarter gram of human eggs and approximately 800 ccs of semen contributed by all of those 7th great grandparents, with room to spare.

But you’re going to need a two liter if you want to go to the next generation, and a gallon jug to hold the ingredients for the one after that. At that point, just forget it, because you’re just going to be exponentially adding gallon jugs from that point on.

Ah. Isn’t genealogy wonderful? Speaking of which, I’ve finally signed up to get my DNA tested with FamilyTreeDNA. I chose them because my half-brother’s girlfriend had gifted him the service previously; she’s the other one in the family really into genealogy.

But since he and I share a father, and hence the Bastian name, I found it odd that his test showed absolutely no German heritage. I’ve been a little nervous about getting mine tested, because if I show up German for days, then there’s a bit of an issue somewhere. If I don’t, though, it may just lead to figuring out where that non-Bastian great-great-great-grandfather came from.

I will definitely keep you all posted on that one. Oh — and Schöne Grüße to my many German readers! 

Image by Calips, used unaltered via (CC) BY-SA 3.0.