Friday Free-for-All #8


In which I answer a random question generated by a website. Here’s this week’s question. Feel free to give your own answers in the comments.

Is true artificial intelligence possible with our current technology and methods of programming?

It’s funny that this question should come up, because I’ve long thought that no, it isn’t possible with our current technology, for various reasons. Then, just recently, I ran across an article about a program called TextFooler, which in effect uses A.I. to mess with A.I.

The project came about as a way to find vulnerabilities inherent in A.I., specifically how algorithms designed to spot auto-generated media stories could be tricked into accepting them as legit. It basically involves swapping in synonyms that won’t change the meaning for a human reader but will fool the A.I. into thinking that the text has a different tone than it really does.

A test example given in the article cited above is this: TextFooler will take the source, “The characters, cast in impossibly contrived situations, are totally estranged from reality,” and change it to read “The characters, cast in impossibly engineered circumstances, are fully estranged from reality.”

The net effect in this case is that the target A.I. classified the quote completely incorrectly, seeing it as positive instead of negative. A human could see in an instant that both versions are negative reviews, but the A.I. focuses on the swapped words (contrived/engineered, situations/circumstances, totally/fully) and weights the replacements as positive, even though the overall meaning is still negative.

In most cases, TextFooler was able to reduce the accuracy of the A.I. being tested to less than 10% while changing fewer than 20% of the words in the source text. In the example above, the change is 25%, or three of the twelve words; but it’s quite likely that changing “totally” to “fully” doesn’t have as much of an effect, in which case changing only two of the twelve words, about 17%, would do it.
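To make that arithmetic concrete, here’s a minimal sketch of the synonym-swap idea in Python. To be clear, this is not the real TextFooler, which picks candidate replacements using word embeddings and repeatedly queries the target model; the synonym table here is hardcoded just to reproduce the example above and count the percentage of words changed.

```python
import string

# Hardcoded synonym table, just to reproduce the article's example;
# the real TextFooler chooses replacements via word embeddings.
SYNONYMS = {
    "contrived": "engineered",
    "situations": "circumstances",
    "totally": "fully",
}

def perturb(sentence):
    """Swap in synonyms; return the new text and the fraction of words changed."""
    tokens = sentence.split()
    out, changed = [], 0
    for token in tokens:
        core = token.strip(string.punctuation)  # ignore trailing commas etc.
        if core.lower() in SYNONYMS:
            token = token.replace(core, SYNONYMS[core.lower()])
            changed += 1
        out.append(token)
    return " ".join(out), changed / len(tokens)

src = ("The characters, cast in impossibly contrived situations, "
       "are totally estranged from reality")
adv, frac = perturb(src)
print(adv)
print(f"{frac:.0%} of words changed")  # 3 of 12 words -> 25%
```

Drop the totally/fully entry from the table and the same run reports two of twelve words, or about 17%.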

From personal experience, I know that it’s really easy to mess with A.I., and one of its weaknesses is that it can’t deal with ambiguity, which includes humor and especially puns. If we ever get into a serious cyberwar with A.I.-backed forces, our best defenders would be squads of Dads armed with their best worst jokes.

It’s the inability to deal with ambiguity that will keep us from developing true A.I. for a long time, and it’s not something that machine learning can overcome. Human brains just process information differently.

I’ve seen this countless times as new people come onto my improv team. We traditionally end most of our shows with a “jump-out” game, which involves the players making lots and lots of puns based on audience suggestions. Punning is a difficult skill. Some people are naturals at it — I learned very quickly that I was — but others aren’t.

Here’s the difference between a human and A.I., though. Teach a human the parameters of a pun game, let them try it a few times or watch other people do it, explain how to structure a pun, and suddenly they start to get good at it as well; in some cases, rather quickly.

One of the classic games is called 185, and the basic form of the joke (as we play it now as opposed to the version in the link) is this: “185 (suggestions) walk into a bar and the bartender says, ‘Sorry, we’re closed.’ And the (suggestions) say (punchline).”

For example, “185 horses walk into a bar and the bartender says, ‘Sorry, we’re closed.’ And the horses say, ‘Guess we should hoof it out of here, then…’”

Simple for a human to come up with and understand, but for A.I., not so much. The program would have to understand the double meaning of “hoof” — one a common noun related to a horse, the other a slangy and somewhat dated verb — but then would also have to decide, “Is this funny?”
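To illustrate the first half of that problem: enumerating the senses of “hoof” is the easy part for a machine. Here’s a sketch using NLTK’s WordNet interface (this assumes NLTK is installed and the WordNet data has been downloaded); it will happily list both noun and verb senses, but nothing in that list says which sense is in play, let alone whether the clash between them is funny.

```python
# Assumes NLTK is installed (pip install nltk) and the WordNet
# corpus has been fetched once via nltk.download("wordnet").
from nltk.corpus import wordnet as wn

# WordNet lists both noun senses (the horse's foot) and verb senses
# (roughly, to walk) for "hoof". Listing them is the easy part;
# picking the right one, and judging whether the pun lands, is not.
for syn in wn.synsets("hoof"):
    print(syn.pos(), "-", syn.definition())
```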

Funny is often based on the unexpected, and here’s another good example from my improv company. When we have a birthday in the audience, we’ll bring the person onstage, make a big deal about it, then prepare to sing for them. The players do some throat-clearing and vocal warm-up, someone sings a tune-up note, and then they launch into it, to the tune of Ta-Ra-Ra Boom-De-Ay.

Here’s where the unexpected part of the joke comes in, though. After all of that build-up, there are exactly two bars: “This is your birthday song, it isn’t very long — ” And then it abruptly ends and everyone just walks away, now focused on a new subject, making small talk with their fellow players, whatever. It always gets a huge laugh from the audience and, truth to tell, although I’ve seen it a bunch of times and have done it a few myself, it cracks me up every single time.

A.I. ain’t gonna get that, not now and not soon, Turing test notwithstanding; in fact, the Turing test really may not be the best way to detect fakes. That distinction may actually go to the word “poop.”

But getting back to the Achilles’ heel of A.I.: computers are only as good as their programming, and there’s an old term you may or may not know, GIGO, which stands for garbage in, garbage out.

The short version is that the output you get is only as good as the input you gave, and if the input was in an unexpected form, you won’t get anything useful back. For example, if the computer expected a number and you entered “snot,” you’re probably going to get back an error message.
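Guarding against that kind of garbage is trivial when the input is a single, well-defined field. Here’s a minimal Python sketch of the number-or-“snot” case:

```python
def read_number(raw):
    """Parse raw as a number, or hand back an error message (GIGO caught)."""
    try:
        return float(raw)
    except ValueError:
        return f"error: expected a number, got {raw!r}"

print(read_number("42"))    # 42.0
print(read_number("snot"))  # error: expected a number, got 'snot'
```

The hard part isn’t catching one bad field; it’s that a conversational A.I. has no equally neat definition of what counts as garbage.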

If you’ve ever seen those annoying #N/A or #VALUE! or #NAME? errors pop up when you’re trying to enter a formula in Excel, you’ve committed a GIGO error.

Now remember that input can also come from internal sources. If an A.I. looks up words and phrases in a database in order to find its responses, you can quickly get garbage out by tossing some nonsense at it.

For example, if the A.I. says, “Good morning, what’s your name?” and you reply, “My name is Digital Badger Wankstick Flipflap III,” a human would probably reply with, “What? You’re kidding, right?” The A.I., though, would just look for the designated response tags, like “my name is” (or “I am,” or “It’s,” or, failing those, take any unprefixed response as the name), and reply, “Hello, Digital Badger Wankstick Flipflap III. How are you today?” A particularly smart A.I. might just reply with “Hello, Digital,” but still… the fact that it doesn’t immediately ask, “Is that really your name?” is a dead giveaway that you’re probably not dealing with a human.

In theory, this could be programmed in, but think of how much data it would need. The programmers would basically have to create two tables, one containing actual, normal names across a number of cultures and the other containing things that are most likely not normal names, and the A.I. would then have to search both and respond accordingly; something like the sketch below.
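Here’s a toy version of that two-table approach. The tables are a few entries each purely for illustration; real ones would need many thousands of names and variant spellings across cultures, and would still miss plenty.

```python
# Toy tables, purely for illustration; real ones would need many
# thousands of entries and variant spellings across cultures.
KNOWN_NAMES = {"jon", "john", "jhon", "marc", "mark", "karen", "geoff"}
KNOWN_NOT_NAMES = {"badger", "wankstick", "flipflap", "snot"}

def greet(reply):
    """Mimic the chatbot: strip the response tag, then check both tables."""
    name = reply.lower().rstrip(".!")
    for tag in ("my name is ", "i am ", "it's "):
        if name.startswith(tag):
            name = name[len(tag):]
            break
    words = name.split()
    if words[0] in KNOWN_NAMES:
        return f"Hello, {words[0].title()}. How are you today?"
    if any(w in KNOWN_NOT_NAMES for w in words):
        return "Is that really your name?"
    return f"Hello, {name.title()}. How are you today?"  # naive fallback

print(greet("My name is Digital Badger Wankstick Flipflap III"))
# -> Is that really your name?
print(greet("It's Jon"))
# -> Hello, Jon. How are you today?
```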

In computer terms, it wouldn’t add much lag time to the response — maybe milliseconds — but when it came to the size of the program and the lines of code needed, it could bump things up considerably. Not to mention that the data would have to account for all kinds of possible variant spellings.

Hell, as a human, I’ve learned to always ask someone to spell their first name unless it’s totally unambiguous, which is rare. At least in the U.S., my first name is itself the less common variant of John (I’ve also seen it spelled Jhon), and you can have variations like Ralph/Ralf, Marc/Mark/Marq, Karen/Caren/Karin, Jack/Jaq, Alan/Allan/Allen, Jeff/Geoff, Charles/Charlie/Charley, and on and on and on. And yes, I’ve known people with every single one of those variants.

One of the more disappointing attempts at A.I. with a good purpose is Replika, which tried to be a counselor and source of help, but failed badly. The idea was that it would get to know you quickly through a series of conversations, but despite my having spent a lot of time (as it turns out, over a year ago) chatting with it and answering its annoying questions, it never learned anything, and I never saw its personality change.

Of course, the grandmother of all A.I. is Eliza, who was pretty much the first attempt at this sort of thing. The original Eliza was created back in the mid-1960s, believe it or not, although it was not so much an attempt at true A.I. as it was a stab at getting computers to communicate in natural language.

But if you click the first link in the preceding paragraph, it won’t take very long for you to figure out that you’re talking to a program. In fact, IIRC, in my early days of learning to program, studying the code for Eliza was one of my assignments, and it really was an amazing job of using very few clues from the input (with a lot of wiggle room for “noise” thrown in) to select from a fairly limited number of slightly customizable responses.
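For flavor, here’s a tiny Eliza-style sketch in Python: a couple of keyword patterns, each with canned responses that splice in a piece of the input, plus a pile of fallbacks for when nothing matches. (The real Eliza also “reflects” pronouns, turning “my” into “your” and so on; this sketch omits that, which the sample output makes obvious.)

```python
import random
import re

# A tiny Eliza-flavored sketch: keyword patterns with canned responses
# that splice in part of the input, plus fallbacks when nothing matches.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (re.compile(r"\bmy (\w+)", re.I),
     ["Tell me more about your {0}.", "Why do you mention your {0}?"]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(line):
    for pattern, replies in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I feel trapped by my job"))
# e.g. "Why do you feel trapped by my job?" -- note the unreflected "my"
```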

Also keep in mind that Eliza was originally written in a long-lost language called MAD-Slip (decades before I encountered the code in BASIC, which lives on in Microsoft products as Visual Basic, yay!) at a time when the possible size of programs was very, very limited.

In fact, after a little searching, I found out that the version of the program in BASIC was limited to… 256 lines — which is a very, very important number in binary. It’s the number of distinct values you can represent in 8 bits (0 through 255), because 256 is 2^8 (or 16^2).

But that’s all the long way around of me saying that no, we won’t develop true A.I. until we build computers (probably quantum) that can deal with enormous amounts of data, ambiguity, and multiple choice; actually learn and reprogram themselves constantly; develop a sense of humor; and play with language like humans can. Meaning: no time soon, and not until we manage to create artificial and literal neural networks, which would involve interfacing computers directly with… well, lab-grown brains would be cheating, because that wouldn’t be artificial.

We’d have to figure out how to create a completely synthetic and functional analogue to a human brain. Well, to be honest, we’d have to start simple. To some extent, things like John Conway’s famous Game of Life did manage to make artificial objects follow certain rules and either reproduce and thrive or die out, and he did it with a few simple rules. But that was basically creating single-celled organisms, which are not intelligent, only reactive.
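For the curious, Conway’s entire rulebook really is that small, and a minimal sketch fits in a dozen lines of Python: a live cell survives with two or three live neighbors, and a dead cell comes alive with exactly three.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life; live is a set of (x, y) cells."""
    # Count the live neighbors of every cell adjacent to a live one.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider shape, shifted one cell diagonally
```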

But if we want “real” A.I., we’re going to have to fake it by creating biological computers. Or growing brains in labs and wiring them up, but that would probably be all kinds of unethical.

Whether you agree is up to you, but please discuss it in the comments, and thanks for reading!
