Algorithms on the Brain

I enjoyed reading “Augmenting Human Intellect: A Conceptual Framework” by Doug Engelbart. It gave me a chance to think with, and pull ideas from, both of my majors: Computer Science and a heavily linguistics-based Cognitive Science program. The first thing that came to mind, and the one thing that stayed there for the rest of the article, was the idea of an algorithm.
Computer science isn’t so much the study of computers (their hardware, or how to answer weird technical questions for friends) as it is the study of algorithms. What’s an algorithm? Well, there’s an old programming joke that I think is really funny, because this sort of thing actually caught my eye when I was a little kid:

Why couldn’t the computer programmer get out of the shower?
The shampoo bottle said “Lather, rinse, repeat.”

I’ve noticed recently that “lather, rinse, repeat” has been replaced with “lather, rinse, repeat as necessary,” which is much more appropriate so that computer scientists don’t get stuck in a very sudsy infinite loop. Both sets of shampoo instructions (like basically any set of instructions that can be carried out in a finite number of steps) are examples of algorithms. Have you ever baked a cake? Done long division? Algorithms are everywhere, and you don’t need to be a CS major (or even somewhat technically inclined) to see and use them.
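If you take the two bottles literally, the only real difference between them is a stopping condition. Here’s a quick, purely playful Python sketch of that difference (the lather and rinse steps are obviously just stand-ins):

    def lather():
        print("lather")

    def rinse():
        print("rinse")

    def old_bottle():
        # "Lather, rinse, repeat" -- no stopping condition, so this never ends
        while True:
            lather()
            rinse()

    def new_bottle(passes_needed=2):
        # "Lather, rinse, repeat as necessary" -- a condition lets us stop
        for _ in range(passes_needed):
            lather()
            rinse()

    new_bottle()   # terminates after two passes; old_bottle() would loop forever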
The funny thing about algorithms is that even the most complex programs are built mostly from the stuff you’d learn in an introductory programming course. Sure, there are plenty of other awesome things to keep you coming back for more, but you can do more than you might think just by understanding a few basic steps and weaving them into a complex algorithm.
I’ve always been rather quick at calculating most derivatives, but my Linear Algebra class recently introduced me to a program called Derive 6. Derive does all sorts of calculations at speeds even the most accomplished human mathematician could never reach. It can even find, in the blink of an eye, integrals that I’d struggle to calculate. I’ve never viewed Derive’s source code, but I’m almost certain that it calculates derivatives in much the same way that I do.
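I have no idea what Derive actually does under the hood, so this is only my guess at the flavor of it: a toy rule-based differentiator that applies the same sum and power rules I’d apply by hand. The nested-tuple expression format is just something I made up for this sketch, not anything Derive uses:

    # Expressions are nested tuples: ("+", a, b) means a + b,
    # ("^", "x", n) means x**n, and ("*", a, b) means a * b.

    def d(expr):
        if expr == "x":                      # d/dx x = 1
            return 1
        if isinstance(expr, (int, float)):   # d/dx c = 0
            return 0
        op = expr[0]
        if op == "+":                        # sum rule: (a + b)' = a' + b'
            return ("+", d(expr[1]), d(expr[2]))
        if op == "^" and expr[1] == "x":     # power rule: (x^n)' = n * x^(n-1)
            n = expr[2]
            return ("*", n, ("^", "x", n - 1))
        raise ValueError(f"no rule for {expr}")

    # d/dx (x^3 + x) = 3*x^2 + 1
    print(d(("+", ("^", "x", 3), "x")))      # ('+', ('*', 3, ('^', 'x', 2)), 1)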
I also remember struggling with a particular program in my Intro to Computer Science class, the first time I’d learned programming from an actual teacher instead of scouring source code on my own makeshift lesson plan. For some reason, I had a hard time figuring out how to complete a program that asked the user what time their class started and how long their commute would take, then told them what time to leave. (Yes, I still get a few good laughs at how many things tripped me up back then…and how many of those same mistakes I still make now.) I visited my professor in his office for a bit of help, and he asked me to do the calculations myself. He’d rattle off a class time and a commute time, and I’d try to respond with the correct departure time as quickly as I could. After a few rounds of back-and-forth, he stopped and told me that I didn’t need help, because I already knew how to do the program in my head. It was just a matter of figuring out how I knew how to do it. I was absolutely fascinated by the idea that I have to know how to do something myself in order to tell a computer to do it, and I really do think that one visit helped me realize that I want to study cognition just as much as I want to study algorithms.
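Looking back, the whole program boiled down to a calculation like the one below. I don’t remember the exact input format the assignment asked for, so the times-as-strings setup here is just an assumption on my part:

    # Given when class starts and how long the commute takes, report when
    # to leave. Converting to minutes since midnight makes the math easy.

    def departure_time(class_start, commute_minutes):
        hours, minutes = map(int, class_start.split(":"))
        start = hours * 60 + minutes                    # class start, in minutes since midnight
        leave = (start - commute_minutes) % (24 * 60)   # step back by the commute, wrapping past midnight
        return f"{leave // 60:02d}:{leave % 60:02d}"

    print(departure_time("09:30", 45))   # -> 08:45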
The article also discussed language and cognition, which was very interesting for me to read.
I often think of my own thoughts and actions as a bunch of algorithms. It’s actually pretty interesting that, as the article mentioned, computers are frequently used in linguistics and cognitive science research to model human thought. When we’re talking, for example, we aren’t usually thinking about syntax. We’re thinking about the conversation itself. I’m not consciously thinking about the grammatical structure of each of the sentences that I’m typing as I type them. If I did, I certainly wouldn’t be able to type as quickly as I usually do. I play with my sentences, each word’s morphemes, syntactic ambiguities (I’ve found a few in just this post and left them; can you find them?), and even the different sounds I’d have to make to pronounce them, but only after they’re already typed out. Still, I’m not completely conscious of every linguistic feature of my language as it slides from my tongue or tap-dances from my keyboard.

Computers are useful tools that let us model these subconscious algorithms of language processing and see just how valid our theories are. Computers don’t have dialects. They don’t have a register that changes depending on the social setting. Computers really don’t make mistakes in their algorithms unless someone else does first: there are no computer slips of the tongue or mathematical errors.
The article also mentioned the Whorf hypothesis, also known as the Sapir-Whorf hypothesis. I actually had a brief discussion about it with a friend and fellow linguist about a week ago. She loaned me a book called “The Language Instinct” by Steven Pinker, which is supposed to be a pretty awesome book for someone who likes the stuff I like. I haven’t had time to start reading it yet, but she told me that it has a chapter arguing against this particular hypothesis. I don’t feel that I’m knowledgeable enough just yet to say whether I agree or disagree with the Sapir-Whorf hypothesis, but just from knowing its basics, I feel that I mostly disagree with the idea that “the world view of a culture is limited by the structure of the language which that culture uses.” Perhaps I’ll make a follow-up post after I’ve read Pinker’s views in “The Language Instinct” and talk a bit more about it.
