Jim pointed me to this article a few weeks ago, and I’m annoyed to say that I can’t get it out of my head. It’s about a guy who claims to have an algorithm (implemented by a computer program) to help you remember a lot more stuff a lot more solidly than you can with other methods, and it strikes just the right balance of potential importance and buy-in required to get me thinking about it more than I’d like.
The basic idea is this: if you want to remember something, you have to practice remembering it periodically. So it’s not enough to cram facts for an exam and then pretend that you know something: a few months later, you won’t consciously remember most of it. (Which is one reason why I question significant parts of our educational structure, but that’s a separate rant.) Instead, you have to periodically refresh your memory of the facts; fortunately, you can refresh less and less frequently over time and still remember those facts. Basically, the optimal time to refresh each fact is right before you’re about to forget it; this guy claims that he has a computer program that will serve up facts to you at the appropriate time for optimal practice.
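To make the idea concrete, here's a toy illustration of my own (not anything from his site): model retention as an exponential forgetting curve R(t) = e^(-t/s), where s is the memory's "stability" in days, and schedule each review for the moment retention would drop to some threshold. If each successful review boosts stability, the intervals stretch out, which is exactly the "refresh less and less frequently" pattern. All the numbers here (the 90% threshold, the tripling of stability) are made up for illustration.

```ruby
# Toy forgetting-curve model: R(t) = e^(-t/s). Solve e^(-t/s) = threshold
# for t to get the time at which retention falls to the threshold.
def next_review_in_days(stability, threshold = 0.9)
  -stability * Math.log(threshold)
end

# Hypothetical: each successful review triples stability, so the review
# intervals grow geometrically.
stability = 1.0
4.times do |n|
  interval = next_review_in_days(stability)
  puts "review #{n + 1}: ~#{interval.round(2)} days out"
  stability *= 3
end
```

The real question, of course, is what the actual decay constants and growth factors are; that's the part he claims to have worked out empirically.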
This would be very useful to me (and, for that matter, to Miranda) right now: while he will happily apply it to anything, it’s clearly extremely applicable to learning foreign-language vocabulary. (And grammar!) And the theory is also obviously quite plausible (and apparently supported by the empirical psychological literature): I’ve spent a lot of time memorizing facts over the years (and in particular over the last year), and I can testify that this phenomenon of memorizing a word, and then not quite having it at the tip of your memory (or barely still having it at the tip of your memory) some time later is quite correct, and I’m quite willing to believe that there’s some optimal decay pattern for the refreshes.
But I also have a system for memorizing vocabulary that works moderately well right now: not perfectly, by a long shot, but I’ve gotten a lot of use out of it. In particular, right now I have 1200 or so vocabulary cards written down; I’m not about to sit down and digitize them all (which isn’t really necessary), but I’m also nervous about switching to another system that may or may not work, and (if I decide to switch back) about then having to deal with having some of my vocabulary on a computer and some on physical cards.
Also, to make matters worse, the software is basically Windows-only. So using it isn’t a realistic possibility for me. (It does seem like the sort of software that would strike a chord among Mac geeks, but who knows…)
But then I was idly thinking about it some more over the last day or two. Just how hard could it be to whip together a version of the software myself? The basic infrastructure is pretty straightforward: I need a way to save questions and answers, I need it to display questions to me, and I need to tell it whether or not I’ve answered the questions correctly. Then the software could save my history of when I’ve answered each question successfully (or unsuccessfully), and, based on his magic curves, figure out when it should next offer that question up to me. I’d never written a Rails app (a deficiency that I’d like to remedy), but all the data entry/display sounded like it should be very easy to whip up using Rails; I didn’t know what the magic sauce was, but it was probably some sort of exponential decay curve, so I should be able to just look up his algorithm and implement it, right?
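The data model I have in mind is tiny; here's a minimal sketch of it. The names and the stand-in scheduling rule (double the interval after each success, reset to one day on a failure) are my own placeholders, not his actual curves, which is exactly the part I'd swap out later.

```ruby
require "date"

# A card stores a question/answer pair plus its full review history; the
# history is what the scheduling rule consumes to decide when to show the
# card next.
class Card
  attr_reader :question, :answer, :history

  def initialize(question, answer)
    @question = question
    @answer = answer
    @history = [] # array of [date, correct?] pairs
  end

  def record_review(correct, date = Date.today)
    @history << [date, correct]
  end

  # Placeholder rule: 1 day after the first success, doubling after each
  # further consecutive success; any failure starts the card over at 1 day.
  def next_due
    return Date.today if @history.empty?
    streak = 0
    @history.each { |_, correct| streak = correct ? streak + 1 : 0 }
    last_date = @history.last.first
    last_date + (streak.zero? ? 1 : 2**(streak - 1))
  end
end

card = Card.new("犬", "dog")
card.record_review(true, Date.new(2007, 1, 1))
card.record_review(true, Date.new(2007, 1, 2))
puts card.next_due # two successes in a row => due 2 days after the last review
```

Everything Rails would add on top of this, the forms and the listing pages, is exactly the part that's supposed to be easy.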
So I spent some more time at his web site, looking up his algorithm. And, at first, I was pretty disappointed. The most obvious place to start was with the paper version, but it had a few glaring deficiencies. The main one is that it had you work on groups of items all at once, treating each group as equally difficult (i.e. with the same decay curve). (Both the grouping and the equal difficulty seemed wrong to me.) Also (and this is, of course, just a minor annoyance, easily tweaked around), having the first review come four days after you’ve written down a group seemed way too long to me.
Reading that, I was pretty let down. After more poking around, though, it turns out that the algorithm has changed a fair amount over the years; I believe this is the most recent version of the algorithm listed on the website, and that page gives links to earlier historical versions. I haven’t tried to fully understand the most recent version (and, as far as I can tell, there’s not enough information there to reconstruct it, since some of the constants apparently need to be determined empirically), but there are enough ideas to try to remedy the above flaws. It seems like the current version doesn’t always use exponential decay, but I believe earlier intermediate versions did (version 4 seems a particularly useful touchstone), so I could easily start with that; there is a per-item difficulty factor, and there’s some idea that you can calculate the difficulty factor by counting the number of times you’ve gotten the item wrong.
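For the flavor of what those earlier versions look like, here's a sketch of the widely published SM-2 variant (one of the early algorithms, and not necessarily identical to the "version 4" I mentioned): each item carries a per-item "easiness factor" that drops when you answer badly, and the interval grows geometrically from it. The constants are the ones from the published description; I haven't verified them against the current site.

```ruby
# SM-2-style update. quality is a self-graded answer score from 0 (total
# blackout) to 5 (perfect recall); easiness is the per-item difficulty
# factor; interval is in days.
def sm2_update(easiness, repetitions, interval, quality)
  if quality < 3
    # Failed recall: start the item's repetitions over; easiness is unchanged.
    return [easiness, 0, 1]
  end
  repetitions += 1
  interval =
    case repetitions
    when 1 then 1
    when 2 then 6
    else (interval * easiness).round # geometric growth thereafter
    end
  # Good answers nudge easiness up slightly; mediocre ones pull it down.
  easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
  easiness = 1.3 if easiness < 1.3 # published floor on the factor
  [easiness, repetitions, interval]
end

ef, reps, ivl = 2.5, 0, 0 # 2.5 is the published starting easiness
[5, 5, 4].each { |q| ef, reps, ivl = sm2_update(ef, reps, ivl, q) }
puts "easiness=#{ef.round(2)}, next interval=#{ivl} days"
```

Note that the per-item difficulty lives in the easiness factor here, which already fixes the "every group is equally difficult" problem from the paper version.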
Based on that, it sounds plausible that I could hallucinate an algorithm that probably wouldn’t do any worse than my current method for learning vocabulary. (My current method wastes too much time up-front in going over words that I would ideally review in intervals longer than a day, while at the same time not doing enough review of old words.) And I don’t think it would be too much work to whip up a program to implement it, and I’d get some practice with Rails to boot.
So: would doing that be a good idea? I’m still not sure: if I ultimately decide that I don’t like the results (whether because I don’t think it works well or because I don’t want to be tied to a computer when doing vocab review or because of some other reason), then there would be a real cost in switching back. And it may turn out that this is all really a side-issue: maybe it would be more effective than my current system, even significantly so, if I wanted to memorize a dictionary. But I don’t want to memorize a dictionary, I want to be able to, say, read Japanese, and doing so would probably give me frequent enough review of the words I was actually using to make a program like this superfluous.
Not sure where I’ll go with this yet; for now, I’m too busy, so it’s on the someday/maybe stack. But it’s surprisingly close to the top of that stack; we’ll see where I am in a couple of weeks.