A few months back, some friends of mine and I talked about the game Zombies, Run! in our game discussion group. It’s a game designed to help you exercise by playing a zombie story as you go for a walk or run; it tracks your movement as you go, and can use that to feed back into the game.

One of the options in Zombies, Run! is to have the game occasionally activate chase sequences, where you’re supposed to speed up; if you don’t, then you’ll lose some resources in the game. The first several times I played the game, I was walking our dog, so I wasn’t able to break into a jog on demand; but, as I was thinking about it, I realized that I probably wouldn’t have enabled that option anyways.

Basically, I’ve gotten more and more suspicious of doing things just because a computer tells me that it would be a good idea. There are two reasons for that suspicion: one is that the goals of the designers of the software may not match my own goals (and may, in fact, actively go against my goals); the second is that I don’t necessarily trust software to be good at a lot of things.

 

As an example of the former: earlier today, I’d looked up a video on YouTube. And, as happens so frequently, I looked at the video controls, noticed that YouTube had decided to turn autoplay back on, and turned it off. YouTube’s product managers have decided that they really want me to keep on watching videos on their site, and they feel strongly enough about this that they repeatedly override my explicitly stated preference to the contrary; that is a bad decision.

Fortunately, the algorithmic “break into a run” mode in Zombies, Run! isn’t that sort of hostile action; instead, it (I am fairly sure) falls into the second category that I’m suspicious about. Designing an exercise program that helps people get significant benefits from running is a skill; Zombies, Run! did nothing whatsoever to convince me that that mode would lead to a coherent exercise program, or, indeed, that it would be helpful rather than harmful. If I wanted to make running part of my exercise program, what are the odds that the specific amount of running that Zombies, Run! asks for would be better than what I could come up with on my own just by listening to my body, let alone what I could come up with by doing some research or getting coaching from somebody who knows what they’re doing?

 

I realize that the title of this post is way too broad. There are lots of situations where computers are actively helpful, and that’s great. We just have to turn my two criteria above around: I like it when computers are working with me instead of against me, and when they’re working in ways where they bring value that they’re particularly suited to provide.

For example, I work on server software, and we want those servers to be running well as much of the time as possible. Having software (our own software, in large part!) help us with that goal is great: computers can constantly measure a huge number of data points, and let on-calls know if those data points are in a range where a human should take a look. Or if I’m driving somewhere I’m not used to going, I’ll put the address into a mapping application on my phone and let it give me driving directions; computers are good at that too. And at least some of you are reading this blog post because, years ago, you told Feedly that you wanted it to automatically watch my blog, detect when I put up a new post, and let you know.

 

I start to get a lot more suspicious about recommendations from computers that come from more opaque algorithms, though. Is the computer making a recommendation because A) it has an informed opinion about what I’m looking for, B) it actually wants to meet my goals as well as possible, and C) it has a good idea how to do so? There are a lot of ways that that could fail, but in particular I just do not trust that point B is going to be the case in a lot of opaque algorithmic situations: there’s a lot of financial incentive for companies to sell algorithmic responses, and there’s also a lot of financial incentive for companies to get me spending more time using their software than I originally intended.

This doesn’t mean that I stay completely away from opaque algorithmic recommendation systems. Web search is essential; I do what I can to avoid, or at least be aware of, situations where search engines are acting for somebody else’s benefit, but ultimately I still use them. Once a week, I listen to Spotify’s Release Radar playlist; learning about new music (whether by musicians that are new to me or new releases from musicians that I already know of and like) is important to me, and while the algorithm generating that playlist has a very narrow view of what kind of music to recommend to me, Spotify’s algorithms have pointed me at more than enough music that I really like and wouldn’t have discovered otherwise that I still use them. (At least Release Radar; it’s been months since Discover Weekly has pointed me at something interesting, so I’ve stopped listening to that one.)

But, in the Spotify case, I’m hiring the algorithm for a very specific job; once it’s done that job, I switch away from it. Concretely, once it’s pointed me at a song that I liked, I switch over to much less opaque methods of investigation: I listen to the whole album the song is from (if the song is from an album, which is far from guaranteed these days), and, if that goes well, I go through the rest of that artist’s back catalog. (Usually also buying some of the artist’s albums as part of that process, partly because I like supporting musicians and partly because I don’t trust music streaming services to be a permanent storage space for my music collection, or indeed to be a storage space where I can trust an album I added today to still be there next month!)

 

That last example actually points me at one way in which the Zombies, Run! mode in question could be useful: I don’t trust it to provide an exercise program, but maybe it could be useful from a discovery point of view? I did try that mode out once, with the question in mind of whether I should add running to my routine; my feeling was that spending some time running was a good idea, but not a good enough idea for me to carve out time in my day for it instead of one of the many other activities that are competing for my time.

Anyways: computers can be useful. But computers also don’t deserve the benefit of the doubt that they are 1) providing high-quality advice that is 2) working for you instead of for somebody else. And that second point in particular is something that I feel I need to be aware of more and more.
