I was just reading Exceptional C++ Style, by Herb Sutter, and one of the recommendations (Item 18) threw me for a bit of a loop. That item talks about access control for virtual functions. (We’ll ignore destructors, since that’s a special case.)

My habit is to provide public virtual functions if I want all of the corresponding public functionality to be polymorphic; if I want to provide polymorphic behavior on a finer grain, I provide protected virtual functions. I never provide private virtual functions: I’m aware that, in many of the situations where I use protected virtual functions, I could use private ones instead, but I’ve never particularly seen fit to do so.

Sutter disagrees with me on two counts. The first is the point that I just brought up: “protected” means that subclasses can call the function, so if you have a virtual function on a class A that you only want to be called by A, you should really make it private instead of protected. The fact that it’s virtual is enough of a hint that subclasses of A can/should override it; there’s no need to mark it protected in addition. This makes sense to me.
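To make that concrete, here's a minimal sketch (the class names and bodies are mine, not Sutter's): a subclass may override a base class's private virtual function even though it can't call it.

```cpp
#include <string>

class A {
public:
    std::string run() { return step(); }   // only A itself calls step()
    virtual ~A() {}
private:
    virtual std::string step() { return "base"; }
};

class B : public A {
private:
    // Overriding is allowed regardless of the access level in A...
    std::string step() override { return "derived"; }
    // ...but a call to A::step() from inside B would not compile.
};
```

So marking `step` protected instead of private would buy B nothing except the (here unwanted) ability to call it.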

The second point of disagreement is that Sutter doesn’t believe in public virtual functions at all: according to him, your public functions should all be non-virtual, though they can invoke non-public (probably private, by the above) virtual functions to carry out the actual work. In the simplest case, the public function can be a one-liner to forward to a virtual function that carries out the actual work.
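(Sutter calls this the "Non-Virtual Interface", or NVI, idiom.) In its simplest form, with example names of my own choosing, it looks like this:

```cpp
#include <cstddef>
#include <vector>

class Stack {
public:
    // Public and non-virtual: the interface offered to users.
    // A one-liner forwarding to the virtual that does the work.
    void push(int x) { do_push(x); }
    virtual ~Stack() {}
private:
    // Private and virtual: the customization interface for inheritors.
    virtual void do_push(int x) = 0;
};

class VectorStack : public Stack {
public:
    std::size_t size() const { return data_.size(); }
private:
    void do_push(int x) override { data_.push_back(x); }
    std::vector<int> data_;
};
```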

The philosophical point here is that public functions provide the interface that the class provides to its users, while virtual functions provide the “customization interface” that the class provides to its inheritors. These are two different things; they should be separated accordingly.

That’s somewhat plausible, but I’m not sure I completely buy it. In particular, if the class in question is solely an abstract interface, I’m not sure that it makes any sense much of the time to distinguish between these two aspects: the public interface exists only to provide customizable behavior, so why not acknowledge that?

There are, of course, some situations where you want to do this sort of trick. In particular, it can be the case that all implementations of the public functionality will normally want to carry out more or less the same tasks in the same order, with only the details varying. In that case, providing a public non-virtual function that calls a sequence of non-public virtual functions is very useful. (This is the “Template Method” design pattern.)
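A Template Method sketch, using a made-up report-generation example: the overall sequence is fixed in the base class, and subclasses vary only the details.

```cpp
#include <string>

class Report {
public:
    // The Template Method: public, non-virtual, fixes the order of steps.
    std::string generate() { return header() + body() + footer(); }
    virtual ~Report() {}
private:
    virtual std::string header() { return "[header]"; }
    virtual std::string body() = 0;   // the one step subclasses must supply
    virtual std::string footer() { return "[footer]"; }
};

class SalesReport : public Report {
private:
    std::string body() override { return "sales"; }
};
```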

The book also gives further examples where this might be useful – for example, if you decide that you always want to perform some action before and/or after calling the core of the implementation (e.g. instrumenting it, checking pre-/postconditions), then it would be useful to have your function already broken up into a public non-virtual part and a non-public virtual part: you would only have to change the non-virtual part.
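For instance, if the split already exists, adding logging (or timing, or pre-/postcondition checks) around every implementation touches only the non-virtual wrapper; a sketch with hypothetical names:

```cpp
#include <iostream>

class Task {
public:
    int run(int input) {
        // Only this non-virtual part changes when instrumentation is added:
        std::clog << "before run\n";      // e.g. logging or a precondition check
        int result = do_run(input);
        std::clog << "after run\n";       // e.g. logging or a postcondition check
        return result;
    }
    virtual ~Task() {}
private:
    virtual int do_run(int input) = 0;    // every override is left untouched
};

class Doubler : public Task {
private:
    int do_run(int input) override { return 2 * input; }
};
```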

There’s some truth to that. On the other hand, in my limited experience, that’s not something I want to do all that frequently; and, in situations where I want to do such a change, it’s not too tricky a refactoring to turn a public virtual function into a public non-virtual function plus a non-public virtual function. (It’s not completely trivial, but it’s not that hard, either.) Furthermore, it’s not like starting with a public non-virtual interface is a panacea: in particular, if you want to change from an implementation with a single virtual function into an implementation with multiple virtual functions (creating a Template Method), then the fact that you started from a non-virtual public interface won’t help you at all.

The place where I’d seen this before is the IOStreams library. I suppose it makes sense there – the authors of the standard want to impose as few constraints as possible on implementors, so this fits into that vein: e.g. it makes it possible for people to ship versions of the library that instrument various calls. And, in general, the less control you have of your subclasses, the better job you have to do of guessing where to put your virtual functions, because it may not be possible to carry out arbitrary refactorings; this is a technique that can help. (Though, as I said above, I’m not sure how much it helps, given the non-Template Method to Template Method conversion example that I mentioned.) But, in general, I’m not convinced.
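For reference, `std::streambuf` really is built this way: public non-virtual functions such as `sputc()` forward to non-public virtuals such as `overflow()`, which is what implementors and users override. A minimal custom buffer (the counting behavior is just my example):

```cpp
#include <streambuf>

class CountingBuf : public std::streambuf {
public:
    int count = 0;
protected:
    // Called by the public, non-virtual sputc() when the put area is full
    // (always, here, since we set up no buffer).
    int overflow(int ch) override {
        ++count;
        return ch;
    }
};
```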

Still, as Sutter says, it’s so easy to do things the way he recommends that you might as well always do it, given that there can be benefits at times. There’s certainly something to that – I would never dream of having non-private member variables, after all, so even in situations where I really am keeping around data that I allow the user to read and write, I will do that via public member functions. (But I don’t want to stretch that analogy too far – I’m rarely in a situation where a class provides simple functions to read/write a member. Or rather, I’m in that situation more often than I’d like at work, but that’s because we do things wrong at work, not because it’s a good idea.)

For the time being, I’m keeping an open mind on the issue. Even if I were convinced by Sutter’s arguments, I doubt I’d adopt his recommendation immediately, because that would require changing our coding conventions at work, and I don’t think this is an important enough issue to require a change of existing conventions.

On completely different topics:

  • I was playing Paper Mario 2 (about which I will write multiple posts later) over the weekend, when Miranda started humming the main Katamari Damashii theme, for no particular reason. So I spent the rest of the weekend and much of today singing/whistling/humming it myself.
  • I don’t know what triggered it, but I’ve suddenly started getting a new piece of blog spam about every 5 minutes. Sigh. Fortunately, they’re getting intercepted and dumped into the moderation queue, but it’s still a pain.
