Scott Aaronson has a fantastic new post, Eigenmorality. He draws together concepts that almost never appear side by side (eigenvalues and morality), ties them to Axelrod’s excellent The Evolution of Cooperation, and extrapolates to how to make the world better. Nearly a perfect post. Go ahead and read it. It’s a bit long, but I’ll wait here.
Why only “nearly”? Well, I think he missed the opportunity to tie one more observation to an existing concept. Scott wrestles with and embraces the circular definition used to rank things, e.g., “Something is considered X when a large number of things sharing the property X point/refer back to it.” The gist of the process is that with a little bit of matrix math (eigenvalues), you can figure out how much X everything has. Then he goes out of his way to find examples of how it’s actually feasible to use such a system.
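To make the circular definition concrete, here is a minimal sketch (my own toy example, not from Scott’s post): each thing’s score is proportional to the scores of the things that point at it, and the fixed point of that circular rule is the principal eigenvector of the endorsement matrix, which power iteration finds. The matrix and the uniform starting point are made up for illustration.

```python
def eigen_rank(matrix, iters=100):
    """Principal eigenvector of an endorsement matrix, by power iteration."""
    n = len(matrix)
    scores = [1.0 / n] * n  # the starting point: a uniform "intrinsic bias"
    for _ in range(iters):
        # each score becomes the weighted sum of the scores pointing at it
        new = [sum(matrix[i][j] * scores[j] for j in range(n)) for i in range(n)]
        total = sum(new)
        scores = [s / total for s in new]  # renormalize each round
    return scores

# Toy data: entry [i][j] = 1 means thing j points at thing i.
links = [
    [0, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
]
print(eigen_rank(links))
```

The circularity is resolved exactly as Scott describes: iterate the definition until it agrees with itself, and the answer that remains is the eigenvector.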
If, instead, he had just said “Every system has to start somewhere, and that somewhere is its intrinsic bias, because without bias, you can’t prove anything,” he could have saved quite a few electrons.
Consider what Tom Mitchell writes about bias in Machine Learning. In Chapter 2, he describes what happens when a learner tries not to have a bias of some sort; it turns out to be impossible. “…a learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances.” In this case, the bias is that the model can indeed be defined in terms of itself. Baddabing, baddaboom, done! As Tom Mitchell points out later in the chapter, the important thing is to understand the nature of the inductive bias, not to try to eliminate it.
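Mitchell’s point can be made painfully literal with a toy sketch (my own illustration, not from the book): a perfectly unbiased “rote” learner memorizes the training pairs and has nothing to say about an unseen instance, while even a trivially biased learner, here one that assumes the majority label, can answer anything.

```python
def rote_learner(training):
    """Unbiased: memorize exactly what was seen, say nothing otherwise."""
    table = dict(training)
    return lambda x: table.get(x)  # None for any unseen instance

def majority_learner(training):
    """Biased: assume unseen instances take the most common training label."""
    labels = [y for _, y in training]
    majority = max(set(labels), key=labels.count)
    table = dict(training)
    return lambda x: table.get(x, majority)

data = [("a", 1), ("b", 1), ("c", 0)]
unbiased = rote_learner(data)
biased = majority_learner(data)
print(unbiased("z"))  # None -- no rational basis to classify
print(biased("z"))    # 1 -- the bias supplies the answer
```

The bias is what lets the second learner generalize at all; whether it generalizes well is a separate question, which is exactly why Mitchell says to understand the bias rather than pretend it away.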
If you haven’t read the classic Machine Learning, make sure you do. It is nearly unparalleled as an introduction to ML. This time, I won’t wait for you to finish it, though.