This is a comment on Stephen Omohundro's “The Nature of Self-Improving Artificial Intelligence”, which I found by way of http://www.overcomingbias.com/2008/12/two-visions-of/comments/page/2/#comment-142322542. I tried posting this as a comment on Steve's blog, but it seems stuck in moderation.
Unfortunately, I think the derivation in chapter 10 of expected utility maximization from the need to avoid pricing vulnerabilities, especially section 10.9, doesn’t work, because there are ways to avoid being Dutch booked other than being an expected utility maximizer. For example, I may prefer a mixture of L1 and L2 to both L1 and L2, and then, as soon as the alpha-coin is flipped, change my preferences so that I now most prefer either L1 or L2, depending on the outcome of the coin.
To give a real-world example, suppose my SO asks me “Do you want chicken or pork for dinner?” and I say “Surprise me.” Then whatever dinner turns out to be is what I want. I don’t go in circles and say “I’d like to exchange that for another surprise, please.”
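The preference-switching idea can be sketched in code. This is a toy illustration (the class, rankings, and lottery labels are my own, not from the paper): an agent that strictly prefers the 50/50 mixture of L1 and L2 to either pure lottery, but promotes the realized lottery to the top of its ranking once the coin lands, so a would-be Dutch bookie has no dispreferred state to cycle it back through for a fee.

```python
import random

class PreferenceSwitchingAgent:
    """Toy agent: prefers the mixture ex ante, the realized outcome ex post."""

    def __init__(self):
        # Pre-flip ranking: the mixture strictly above both pure lotteries.
        self.ranking = ["mix(L1,L2)", "L1", "L2"]

    def prefers(self, a, b):
        # Earlier in the ranking list means strictly preferred.
        return self.ranking.index(a) < self.ranking.index(b)

    def observe_flip(self, outcome):
        # After the coin lands, the realized lottery moves to the top,
        # so the agent refuses any trade away from it.
        self.ranking = [outcome] + [x for x in self.ranking if x != outcome]

agent = PreferenceSwitchingAgent()
# Before the flip, the bookie can sell the agent the mixture at a premium...
assert agent.prefers("mix(L1,L2)", "L1")
outcome = random.choice(["L1", "L2"])
agent.observe_flip(outcome)
# ...but afterwards cannot sell it another mixture: the cycle never closes.
assert agent.prefers(outcome, "mix(L1,L2)")
```

Each individual trade here looks favorable to the agent by its own current preferences, yet no sequence of trades pumps money out of it, because the preference ordering the bookie would exploit no longer exists after the flip.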
Another way to avoid being Dutch booked is to have a bid/ask spread. Why should it be that for any mixture of L1 and L2, I must have a single price at which I am willing to both buy and sell that mixture? If there’s a difference between the price I’m willing to buy at and the price I’m willing to sell at, then that leaves me some room to violate expected utility maximization without being exploited.
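A minimal sketch of the spread argument (the class and the specific prices are made up for illustration): an agent that quotes a bid below its ask can hold a valuation anywhere inside the spread, including one that violates expected-utility pricing, and any round trip against it still loses the exploiter money.

```python
class SpreadAgent:
    """Toy agent that buys at or below its bid and sells at or above its ask."""

    def __init__(self, bid, ask):
        assert bid <= ask
        self.bid, self.ask = bid, ask

    def will_buy_at(self, price):
        return price <= self.bid

    def will_sell_at(self, price):
        return price >= self.ask

# Suppose the expected-utility price of some mixture would be 0.50;
# the agent's true valuation can sit anywhere in [0.40, 0.55] unpunished.
agent = SpreadAgent(bid=0.40, ask=0.55)

# The best round trip available to an exploiter: sell the lottery to the
# agent at the highest price it accepts (its bid), then buy it back at the
# lowest price it accepts (its ask) -- a guaranteed loss of the spread.
exploiter_profit = agent.bid - agent.ask
assert exploiter_profit < 0
```

The point is that the Dutch book theorem forces a single two-way price; once buying and selling prices are allowed to differ, the no-exploitation condition only pins the agent's valuation down to an interval, not to the expected utility.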
Or I may have a vulnerability, but morality, customs, law, or high transaction costs prevent anyone from making a profit exploiting it.
I suppose the first objection is the most serious one (i.e. exploitable circularity can be avoided by changing preferences). The others, while showing that expected utility maximization doesn’t have to be followed exactly, leave open the possibility that it should still be approximated.