Tuesday, April 6, 2010

Frequentist Magic vs. Bayesian Magic

This is a belated reply to cousin_it's 2009 post Bayesian Flame, which claimed that frequentists can give calibrated estimates for unknown parameters without using priors:

And here's an ultra-short example of what frequentists can do: estimate 100 independent unknown parameters from 100 different sample data sets and have 90 of the estimates turn out to be true to fact afterward. Like, fo'real. Always 90% in the long run, truly, irrevocably and forever.

And indeed they can. Here's the simplest example that I can think of that illustrates the spirit of frequentism:

Suppose there is a machine that produces biased coins. You don't know how the machine works, except that each coin it produces is either biased towards heads (each toss lands heads with probability .9 and tails with probability .1) or biased towards tails (each toss lands tails with probability .9 and heads with probability .1). For each coin, you get to observe one toss, and then have to state whether you think it's biased towards heads or tails, and how likely it is that your answer is correct.

Let's say that you decide to follow this rule: after observing heads, always answer "the coin is biased towards heads with probability .9" and after observing tails, always answer "the coin is biased towards tails with probability .9". Do this for a while, and it will turn out that 90% of the time you are right about which way the coin is biased, no matter how the machine actually works. The machine might always produce coins biased towards heads, or always towards tails, or decide based on the digits of pi, and it wouldn't matter—you'll still be right 90% of the time. (To verify this, notice that in the long run you will answer "heads" for 90% of the coins actually biased towards heads, and "tails" for 90% of the coins actually biased towards tails.) No priors needed! Magic!
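Here is a minimal simulation sketch of this setup (the three example "machines" at the bottom are my own arbitrary choices, just to show that the 90% hit rate doesn't depend on how the machine decides each coin's bias):

```python
import random

def run_trials(n_coins, produce_bias):
    """Follow the frequentist rule: after one toss, guess that the coin is
    biased towards whatever side came up, with stated confidence .9."""
    correct = 0
    for i in range(n_coins):
        bias = produce_bias(i)                    # 'H' or 'T', chosen by the machine
        p_heads = 0.9 if bias == 'H' else 0.1
        toss = 'H' if random.random() < p_heads else 'T'
        guess = toss                              # the frequentist rule
        correct += (guess == bias)
    return correct / n_coins

# Three very different "machines"; the hit rate comes out near 0.9 for all of them.
print(run_trials(100_000, lambda i: 'H'))                  # always heads-biased
print(run_trials(100_000, lambda i: 'T'))                  # always tails-biased
print(run_trials(100_000, lambda i: random.choice('HT')))  # bias chosen 50/50
```

Whatever rule you plug in for produce_bias, the printed hit rates all hover around 0.9.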

What is going on here? There are a couple of things we could say. One was mentioned by Eliezer in a comment:

It's not perfectly reliable. They assume they have perfect information about experimental setups and likelihood ratios. (Where does this perfect knowledge come from? Can Bayesians get their priors from the same source?)

In this example, the "perfect information about experimental setups and likelihood ratios" is the information that a biased coin will land the way it's biased with probability .9. I think this is a valid criticism, but it's not complete. There are perhaps many situations where we have much better information about experimental setups and likelihood ratios than about the mechanism that determines the unknown parameter we're trying to estimate. This criticism leaves open the question of whether it would make sense to give up Bayesianism for frequentism in those situations.

The other thing we could say is that while the frequentist in this example appears to be perfectly calibrated, he or she is liable to pay a heavy cost for this in accuracy. For example, suppose the machine is actually set up to always produce coins biased towards heads. After observing the coin tosses for a while, a typical intelligent person, just applying common sense, would notice that 90% of the tosses come up heads, and infer that perhaps all the coins are biased towards heads. They would become more certain of this with time, and adjust their answers accordingly. But the frequentist would not (or isn't supposed to) notice this. He or she would answer "the coin is biased towards heads with probability .9" 90% of the time, and "the coin is biased towards tails with probability .9" 10% of the time, and keep doing this, irrevocably and forever.

The frequentist magic turns out to be weaker than it first appeared. What about the Bayesian solution to this problem? Well, we know that it must involve a prior, so the only question is which one. The maximum entropy prior that is consistent with the information given in the problem statement is to assign each coin an independent probability of .5 of being biased toward heads, and .5 of being biased toward tails. It turns out that a Bayesian using this prior will give the exact same answers as the frequentist, so this is also an example of a "matching prior". (To verify: P(biased heads | observed heads) = P(OH|BH)*P(BH)/P(OH) = .9*.5/.5 = .9)
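Spelling out the same arithmetic, including the intermediate step P(OH) = .9 × .5 + .1 × .5 = .5 that the calculation above uses implicitly:

```python
# Matching prior: each coin is independently biased towards heads with probability .5.
p_bh = 0.5                # P(biased towards heads)
p_oh_given_bh = 0.9       # P(observe heads | biased towards heads)
p_oh_given_bt = 0.1       # P(observe heads | biased towards tails)

p_oh = p_oh_given_bh * p_bh + p_oh_given_bt * (1 - p_bh)  # P(observe heads) = 0.5
p_bh_given_oh = p_oh_given_bh * p_bh / p_oh               # Bayes' rule: = 0.9

print(p_oh, p_bh_given_oh)  # 0.5 0.9
```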

But a Bayesian can do much better. A Bayesian can use a universal prior. (With a universal prior based on a universal Turing machine, the prior probability that the first 4 coins will be biased "heads, heads, tails, tails" is the probability that the UTM will produce 1100 as the first 4 bits of its output, when given a uniformly random input tape.) Using such a prior guarantees that no matter how the coin-producing machine works, as long as it doesn't involve some kind of uncomputable physics, in the long run your expected total Bayes score will be no worse than that of someone who knows exactly how the machine works, except by a constant (determined by the algorithmic complexity of the machine). And unless the machine actually settles into deciding the bias of each coin independently with 50/50 probabilities, your expected Bayes score will also be better than the frequentist's (or a matching-prior Bayesian's) by a margin that grows without bound as time goes to infinity.
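The universal prior itself can't be run, so here is a sketch of a much weaker stand-in that still captures the flavor of the argument (the hierarchical model below is my own illustrative substitute, not the universal prior): a Bayesian who puts a uniform prior on the unknown fraction of coins the machine makes heads-biased. Against a machine that always produces heads-biased coins, this learner's answers drift toward certainty, and its cumulative log (Bayes) score pulls far ahead of the frequentist's fixed .9/.1 answers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A machine that always produces heads-biased coins (the scenario from the text).
n_coins = 2000
biases = np.ones(n_coins, dtype=bool)                        # True = heads-biased
tosses = rng.random(n_coins) < np.where(biases, 0.9, 0.1)    # True = toss came up heads

# Hierarchical prior: each coin is heads-biased with unknown frequency theta,
# theta ~ uniform on [0, 1].  A grid approximation is enough for illustration.
theta = np.linspace(0.001, 0.999, 999)
log_post = np.zeros_like(theta)   # unnormalized log p(theta | coins seen so far)

freq_score = 0.0    # total log score of the frequentist's stated probabilities
bayes_score = 0.0   # total log score of the hierarchical Bayesian

for i in range(n_coins):
    p_heads_toss = 0.9 * theta + 0.1 * (1 - theta)           # P(toss = H | theta)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    if tosses[i]:
        # P(heads-biased | observed heads, history)
        p_hb = (post * 0.9 * theta).sum() / (post * p_heads_toss).sum()
        freq_p, toss_lik = 0.9, p_heads_toss
    else:
        # P(heads-biased | observed tails, history)
        p_hb = (post * 0.1 * theta).sum() / (post * (1 - p_heads_toss)).sum()
        freq_p, toss_lik = 0.1, 1 - p_heads_toss

    truth = biases[i]                                        # this coin really is heads-biased
    freq_score += np.log(freq_p if truth else 1 - freq_p)
    bayes_score += np.log(p_hb if truth else 1 - p_hb)

    log_post += np.log(toss_lik)                             # update theta with this coin's toss

print(f"frequentist total log score:  {freq_score:.1f}")
print(f"hierarchical total log score: {bayes_score:.1f}")
```

Running this, the hierarchical learner's total log score comes out orders of magnitude less negative than the frequentist's, because the frequentist keeps losing a fixed amount on every coin while the learner converges on how the machine works; that is the unbounded gap described above, in miniature.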

The reason I consider this magic as well is that I don't really understand why it works. Is the universal prior actually our prior, or just a handy approximation that we can substitute in place of the real prior? Why does the universe that we live in look like a giant computer? What about uncomputable physics? Just what are priors, anyway? These are some of the questions that I'm still confused about.

But as long as we're choosing between different magics, why not pick the stronger one?

Thursday, January 28, 2010

Complexity of Value != Complexity of Outcome

Complexity of value is the thesis that our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules. To review why it's important (by quoting from the wiki):

  • Caricatures of rationalists often have them moved by artificially simplified values - for example, only caring about personal pleasure. This becomes a template for arguing against rationality: X is valuable, but rationality says to only care about Y, in which case we could not value X, therefore do not be rational.
  • Underestimating the complexity of value leads to underestimating the difficulty of Friendly AI; and there are notable cognitive biases and fallacies which lead people to underestimate this complexity.

I certainly agree with both of these points. But I worry that we might have swung a bit too far in the other direction. No, I don't think we overestimate the complexity of our values; rather, there's a tendency to assume that complexity of value must lead to complexity of outcome, that is, to assume that agents who faithfully inherit the full complexity of human values will necessarily create a future that reflects that complexity. I will argue that it is possible for complex values to lead to simple futures, and explain the relevance of this possibility to the project of Friendly AI.

The easiest way to make my argument is to start by considering a hypothetical alien with all of the values of a typical human being, but also an extra one. His fondest desire is to fill the universe with orgasmium, which he considers to have orders of magnitude more utility than realizing any of his other goals. As long as his dominant goal remains infeasible, he's largely indistinguishable from a normal human being. But if he happens to pass his values on to a superintelligent AI, the future of the universe will turn out to be rather simple, despite those values being no less complex than any human's.

The above possibility is easy to reason about, but perhaps does not appear very relevant to our actual situation. I think it may be, and here's why. All of us have many different values that do not reduce to each other, but most of those values do not appear to scale very well with available resources. In other words, among our manifold desires, there may be only a few that are not easily satiated when we have access to the resources of an entire galaxy or universe. If so (and assuming we aren't wiped out by an existential risk first), the future of our universe will be shaped largely by those values that do scale. (I should point out that in this case the universe won't necessarily turn out to be mostly simple. Simple values do not necessarily lead to simple outcomes either.)

Now if we were rational agents with perfect knowledge of our own preferences, we would already know whether this is the case. And if it is, we ought to be able to visualize what the future of the universe will look like, if we had the power to shape it according to our desires. But I find myself uncertain on both questions. Still, I think this possibility is worth investigating further. If it turns out that only a few of our values scale, then we could potentially obtain almost all that we desire by creating a superintelligence with just those values. And perhaps this could be done manually, bypassing an automated preference extraction or extrapolation process with its associated difficulties and dangers. (To head off a potential objection, this does assume that our values interact in an additive way. If there are values that don't scale but interact multiplicatively with values that do scale, then those would need to be included as well.)

Whether or not we actually should take this approach would depend on the outcome of such an investigation. Just how much of what we desire can feasibly be obtained this way? And how does the loss of value inherent in this approach compare to the expected loss of value caused by potential errors in the extraction/extrapolation process? These are questions worth trying to answer before we get too far along any particular path, I think.

Monday, February 23, 2009

microeconomic crisis

There seem to be a lot of interesting microeconomic issues in the current economic crisis. But there is little discussion of the micro issues compared with the macro ones. For example:

- Why did the financial industry screw up so badly? This can be subdivided into product design, compensation design, financial modeling, risk management, regulation, etc., which all failed. A whole set of long-standing institutions all evolved in bad directions within a few years. The macro cause seems to be a big savings glut from Asia and oil exporters. But why weren't our institutions more resilient on a micro level? Is there any way to improve them to be more resilient in the future?

- During the boom, investors trusted the financial industry a lot more than they should have, at least in hindsight. Why? Why weren't all those flaws visible?

- It seems that some people did notice the flaws and tried to short the market, but there is so much "dumb money" out there that it can easily overwhelm the "smart money" on a timescale of years. Is this a problem for decision markets? Why or why not?

Saturday, December 13, 2008

expected utility maximization needed to avoid pricing vulnerabilities?

This is a comment on Stephen Omohundro's “The Nature of Self-Improving Artificial Intelligence”, which I found by way of http://www.overcomingbias.com/2008/12/two-visions-of/comments/page/2/#comment-142322542. I tried posting this as a comment on Steve's blog, but it seems stuck in moderation.

Unfortunately, I think the derivation in chapter 10 of expected utility maximization from the need to avoid pricing vulnerabilities (especially section 10.9) doesn't work, because there are ways to avoid being Dutch booked other than being an expected utility maximizer. For example, I may prefer a mixture of L1 and L2 to both L1 and L2, and then, as soon as the alpha-coin is flipped, change my preferences so that I now most prefer whichever of L1 or L2 the coin selected.

To give a real-world example, suppose my SO asks me “Do you want chicken or pork for dinner?” and I say “Surprise me.” Then whatever dinner turns out to be is what I want. I don’t go in circles and say “I’d like to exchange that for another surprise, please.”

Another way to avoid being Dutch booked is to have a bid/ask spread. Why should it be that for any mixture of L1 and L2, I must have a single price at which I am willing to both buy and sell that mixture? If there’s a difference between the price I’m willing to buy at and the price I’m willing to sell at, then that leaves me some room to violate expected utility maximization without being exploited.
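As a stylized illustration of that last point (the dollar figures below are my own, invented purely for this sketch, not taken from Omohundro's paper): with a single price per lottery, an agent who strictly prefers the mixture can be cycled for a sure profit, while a modest gap between its buying and selling prices closes off the pump.

```python
# Stylized money-pump arithmetic.  The agent strictly prefers the 50/50
# mixture M of lotteries L1 and L2 to either lottery alone -- an
# expected-utility violation.

def bookie_profit_per_cycle(buy_price_M, sell_price_L):
    """Bookie's sure profit from one round trip: sell the mixture M to the
    agent at the agent's buying price, then after the alpha-coin is flipped
    buy back whichever of L1/L2 the agent holds at the agent's selling price."""
    return buy_price_M - sell_price_L

# Single-price agent: buys and sells every lottery at one fixed value
# (L1 = L2 = 10, M = 12, so M is strictly preferred).
print(bookie_profit_per_cycle(buy_price_M=12, sell_price_L=10))   # +2 per cycle: Dutch booked

# Same preferences, but with a bid/ask spread: the agent will only buy M
# for 11 or less, and will only part with L1 or L2 for 11 or more.
print(bookie_profit_per_cycle(buy_price_M=11, sell_price_L=11))   # 0: no sure profit
```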

Or I may have a vulnerability, but morality, customs, law, or high transaction costs prevent anyone from making a profit exploiting it.

I suppose the first objection is the most serious one (i.e., exploitable circularity can be avoided by changing preferences). The others, while showing that expected utility maximization doesn’t have to be followed exactly, leave open the possibility that it should be approximated.

Tuesday, July 15, 2008

Communicating Qualia

Consider an AI that wants to build a copy of itself, but doesn't have physical access to the hardware that it's currently running on. (It does have remote sensors and effectors.) It has to somehow derive an outside view of itself from the inside view. Assuming that the AI has full access to its own source code and state, this doesn't seem to be a hard problem. The AI can just program a new general purpose computer with its source code, copy its current state into it, and let the new program run.

What if a human being wants to attempt the same thing? That seems impossible, since we don't have full introspective access to our "source code" or mental state. But might it be possible to construct another brain that isn't necessarily identical, but just "subjectively indistinguishable"? To head off further objections, we can define this term operationally as follows: two snapshots of brains are subjectively indistinguishable if each continuation of the snapshots, when given access to the two snapshots, cannot determine (with probability better than chance) which snapshot he is the continuation of.

Given the above, we can define "to communicate qualia directly" to mean to communicate enough of the inside view of a brain to allow someone else to build a subjectively indistinguishable clone of it.