Anthropic Thoughts
Wei Dai

2010-04-06: Frequentist Magic vs. Bayesian Magic

<p>This is a belated reply to cousin_it's 2009 post <a href="http://lesswrong.com/lw/147/bayesian_flame/">Bayesian Flame</a>, which claimed that frequentists can give calibrated estimates for unknown parameters without using priors:</p><blockquote><p>And here's an ultra-short example of what frequentists can do: estimate 100 independent unknown parameters from 100 different sample data sets and have 90 of the estimates turn out to be <em>true to fact</em> afterward. Like, fo'real. Always 90% in the long run, truly, irrevocably and forever.</p></blockquote><p>And indeed they can. Here's the simplest example that I can think of that illustrates the spirit of frequentism:</p><p style="padding-left: 30px;">Suppose there is a machine that produces biased coins. You don't know how the machine works, except that each coin it produces is either biased towards heads (in which case each toss of the coin will land heads with probability .9 and tails with probability .1) or towards tails (in which case each toss of the coin will land tails with probability .9 and heads with probability .1). For each coin, you get to observe one toss, and then have to state whether you think it's biased towards heads or tails, and how probable it is that your answer is right.</p><p>Let's say that you decide to follow this rule: after observing heads, always answer "the coin is biased towards heads with probability .9", and after observing tails, always answer "the coin is biased towards tails with probability .9".
Do this for a while, and it will turn out that 90% of the time you are right about which way the coin is biased, no matter how the machine actually works. The machine might always produce coins biased towards heads, or always towards tails, or decide based on the digits of pi, and it wouldn't matter—you'll still be right 90% of the time. (To verify this, notice that in the long run you will answer "heads" for 90% of the coins actually biased towards heads, and "tails" for 90% of the coins actually biased towards tails.) No priors needed! Magic!</p><p>What is going on here? There are a couple of things we could say. One was mentioned by Eliezer in a <a href="http://lesswrong.com/lw/147/bayesian_flame/zen">comment</a>:</p><blockquote><p>It's not perfectly reliable. They assume they have perfect information about experimental setups and likelihood ratios. (Where does this perfect knowledge come from? Can Bayesians get their priors from the same source?)</p></blockquote><p>In this example, the "perfect information about experimental setups and likelihood ratios" is the information that a biased coin will land the way it's biased with probability .9. I think this is a valid criticism, but it's not complete. There are perhaps many situations where we have much better information about experimental setups and likelihood ratios than about the mechanism that determines the unknown parameter we're trying to estimate. This criticism leaves open the question of whether it would make sense to give up Bayesianism for frequentism in those situations.</p><p>The other thing we could say is that while the frequentist in this example appears to be perfectly calibrated, he or she is liable to pay a heavy cost for this in accuracy. For example, suppose the machine is <em>actually</em> set up to always produce coins biased towards heads. 
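As an aside, the 90% calibration claim above is easy to check by simulation. A minimal sketch (the function and machine names here are my own, chosen for illustration, not anything from the original discussion):

```python
import random

def simulate(machine, n=100_000, seed=0):
    """Apply the frequentist rule (guess the direction of the one
    observed toss) and return the fraction of correct bias calls."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        heads_biased = machine(rng)          # True: this coin favors heads
        p_heads = 0.9 if heads_biased else 0.1
        observed_heads = rng.random() < p_heads
        # The rule: answer whichever way the single observed toss landed.
        correct += (observed_heads == heads_biased)
    return correct / n

# Three very different machines; the rule is ~90% right under each.
always_heads = lambda rng: True
fair_mix = lambda rng: rng.random() < 0.5
mostly_tails = lambda rng: rng.random() < 0.1  # heads-biased 10% of the time

for machine in (always_heads, fair_mix, mostly_tails):
    print(round(simulate(machine), 3))
```

Each machine yields roughly 0.9, regardless of how it decides each coin's bias, which is exactly the calibration guarantee described above.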
After observing the coin tosses for a while, a typical intelligent person, just applying common sense, would notice that 90% of the tosses come up heads, and infer that perhaps all the coins are biased towards heads. They would become more certain of this with time, and adjust their answers accordingly. But the frequentist would not (or isn't supposed to) notice this. He or she would answer "the coin is biased towards <em>heads </em>with probability .9" 90% of the time, and "the coin is biased towards <em>tails </em>with probability .9" 10% of the time, and keep doing this, irrevocably and forever.</p><p>The frequentist magic turns out to be weaker than it first appeared. What about the Bayesian solution to this problem? Well, we know that it must involve a prior, so the only question is which one. The <a href="http://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution">maximum entropy prior</a> that is consistent with the information given in the problem statement is to assign each coin an independent probability of .5 of being biased toward heads, and .5 of being biased toward tails. It turns out that a Bayesian using this prior will give the exact same answers as the frequentist, so this is also an example of a "matching prior". (To verify: P(biased heads | observed heads) = P(OH|BH)*P(BH)/P(OH) = .9*.5/.5 = .9)</p><p>But a Bayesian can do much better. A Bayesian can use a <a href="http://www.scholarpedia.org/article/Algorithmic_probability">universal prior</a>. (With a universal prior based on a universal Turing machine, the prior probability that the first 4 coins will be biased "heads, heads, tails, tails" is the probability that the UTM will produce 1100 as the first 4 bits of its output, when given a uniformly random input tape.) 
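The advantage of a prior that can learn the machine's behavior shows up even with something far weaker than a universal prior. Here is a minimal sketch (my own illustration, not from the original post): it puts a uniform grid hyperprior on theta, the machine's rate of producing heads-biased coins, and updates it from each observed toss.

```python
import math
import random

P = 0.9  # a biased coin lands its favored way with probability .9

def log_score_fixed(tosses, truths):
    """Matching prior / frequentist rule: always assign probability .9
    to the direction of the observed toss.  Returns the total log
    (Bayes) score of the probabilities assigned to the true biases."""
    return sum(math.log(P if heads == truly_heads else 1.0 - P)
               for heads, truly_heads in zip(tosses, truths))

def log_score_learning(tosses, truths, grid=101):
    """Uniform hyperprior over theta = P(machine emits a heads-biased
    coin), discretized on a grid; theta is updated from every toss."""
    thetas = [i / (grid - 1) for i in range(grid)]
    w = [1.0 / grid] * grid                 # posterior weights over theta
    score = 0.0
    for heads, truly_heads in zip(tosses, truths):
        # Per-theta probability of this toss: heads w.p. P*t + (1-P)*(1-t)
        like = [P * t + (1 - P) * (1 - t) if heads
                else (1 - P) * t + P * (1 - t) for t in thetas]
        # Predictive P(this coin is heads-biased | its one observed toss)
        if heads:
            num = sum(wi * P * t for wi, t in zip(w, thetas))
        else:
            num = sum(wi * (1 - P) * t for wi, t in zip(w, thetas))
        den = sum(wi * li for wi, li in zip(w, like))
        p_heads_biased = num / den
        score += math.log(p_heads_biased if truly_heads
                          else 1.0 - p_heads_biased)
        # Bayesian update of the hyperprior from the toss itself
        w = [wi * li for wi, li in zip(w, like)]
        z = sum(w)
        w = [wi / z for wi in w]
    return score

# The machine always emits heads-biased coins; observe one toss per coin.
rng = random.Random(0)
truths = [True] * 1000
tosses = [rng.random() < P for _ in truths]
print(log_score_fixed(tosses, truths))     # large negative total
print(log_score_learning(tosses, truths))  # much closer to zero
```

Against an always-heads machine the fixed rule keeps losing about 0.325 nats per coin forever, while the learning rule's per-coin loss shrinks toward zero as the hyperprior concentrates on theta = 1.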
Using such a prior guarantees that no matter how the coin-producing machine works, as long as it doesn't involve some kind of uncomputable physics, in the long run your expected total Bayes score will be no worse than someone who knows exactly how the machine works, except by a constant (that's determined by the algorithmic complexity of the machine). And unless the machine actually settles into deciding the bias of each coin independently with 50/50 probabilities, your expected Bayes score will also be better than the frequentist's (or that of a Bayesian using the matching prior) by an unbounded margin as time goes to infinity.</p><p>I consider this magic also, because I don't <em>really</em> understand why it works. Is the universal prior actually our prior, or just a handy approximation that we can substitute in place of the real prior? Why <em>does</em> the universe that we live in look like a giant computer? What about uncomputable physics? <a href="http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/">Just what are probabilities, anyway?</a> These are some of the questions that I'm still confused about.</p><p>But as long as we're choosing between different magics, why not pick the stronger one?</p>

Wei Dai

2010-01-28: Complexity of Value != Complexity of Outcome

<p>Complexity of value is the thesis that our preferences, the things we care about, <i>don't</i> compress down to one simple rule, or a few simple rules. To review why it's important (by quoting from the <a href="http://wiki.lesswrong.com/wiki/Complexity_of_value">wiki</a>):</p> <ul><li>Caricatures of rationalists often have them moved by artificially simplified values - for example, only caring about personal pleasure.
This becomes a template for arguing against rationality: X is valuable, but rationality says to only care about Y, in which case we could not value X, therefore do not be rational. </li></ul> <ul><li>Underestimating the complexity of value leads to underestimating the difficulty of <a href="http://wiki.lesswrong.com/wiki/Friendly_AI">Friendly AI</a>; and there are notable cognitive biases and fallacies which lead people to underestimate this complexity. </li></ul> <p>I certainly agree with both of these points. But I worry that we might have swung a bit too far in the other direction. No, I don't think that we overestimate the complexity of our values, but rather there's a tendency to assume that complexity of value must lead to complexity of outcome, that is, agents who faithfully inherit the full complexity of human values will necessarily create a future that reflects that complexity. I will argue that it is possible for complex values to lead to simple futures, and explain the relevance of this possibility to the project of <a href="http://wiki.lesswrong.com/wiki/Friendly_AI">Friendly AI</a>.</p><p>The easiest way to make my argument is to start by considering a hypothetical alien with all of the values of a typical human being, but also an extra one. His fondest desire is to fill the universe with orgasmium, which he considers to have orders of magnitude more utility than realizing any of his other goals. As long as his dominant goal remains infeasible, he's largely indistinguishable from a normal human being. But if he happens to pass his values on to a superintelligent AI, the future of the universe will turn out to be rather simple, despite those values being no less complex than any human's.</p> <p>The above possibility is easy to reason about, but perhaps does not appear very relevant to our actual situation.
I think that it <i>may</i> be, and here's why. All of us have many different values that do not reduce to each other, but most of those values do not appear to scale very well with available resources. In other words, among our manifold desires, there may only be a few that are not easily satiated when we have access to the resources of an entire galaxy or universe. If so (and assuming we aren't wiped out by an existential risk first), the future of our universe will be shaped largely by those values that do scale. (I should point out that in this case the universe won't necessarily turn out to be mostly simple. Simple values do not necessarily lead to simple outcomes either.)</p> <p>Now if we were rational agents who had perfect knowledge of our own preferences, then we would already know whether this is the case or not. And if it is, we ought to be able to visualize what the future of the universe will look like, if we had the power to shape it according to our desires. But I find myself <a href="http://lesswrong.com/lw/1ns/value_uncertainty_and_the_singleton_scenario/">uncertain</a> on both questions. Still, I think this possibility is worth investigating further. If it <i>were</i> the case that only a few of our values scale, then we could potentially obtain almost all that we desire by creating a superintelligence with just those values. And perhaps this can be done manually, bypassing an automated preference extraction or <a href="http://singinst.org/upload/CEV.html">extrapolation process</a> with their associated difficulties and dangers. (To head off a potential objection, this does assume that our values interact in an additive way.
If there are values that don't scale but interact multiplicatively with values that do scale, then those would need to be included as well.)</p> <div>Whether or not we actually should take this approach would depend on the outcome of such an investigation. Just how much of our desires can feasibly be obtained this way? And how does the loss of value inherent in this approach compare to the expected loss of value caused by potential errors in the extraction/extrapolation process? These are questions worth trying to answer before we get too far along any particular path, I think.<br /></div>

Wei Dai

2009-02-23: microeconomic crisis

There seem to be a lot of interesting microeconomics issues in the current economic crisis. But there is little discussion of the micro issues, in comparison with the macro ones. For example:<br /><br />- Why did the financial industry screw up so badly? This can be subdivided into product design, compensation design, financial modeling, risk management, regulation, etc., which all failed. A whole set of long-standing institutions all evolved in bad directions within a few years. The macro cause seems to be a big <a href="http://www.fsa.gov.uk/pages/Library/Communication/Speeches/2009/0121_at.shtml">savings glut from Asia and oil exporters</a>. But why weren't our institutions more resilient on a micro level? Is there any way to improve them to be more resilient in the future?<br /><br />- During the boom, investors trusted the financial industry a lot more than they should have, at least in hindsight. Why? Why weren't all those flaws visible?<br /><br />- It seems that some people did notice the flaws, and tried to short the market, but there is so much "dumb money" out there which can easily overwhelm "smart money" on a timescale of years.
Is this a problem for decision markets? Why or why not?

Wei Dai

2008-12-13: expected utility maximization needed to avoid pricing vulnerabilities?

<p>This is a comment on Stephen Omohundro's <a href="http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence" title="The Nature of Self-Improving Artificial Intelligence">“The Nature of Self-Improving Artificial Intelligence”</a>, which I found by way of <a href="http://www.overcomingbias.com/2008/12/two-visions-of/comments/page/2/#comment-142322542" rel="nofollow">http://www.overcomingbias.com/2008/12/two-visions-of/comments/page/2/#comment-142322542</a>. I tried posting this as a comment on Steve's blog, but it seems stuck in moderation.<br /></p> <p>Unfortunately, I think the derivation in chapter 10 (especially section 10.9) of expected utility maximization from the need to avoid pricing vulnerabilities doesn’t work, because there are ways to avoid being Dutch booked other than being an expected utility maximizer. For example, I may prefer a mixture of L1 and L2 to both L1 and L2, and as soon as the alpha-coin is flipped, change my preferences so that I now have the highest preference for either L1 or L2 depending on the outcome of the coin.</p> <p>To give a real-world example, suppose my SO asks me “Do you want chicken or pork for dinner?” and I say “Surprise me.” Then whatever dinner turns out to be is what I want. I don’t go in circles and say “I’d like to exchange that for another surprise, please.”</p> <p>Another way to avoid being Dutch booked is to have an ask/bid spread. Why should it be that for any mixture of L1 and L2, I must have a single price at which I am willing to both buy and sell that mixture?
If there’s a difference between the price that I’m willing to buy at, and the price that I’m willing to sell at, then that leaves me some room to violate expected utility maximization without being exploited.</p> <p>Or I may have a vulnerability, but morality, customs, law, or high transaction costs prevent anyone from making a profit exploiting it.</p> <p>I suppose the first objection is the most serious one (i.e. exploitable circularity can be avoided by changing preferences). The others, while showing that expected utility maximization doesn’t have to be followed exactly, leave open the possibility that it should be approximated.</p>

Wei Dai

2008-07-15: Communicating Qualia

Consider an AI that wants to build a copy of itself, but doesn't have physical access to the hardware that it's currently running on. (It does have remote sensors and effectors.) It has to somehow derive an outside view of itself from the inside view. Assuming that the AI has full access to its own source code and state, this doesn't seem to be a hard problem. The AI can just program a new general purpose computer with its source code, copy its current state into it, and let the new program run.<br /><br />What if a human being wants to attempt the same thing? That seems impossible, since we don't have full introspective access to our "source code" or mental state. But might it be possible to construct another brain that isn't necessarily identical, but just "subjectively indistinguishable"?
To head off further objections, we can define this term operationally as follows: two snapshots of brains are subjectively indistinguishable if each continuation of the snapshots, when given access to both snapshots, cannot determine (with probability better than chance) which snapshot he is the continuation of.<br /><br />Given the above, we can define "to communicate qualia directly" to mean communicating enough of the inside view of a brain to allow someone else to build a subjectively indistinguishable clone of it.

Wei Dai