This is really interesting, and there's lots to respond to. But I'll start right at the top. Unless you adhere to a very stringent definition of rationality (maximizing expected value or expected utility), there's nothing irrational about playing the lottery. It just says you like long-odds bets with extreme positive outcomes. Here's my analysis:

On the Optimal Design of Lotteries, https://www.jstor.org/stable/2554972
Thank you for sending your brilliant analysis, which I'm enjoying thinking about. What would you think of black swan weighting as a nickname for rank-dependent expected utility modeling, since the central insight seems to be about overweighting only black swan events instead of all unlikely outcomes? Sumo swans...
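To make that "only the extremes" point concrete for myself, here's a toy sketch (my own construction, not anything from your paper) of rank-dependent decision weights. The weighting function is an assumption: the Tversky-Kahneman (1992) inverse-S form with gamma = 0.61, chosen just for illustration.

```python
# Toy illustration of rank-dependent expected utility (RDEU) decision
# weights. The weighting function is the Tversky-Kahneman (1992)
# inverse-S form with gamma = 0.61 -- an assumed, illustrative choice.

def w(p, gamma=0.61):
    """Inverse-S cumulative probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def decision_weights(probs):
    """RDEU decision weights for outcome probabilities listed best-first.

    pi_i = w(P(rank i or better)) - w(P(rank strictly better than i)),
    so a weight depends on the outcome's rank, not just its probability.
    """
    weights, cum = [], 0.0
    for p in probs:
        weights.append(w(cum + p) - w(cum))
        cum += p
    return weights

# A 1% chance at the TOP of the ranking (a long-odds jackpot)...
jackpot = decision_weights([0.01, 0.99])
# ...versus the same 1% chance buried in the MIDDLE of the ranking.
middling = decision_weights([0.495, 0.01, 0.495])

print(jackpot[0])   # ~0.055: the rank-extreme 1% is overweighted ~5x
print(middling[1])  # <0.01: the middling 1% is slightly underweighted
```

The same 1% probability gets inflated when it sits at an extreme of the ranking and mildly deflated when it sits in the middle, which seems to be exactly the distinction between sumo swans and ordinary unlikely outcomes.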
I'm also mulling over how this connects with Ralph Hertwig and Ido Erev's work on "The description-experience gap in risky choice" (*Trends in Cognitive Sciences*, Vol. 13 No. 12, 2009; https://pubmed.ncbi.nlm.nih.gov/19836292). They say "in decisions from experience, people behave as if the rare events have less impact than they deserve according to their objective probabilities, whereas in decisions from description people behave as if the rare events have more impact than they deserve (consistent with cumulative prospect theory)" (p. 518). They think psychological factors are in play but aren't sure which ones. Is this consistent with your analysis? It seems, rather, that your nuclear power plant example (p. 13) is a case where people would underweight the black swan (skinny swan) based on description, according to EU methods. But maybe it's consistent after all, because you don't actually argue that they do that... your point is that those methods were flawed.
Anyway, I take your main point: People aren't strictly rational actors, there are different ways of being rational, and uncertainty characterizes a lot of real-world decisions such that we can't meaningfully calculate many risks and have to use various heuristics, of which RDEU is one. (If that's right!)
I'm glad you liked the paper. You raise exactly the right point. I've been working on bounded awareness (of which black swans are a key example) for the last 20 years, though with only partial success. The idea that incompletely described black swan events explain the overweighting of extreme outcomes in RDEU is one I've been thinking about all this time, though I've never successfully formalised it.
I'll look at the Hertwig & Erev paper, which I wasn't aware of.
Given your expertise, maybe you know of more research in this area? I've been thinking about how a decision-making interface might collect and offer people tools to evaluate sources' credibility, use heuristics, identify and hack biases, and experiment, to support more informed choices. Pachur et al. suggest a role for the availability heuristic in helping people make good risk assessments when it comes to common risks ("How do people judge risks: availability heuristic, affect heuristic, or both?", J Exp Psychol Appl, 2012 Sep;18(3):314-30, https://pubmed.ncbi.nlm.nih.gov/22564084). This makes me wonder whether there is research on priming black swans to patch the bug in the availability heuristic in such contexts. Or whether there is another established way to help people correct for that sort of blind spot. Or maybe that is the problem you're still working on.
My motivation for starting on this, 30-odd years ago, was to provide a coherent version of the Precautionary Principle. Here's what I came up with in the end:

Grant, S. & Quiggin, J. (2013) "Bounded awareness, heuristics and the Precautionary Principle", Journal of Economic Behavior & Organization, 93, 17-31. https://doi.org/10.1016/j.jebo.2013.07.007
(I can send some more references like this if you are interested)
I have also been doing some work on financial markets with Ani Guerdjikova, but that is going slowly.