The Topology of Fear

Saturday, March 9th
1:30 - 3:30PM

Past Event

How do emotions color and shape our actions? How do we decide to take action in the midst of fear for our own lives (go to war, fight an intruder, save a person falling onto subway tracks), or to ward off catastrophes such as global climate change and the irreversible loss of species that could lead to the extinction of our own species? This roundtable will focus on the role of fear in the valuation of life, and on how we respond to rare but extreme risk. Catastrophic circumstances inspire extreme fear, and this alters the cognitive processes and behaviors deemed rational choices under more ordinary circumstances. Studying human responses to fear can broaden our very understanding of what constitutes rational choice. A branch of mathematics called topology holds the trump card: it leads to new ways to understand rationality in a larger context, one that includes the ambiguity and apparent contradictions that emerge in all logical systems – a universal law established by Kurt Gödel in the last century, building on paradoxes analyzed by Bertrand Russell. The roundtable discussion will bring to bear economic, neuroscientific, psychological, and other perspectives on fear, and topology as a geometric tool to help us visualize and update the concept of rationality and make it appropriate for our complex, ever-changing, and challenging world.

Free and open to the public.

Participants:

Graciela Chichilnisky

Professor of Economics, Columbia University

Dr. Graciela Chichilnisky (www.chichilnisky.com) is a professor of Economics and Mathematical Statistics and a University Senator at Columbia University in New York, where she is the Director of the Columbia Consortium for Risk Management (CCRM). A world-renowned economist, she is the creator of the formal theory of Sustainable Development and acted as Lead US Author…

Paul Glimcher

Director of the Center for Neuroeconomics, New York University

Paul Glimcher is the Director of the Center for Neuroeconomics at New York University and the Julius Silver Professor of Neural Science, Economics, and Psychology at NYU’s Center for Neural Science. He is an Investigator for the National Eye Institute, National Institute of Aging, and the U.S. Army Research Laboratory’s Army Research Office. He is…

Linda Keen

Professor of Mathematics, Graduate Center and Lehman College of the City University of New York

Linda Keen is Professor of Mathematics at the Graduate Center and Lehman College of the City University of New York. She is currently Executive Officer of the Doctoral Program in Mathematics. Her research spans various parts of complex analysis including complex dynamics, hyperbolic manifolds and Teichmüller theory. She is the co-author of Hyperbolic Geometry from…

Joseph LeDoux

University Professor & Henry and Lucy Moses Professor of Science, Center for Neural Science and the Department of Psychology, New York University

Joseph LeDoux is the Henry and Lucy Moses Professor of Science at NYU in the Center for Neural Science. He also directs the Emotional Brain Institute at NYU and is a professor in the Department of Psychiatry and Department of Child and Adolescent Psychiatry at NYU Langone Medical Center. His work is focused on the brain…

David Lichtenstein

Co-Founder, Après-Coup Psychoanalytic Association, New York, NY

David Lichtenstein is a co-founder, faculty member, and supervisor at Après-Coup Psychoanalytic Association in New York. He is the Editor of DIVISION/Review: A Quarterly Psychoanalytic Forum, and an Adjunct Professor of Psychology in the CUNY Doctoral Program in Clinical Psychology and at Adelphi University’s Derner Institute. He is the author of numerous articles, including “Born…

7 comments on “The Topology of Fear”

  1. [Paul Glimcher] asked me one question during the roundtable, and I responded, but I was constrained in my response [in the interests of the lay members of our audience].

    [His] questions were: “How different is continuity with the Topology of Fear?” and “Can you show an example of a function that is continuous with the topology of fear but not continuous with the standard topology?”

    I responded that continuity in a standard topology means continuity “on average” – while with the Topology of Fear, continuity is defined with respect to “extremals,” with a total focus on the worst or the best that can happen. These are two totally different concepts of proximity and give rise to different notions of continuity.

    I would like to elaborate on this statement, formally, below – using as an example the function Sup(x). In a nutshell, this is the example he was looking for:

    Example: The Sup function is continuous with respect to the topology of fear (the sup norm) on R, but it is not continuous with respect to the averaging topology (L1 on R). See the end of this message for the complete proof, with a specific example of a sequence where the continuity of Sup in the average topology fails, and why.

    It is best to first give two simple examples – one with measures (namely, continuous linear functions) and another with general functions on the line (linear or not) – of how different the notion of continuity is with standard topologies and with the topology of fear:

    1. Consider the usual continuity concept used for decision theory – for example, the “Monotone Continuity Axiom” (MC) in Arrow’s work (Essays in the Theory of Risk-Bearing) or its equivalent in Probability Theory, “Axiom SP4” (DeGroot’s book, Optimal Statistical Decisions).

    This axiom (in both forms) can be succinctly expressed as: For any vanishing sequence of sets, the limit of the measures of the sets is zero.

    (A family {U_i} is vanishing when U_i is contained in U_{i-1} and the intersection of all the U_i is empty; see Arrow’s definition.)

    Therefore, from what I wrote above, with the standard topology defined by Monotone Continuity, lim m(U_i) = 0 when the family {U_i} is vanishing.
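    In symbols, the axiom just stated reads:

    ```latex
    \text{(MC)}\qquad U_{i+1} \subseteq U_i \ \text{ for all } i
    \ \text{ and } \ \bigcap_{i=1}^{\infty} U_i = \emptyset
    \;\Longrightarrow\; \lim_{i \to \infty} m(U_i) = 0 .
    ```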

    In the topology of fear, however, this is not the case.

    Consider the sequence of open intervals U_n = (n, infinity), where n = 1, 2, …. This is a “nested family,” since U_n is contained in U_{n-1}, and the intersection of the entire family over all n is the empty set. Therefore the family (n, infinity) is a vanishing family.

    Then the standard topology says that lim m(U_n) = 0. This is continuity in the standard topology.

    But with the Topology of Fear this need not be the case. Sometimes the limit of the measures of the vanishing family {U_i} will be zero, and at other times it will not.
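    One standard way to see how the limit can fail to be zero (my illustration here, using a purely finitely additive measure; it is not an example taken from the article): fix a Banach limit LIM over the integers and define a “measure at infinity” by

    ```latex
    \lambda(A) \;=\; \operatorname{LIM}_{k \to \infty} \mathbf{1}_A(k),
    \qquad\text{so that}\qquad
    \lambda\big((n,\infty)\big) = 1 \ \text{ for every } n,
    \quad\text{even though}\quad \bigcap_{n} (n,\infty) = \emptyset .
    ```

    Such a λ is finitely additive but not countably additive, which is exactly how it escapes Monotone Continuity.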

    2. Let me elaborate on this same example using the real-valued function Sup(x) to make it totally clear.

    Consider the sequence of real-valued functions {f_n} defined on the line, where (by definition) f_n(x) = 0 for any real number x smaller than or equal to n, and f_n(x) = 1 when x is in the set U_n = (n, infinity).

    Observe that the functions f_n are integrable with respect to a standard countably additive finite measure on R – for example, a normalized (finite) measure equivalent to Lebesgue measure on R – so that f_n is in the L1 space of R when the measure of R = 1. Indeed, the limit of the integrals of the f_n, as n goes to infinity, is zero.

    Now consider the standard “averaging” topology for functions on R, the L1 norm: namely, two functions are “close” when the integral of their difference is small.

    In the standard L1 topology the limit of this sequence of functions f_n is 0. Another way of saying that is that “on average” the functions f_n go to zero.

    Instead, in the Sup norm (the Topology of Fear), the distance between the function f_n and the function 0 is always 1, for any n. Therefore f_n does NOT converge to 0 in the sup norm.
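    Writing the two norms out makes the contrast explicit (here m is the finite measure on R introduced above):

    ```latex
    \|f\|_{1} = \int_{\mathbb{R}} |f| \, dm, \qquad
    \|f\|_{\infty} = \sup_{x \in \mathbb{R}} |f(x)| ;
    \qquad
    \|f_n\|_{1} = m\big((n,\infty)\big) \longrightarrow 0,
    \quad
    \|f_n\|_{\infty} = 1 \ \text{ for every } n .
    ```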

    Continuity is different in the two norms.

    The functions f_n do not get close to the function zero as n goes to infinity with the sup norm – the Topology of Fear is focused on extremals, and as a result it is much more restrictive in defining “proximity.” As we just showed, continuity – and the notion of closeness – change with the topology.

    The “average” function defined by the standard integral (the map taking f to the integral of f) is continuous with respect to the standard topology.

    But the function “Sup” (the map taking f to Sup f) is not continuous with respect to the average topology.

    The function Sup(f) is continuous with respect to the Sup norm – but it is not continuous with respect to the L1 norm.

    Continuity means something different in the two topologies.

    The values Sup(f_n) do not go to zero when n goes to infinity – even though the functions f_n do converge to zero when n goes to infinity under the standard “average” (L1) topology.
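    A minimal numerical sketch of this example (my own choice of finite measure here: dm = e^{-x} dx on [0, infinity), so the total mass is 1; nothing in the code is specific to the article):

    ```python
    import numpy as np

    # f_n(x) = 1 if x > n else 0, with the finite measure dm = exp(-x) dx,
    # so ||f_n||_1 = m((n, inf)) = exp(-n), while sup|f_n| = 1 for all n.

    xs = np.linspace(0.0, 50.0, 500_001)   # grid on [0, 50]; mass beyond ~ e^-50
    density = np.exp(-xs)                  # density of the measure m

    for n in [1, 5, 10, 20]:
        fn = (xs > n).astype(float)
        l1_dist = np.trapz(fn * density, xs)   # ||f_n - 0||_1, approx exp(-n)
        sup_dist = fn.max()                    # ||f_n - 0||_inf
        print(f"n={n:2d}   L1 distance = {l1_dist:.2e}   sup distance = {sup_dist:.0f}")

    # The L1 distances decay to zero (convergence "on average"), while the
    # sup distance is stuck at 1 for every n: no convergence in the sup
    # norm, i.e., in the Topology of Fear.
    ```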

    1. I guess what I was struggling with was not so much the math as the mapping of the math to – aah – psychology? Neuroscience? Thinking about this purely within the constraints of decision theory, it seems to me that this is clear. (And in some ways related to Sargent’s robustness, although such a different approach!) But when it comes to the broader domains of psychology and neuroscience it’s much more complex, I think. For me as a neurobiologist/economist, what I often find myself doing is trying to render latent variables observable. I often say that rather than writing ‘as if’ theories, I write ‘because’ theories. And what interests me about the topology of fear is its relationship to some fear-like latent variable.

      Now having said that, obviously [Joseph LeDoux] felt otherwise. In the last two years, Joe has been abandoning what [Graciela, Linda, and Paul] would call latent-variable approaches. His recent papers have all shifted away from the idea of fear to a much more behaviorist style of modeling, which eschews concepts that can’t be measured in the external world.

      But what I struggle with, apropos the topology of fear, is how the internal topological mapping might be a dependent variable of a psychological or neural process like fear. What happens – Joe’s critique notwithstanding – to the internal representation of probability (I mean the physical-neural-psychological representation) when a chooser changes state from fearless to fearful? If your axioms are preserved, but the topology of representation is changed by fear, how would I look for that (as a neurobiologist or psychologist)? What are the constraints that I could use to look for this kind of latent variable? Hence my interest in continuity.

      Could continuity – or changes in continuity – be the internal marker that could be used to render this representation observable (as a function of the latent variable fear)?

      1. We agree on the topology – thanks to Linda for pitching in and making things clearer – and what [Paul] now writes goes to the essence of the problem and of the entire event:

        How does the psychology of fear connect with the different topologies and the mathematical axioms?

        How does it connect with Joe LeDoux’s experimental findings? And why should it matter?

        My axioms and the main result in my article “The Topology of Fear” have the response you are seeking. In essence, my axioms and mathematical theorems explain a new form of rationality that is represented by a new criterion of choice – one that we should adopt to understand and utilize Joe’s insights about threats and fear and his experimental findings.

        1. My axiom of “sensitivity to rare events” (which did not exist before and certainly is not among Arrow’s or von Neumann’s axioms) reproduces the fact that one focuses on the threat of a potentially catastrophic event as a priority.

        This is the main difference with established theory – which instead weights an event by its probability and ends up neglecting events that have small probability no matter how catastrophic they may be.

        My axiom agrees with [Joseph LeDoux’s] words at the Helix roundtable – in the case of a threat, one focuses all one’s attention on the threat as a priority.

        I establish a new form of rationality that requires standard events to be treated according to their probabilities – while giving top priority to threats without regard to their probability. Both at the same time.

        This requires a new axiom and a new type of topology that values catastrophic events in an extremal fashion and not through averages – this is what my new axioms for choice under uncertainty predict and what is proven in my “Topology of Fear” article – it is a theorem.

        Regular criteria of rationality, instead, weight all risks by their probabilities – so rare events are ‘denied’ no matter how catastrophic they may be.

        As a consequence of my new axiom, I identify a new criterion for decision making under uncertainty – which means a new type of rationality – that is totally different from, and even inconsistent with, standard criteria of rationality and decision under uncertainty, which disregard risks with low probability.

        What Joseph said is that one focuses on the threat – one gives top priority to the threat – while standard decision theory says that one discounts the threat by its probability – and this is quite different.

        My axioms replicate [Joseph’s] experimental observation, and my topology and characterization of decisions mathematically derive a new type of decision criterion and a new form of rationality that is in agreement with what Joseph says – even though it may be considered irrational by standard (Arrow’s) theory.
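        To make the contrast concrete, here is a toy sketch in code (my own illustration, not the criterion from the article: it simply blends the expected value with the worst case via a weight lam, to show how an extremal term re-ranks a low-probability catastrophe):

        ```python
        # Two actions as lotteries: lists of (probability, payoff) pairs.
        ignore = [(0.999, 10.0), (0.001, -1000.0)]   # rare but catastrophic loss
        hedge  = [(1.0, 5.0)]                        # safe but lower average payoff

        def expected_value(lottery):
            """Standard criterion: weight every outcome by its probability."""
            return sum(p * x for p, x in lottery)

        def extremal_sensitive(lottery, lam=0.5):
            """Toy fear-sensitive criterion (illustration only): a convex mix
            of the expected value and the worst-case payoff, so a catastrophe
            matters no matter how small its probability is."""
            worst = min(x for _, x in lottery)
            return lam * expected_value(lottery) + (1 - lam) * worst

        for name, lottery in [("ignore", ignore), ("hedge", hedge)]:
            print(f"{name}:  EV = {expected_value(lottery):9.3f}   "
                  f"extremal-sensitive = {extremal_sensitive(lottery):9.3f}")

        # EV ranks "ignore" above "hedge" (8.99 vs 5.0): the catastrophe is
        # discounted by its tiny probability.  The extremal-sensitive
        # criterion reverses the ranking (-495.505 vs 5.0): the threat gets
        # priority regardless of its probability.
        ```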

        1. Just a point of clarification on what I think Paul was saying about what I have been saying:

          Paul says I have “shifted away from the idea of fear to a much more behaviorist style of modeling which eschews concepts that can’t be measured in the external world.”

          That is partly correct but partly not. What I am saying is this:

          1. Most of what is called “fear” in animal and human research is really threat detection.

          2. Threat detection leads to measurable responses elicited by observable stimuli.

          3. Feelings like fear occur when we become consciously aware that threat detection has occurred; this in part involves evaluation of: (a) the context in which the stimulus and response are occurring; (b) explicit memories triggered by the stimulus; and (c) aspects of the responses (feedback from the body to the brain; arousal within the brain; other more specific info in the brain that slips into awareness).

          4. Feelings of fear occur in people.

          5. Some kind of feeling may occur in some other animals, but if so, it is likely to be species-specific and very different from what we have in mind when we use one of the 37 words we have in English to describe fear-related states.

          6. I have no problem using latent variables to model this stuff. But it’s important to specify what the latent variable is supposed to be modeling – whether it is capturing the unconscious state that is a direct consequence of threat detection, or whether it is the conscious feeling of fear, which is an indirect and unreliable factor that is in some cases related to the unconscious state, but that in others can be top-down driven and not necessarily related to the more basic threat-processing system. I would say economic “fear” is like the latter.

          I think much of the confusion comes from the idea that there is a single entity in the brain called fear that is called upon for all the many uses we have for the English word ‘fear’.

          1. [Joseph LeDoux’s] clarification is very good, and it came out clearly at the roundtable when [he] explained (I am taking a shortcut) that ‘fear’ is a conscious reflection that comes after the immediate “threat detection.”

            This is why I used the words “reaction to a threat” in my message rather than “fear.”

            My point is that when detecting a threat one gives focused priority to the threat – as you said – the ‘extremal’ reaction that the “topology of fear” is all about – and this is different from what one does when attaching probabilities to possible events and then choosing a response that provides the maximum expected value.

            The latter is “rational” behavior according to the classic theory of decision making and just about everyone who does decision theory – the former is, instead, what my own axioms and the topology of fear anticipate, and it provides a new form of “rationality.” It is also one that agrees with observed experimental behavior – and explains away the ‘irrational’ behavior that behavioral economics is fond of predicting in humans. We may not be that irrational – such responses are not irrational if you look at them with the right framework – these responses are rational behavior that can be expected from a set of reasonable axioms and from logical deductions from those axioms. Just different axioms – but still very reasonable.

            In this manner, we can expand the concept of rationality that has been used until now to include natural reactions to situations where a threat is detected. And this holds whether or not there is the conscious element of fear arising after the detection of the threat.

            I am going to assume that we agree so far (let me know otherwise!) and ask:

            Why does this all matter?

            Why do we need the topology and why do we need the axioms? Who cares?

            After all, [Joseph’s] experimental work already deals with the “detection of a threat” and the subsequent element of ‘fear’ (if it does occur).

            So who cares about the “topology of fear” and about the new definition of ‘rationality’ that both come from my axioms?

            I am going to leave this question open and invite responses.

            The group may decide that we do not care – that it is all nice mathematics but it does not add to true knowledge on the topic.

            I am willing to wade in if needed after people respond – if they do! – but I want to ask the question first because it matters to me. I am very curious indeed to see what the group thinks – and would be grateful for candid responses.

            1. For what it’s worth, the axioms matter a lot to me – as an experimentalist. That may initially seem odd, but I think it reflects my own view of why axioms matter. My view is that what experimentalists often do is to test pretty weak models in pretty silly ways. They say things like: “I have this 2-parameter model and I fit it to the data and I got an R^2 of .3, which is significant in my analysis – so hurrah for me and my model.” (I should add that I have certainly done this.) But what does that mean? Does it mean that the model is ‘true’? I don’t think so. Consider a better case: “I have two different 2-parameter models and my second is better than my first.” In this case at least I learned something: that one of the models is bad (or at least worse). But of course, even that is a pretty limited thing to be able to say.

              For me this is where the axioms are different. Testing axioms allows one to test a whole class of theory at one time. If, to take a famous example, I show that humans violate the independence axiom of von Neumann and Morgenstern, then I know that there is no model with (to speak loosely) a linear representation of probability embedded in it that can account for the observed behavior – and this is true even if there is a positive significant R^2 for some linear probability model. The fact that the axiom is provably false tells you that the R^2 is a spurious correlation – for sure.

              So what interests me most about Graciela’s axioms would be the possibility of testing them experimentally – and testing them against standard continuity/independence axioms. (Although I have to admit that empirical tests of continuity are hard to dream up – I have generally thought of continuity as what is called a ‘technical’ axiom – but Graciela’s paper is making me rethink that assumption).
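              To make the axiom-testing point concrete, here is the classic independence-axiom test sketched in code (the Allais lotteries are a textbook example; the code is my illustration, not anything from Graciela’s paper):

              ```python
              import random

              # The classic Allais lotteries (payoffs in $ millions):
              A = [(1.00, 1.0)]                                # certainty
              B = [(0.89, 1.0), (0.10, 5.0), (0.01, 0.0)]
              C = [(0.11, 1.0), (0.89, 0.0)]
              D = [(0.10, 5.0), (0.90, 0.0)]
              # The modal pattern in experiments: A over B, and D over C.

              def eu(lottery, u):
                  """Expected utility: linear in the probabilities."""
                  return sum(p * u[x] for p, x in lottery)

              # Scan random monotone utility assignments u(0) <= u(1) <= u(5):
              for _ in range(100_000):
                  u0, u1, u5 = sorted(random.random() for _ in range(3))
                  u = {0.0: u0, 1.0: u1, 5.0: u5}
                  # A > B reduces to 0.11*u(1) > 0.10*u(5) + 0.01*u(0);
                  # D > C reduces to the exact opposite inequality, so no u works:
                  assert not (eu(A, u) > eu(B, u) and eu(D, u) > eu(C, u))

              print("no expected-utility function reproduces the modal Allais pattern")
              ```

              One violated axiom rules out the entire linear-in-probability class at once, which is exactly why axiom tests are stronger than fitting any single parametric model.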

    2. Here is another way of saying this – given a metric, or a measure, on a space, one obtains a topology; open sets contain balls of small radius. Often different metrics define the same topology, in the sense that the open sets are the same. For example, consider the Euclidean metric on the unit disk |z| < 1 and the hyperbolic metric on the disk. The density of the latter is ds = 2|dz|/(1 - |z|^2), so points that are close in the Euclidean metric can be very far apart in the hyperbolic metric near the boundary. The open sets, however, are the same on the open disk, since every point of the disk is contained in a closed disk |z| <= r < 1, on which the two metrics are comparable. These metrics are thus equivalent and define the same topology. Thus continuity is the "same" because the topology is the same.
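      In formulas (standard normalization, curvature -1):

      ```latex
      ds_{\mathrm{euc}} = |dz|, \qquad
      ds_{\mathrm{hyp}} = \frac{2\,|dz|}{1-|z|^{2}}, \qquad |z| < 1 ;
      \qquad
      |dz| \;\le\; \frac{2\,|dz|}{1-|z|^{2}} \;\le\; \frac{2\,|dz|}{1-r^{2}}
      \quad \text{on } |z| \le r < 1 ,
      ```

      so the identity map is bi-Lipschitz on every closed sub-disk, and the two metrics generate the same open sets.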

      Graciela is defining two different metrics on her space of lotteries, one using countably additive measures and one using finitely additive measures coming from the dual space of L-infinity. These give different topologies. This is non-trivial, and it comes from the fact that to use these measures to define a metric one needs to use the appropriate "axiom of choice" to construct the open sets. To use finitely additive measures in the dual space, you need a different version of the axiom of choice. I realize that this has to be proved, and I assume that the mathematical proofs are contained in the articles by Arrow that are quoted (I don't know them). Having really different topologies means that continuity is really different in each context. Thus the mathematical subtlety comes in realizing and proving that these are non-equivalent topologies: an open set in one need not contain an open set in the other. Stating things in this more abstract way means that there may be other applications of the fact that metrics arising from the L-infinity space and its dual give different topologies.
