Hollywood accounting and cost disease in medicine (also Stan Lee)

I always wondered what people meant when they (usually obliquely/snidely) referred to “creative accounting practices”. Today I stumbled upon an example entirely by happenstance, called “Hollywood accounting”, in a comment of jimrandomh’s on his LW container for short-form writing:

The discussion so far on cost disease seems pretty inadequate, and I think a key piece that’s missing is the concept of Hollywood Accounting. Hollywood Accounting is what happens when you have something that’s extremely profitable, but which has an incentive to not be profitable on paper. The traditional example, which inspired the name, is when a movie studio signs a contract with an actor to share a percentage of profits; in that case, the studio will create subsidiaries, pay all the profits to the subsidiaries, and then declare that the studio itself (which signed the profit-sharing agreement) has no profits to give.

In the public contracting sector, you have firms signing cost-plus contracts, which are similar; the contract requires that profits don’t exceed a threshold, so they get converted into payments to de-facto-but-not-de-jure subsidiaries, favors, and other concealed forms. Sometimes this involves large dead-weight losses, but the losses are not the point, and are not the cause of the high price.

In medicine, there are occasionally articles which try to figure out where all the money is going in the US medical system; they tend to look at one piece, conclude that that piece isn’t very profitable so it can’t be responsible, and move on. I suspect this is what’s going on with the cost of clinical trials, for example; they aren’t any more expensive than they used to be, they just get allocated a share of the profits from R&D ventures that’re highly profitable overall.

Jim fleshes out the last remark:

I’m pretty uncertain how the arrangements actually work in practice, but one possible arrangement is: You have two organizations, one of which is a traditional pharmaceutical company with the patent for an untested drug, and one of which is a contract research organization. The pharma company pays the contract research organization to conduct a clinical trial, and reports the amount it paid as the cost of the trial. They have common knowledge of the chance of success, of the probability distribution of future revenue for the drug, of how much it costs to conduct the trial, and of how much it costs to insure away the risks. So the amount the first company pays to the second is the cost of the trial, plus a share of the expected profit.

Pharma companies making above-market returns are subject to political attack from angry patients, but contract research organizations aren’t. So if you control both of these organizations, you would choose to allocate all of the profits to the second organization, so you can defend yourself from claims of gouging by pleading poverty.
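Jim’s hypothetical two-organization arrangement is easy to see with toy numbers. Here’s a minimal sketch; every figure (trial cost, expected revenue, the 90% profit share) is invented purely for illustration:

```python
# Toy model of the hypothetical pharma/CRO arrangement described above.
# All numbers are invented for illustration; nothing here is real data.

trial_cost = 50         # true cost of running the clinical trial, $M
expected_revenue = 400  # expected revenue from the drug, $M

# True economic profit of the combined venture:
true_profit = expected_revenue - trial_cost  # $350M

# The pharma company pays the contract research organization (CRO)
# the trial's true cost plus a share of the expected profit...
profit_share_to_cro = 0.9
payment_to_cro = trial_cost + profit_share_to_cro * true_profit

# ...and books the whole payment as "the cost of the trial".
reported_trial_cost = payment_to_cro
reported_pharma_profit = expected_revenue - reported_trial_cost

print(f"True profit of the combined venture: ${true_profit}M")
print(f"Reported 'cost of the trial':        ${reported_trial_cost}M")
print(f"Pharma company's profit on paper:    ${reported_pharma_profit}M")
```

On paper, the trial “cost” $365M and the pharma company earned a modest $35M on $400M of revenue; the $315M of profit sits with the CRO, where no angry patient thinks to look. An analyst auditing only the pharma company would conclude, exactly as Jim describes, that trials have gotten expensive rather than that profits have been relocated.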

Stan Lee is a particularly high-profile victim of Hollywood accounting. Per the Wiki article:

Stan Lee, co-creator of the character Spider-Man, had a contract awarding him 10% of the net profits of anything based on his characters. The film Spider-Man (2002) made more than $800 million in revenue, but the producers claim that it did not make any profit as defined in Lee’s contract, and Lee received nothing. In 2002 he filed a lawsuit against Marvel Comics. The case was settled in January 2005, with Marvel paying $10 million to “finance past and future payments claimed by Mr. Lee.”

(Wow, that makes me angry. How on earth could you do that to Stan Lee?)

Peter Jackson is another:

Peter Jackson, director of The Lord of the Rings, and his studio Wingnut Films, brought a lawsuit against New Line Cinema after an audit. Jackson stated this is regarding “certain accounting practices”. In response, New Line stated that their rights to a film of The Hobbit were time-limited, and since Jackson would not work with them again until the suit was settled, he would not be asked to direct The Hobbit, as had been anticipated.[18] Fifteen actors are suing New Line Cinema, claiming that they have never received their 5% of revenue from merchandise sold in relation to the movie, which contains their likenesses.[19] Similarly, the Tolkien estate sued New Line, claiming that their contract entitled them to 7.5% of the gross receipts of the $6 billion hit.[20] According to New Line’s accounts, the trilogy made “horrendous losses” and no profit at all.

The Harry Potter franchise is another:

A WB receipt was leaked online, showing that the hugely successful movie Harry Potter and the Order of the Phoenix ended up with a $167 million loss on paper.[23] This is especially egregious given that, without inflation adjustment, the Harry Potter film series is the second highest-grossing film series of all time both domestically and internationally, second only to the Marvel Cinematic Universe. Harry Potter and the Deathly Hallows – Part 2 remains the highest-grossing movie ever for Warner Bros.

This is all baffling to me, even sans ethics — why would you overdo Hollywood accounting on such visible projects? A slightly-less-stupid villain would do less-than-extreme creative accounting on lots of lower-profile revenue-optimized films (“diversifying their portfolio”), and hence be able to fly under the radar longer. Of course there probably are such syndicates, and they are still flying under the radar; we just see the stupid ones – this is embarrassingly obvious in retrospect.

Amusingly, creative accounting is a central plot device in Mel Brooks’ film The Producers, whose Wiki summary sounds amazing.


P vs NP humor

Parody of a typical comp.theory newsgroup discussion of a typical P vs NP proof, from Suresh Venkatasubramanian’s post A meta-proof:

P: I would like to announce my proof of P=/!=NP. The proof is very short and demonstrates how to solve/not solve SAT in polynomial time. You may find a write up of the proof here.

|– V: I started reading your proof and when you claim ‘foobar’ do you mean ‘foobar’ or ‘phishbang’ ?
|—-P: I meant ‘phishbang’. Thanks for pointing that out. An updated version is here.
|——V: Well if you meant ‘phishbang’ then statement “in this step we assume the feefum” is incorrect.
|——–P: No no, you don’t understand. I can assume feefum because my algorithm has a glemish.
|———–V: It has a glemish ? !! But having a glemish doesn’t imply anything. All algorithms have glemishes !!
|—-V’: Yes, and in fact in the 3rd step of argument 4, your glemish contradicts the first propum.
|–V”: I think you need to understand some basic facts about complexity theory before you can go further. Here is a book to read.
|—-P: My proof is quite clear, and I don’t see why I have to explain it to you if you don’t understand. I have spent a long time on this.
|——V’: Um, this is a famous problem, and there are many false proofs, and so you do have to convince us that the argument using glemishes can actually work.
|——–P: But what is wrong in my proof ? I don’t see any problems with it, and if you can’t point one out, how can you say it is wrong.
|———-V””: I don’t have to read the entire proof: glemished algorithms are well known not to work.
|————V”””: Check out this reference to see why.
P: <silence>
|–P: <answering earlier post>. This is what I mean by a glemish. It is really a flemish, not a glemish, which answers your objection.
|—-P’: Keep up the good work P. I tried publishing my result, and these people savaged my proof without even trying to identify a problem. All great mathematical progress has come from amateurs like us. See this link of all the theorems proved by non-experts.
|——V’: Oh jeez, not P’ again. I thought we had established that your proof was wrong.
|——–P’: no you didn’t: in fact I have a new version that explains the proof in such simple language even dumb&%&%s like you can get it.
|——P: Thanks P’, I understand that there will be resistance from the community since I have proved what they thought to be so hard.
|–V’: P, I’m trying to understand your proof, with the flemishes, and it seems that maybe there is a problem in step 517 with the brouhaha technique.
P: <silence>
|—-P: V’, thanks for pointing out that mistake. you are right. Instead of a brouhaha technique I need a slushpit. The details are complicated, so I will fix it and post a corrected version of the proof shortly. Thanks to all those who gave me constructive advice. I am glad that at least some of you have an open mind to accept new ideas.

The Peter Scholze experience

There are lots of rising stars in math. There are a number of superstars too. (Exercise for the interested reader: browse through the laudatios for all the recent high-profile prizes, see what names pop up again and again.)

But every once in a while comes a person to whom others feel called. Think Grothendieck. The word I have in mind is “prophet”, although that seems a bit strong to use for just about anyone other than Schurik (as his close friends called him). This person isn’t a star so much as a phenomenon, an experience.

(Even Terry isn’t “an experience” in the sense I’m gesturing towards, although he’s my favorite contemporary mathematician, given my penchant for generalists and master expositors. It’s rare that you find someone who’s both at the highest levels. I’d call him, not a prophet, but a bridge-builder, a weaver of worlds. But I digress.)

Everything I’ve read about Peter Scholze makes me think he’s a prophet in the style of Schurik. Here are some quotes to that effect.

From redditor Professor_Pohato, whose assessment is wonderfully prosaic:

Maths at [the University of Bonn] is no joke; this faculty is considered one of the finest and hardest in town, which makes it even more impressive that he pulled it off in under two years.

Kudos to Peter and his beautiful hair

From Ken Ribet’s interview on the occasion of his winning the 2017 Brouwer Medal:

Let’s talk about recent developments in the field. Is there something that gets you really excited? If you would have two years to study only something mathematical, what would it be?

Well, it would clearly be perfectoid spaces.

I actually wrote that down as next subject! Two of our Utrecht students attended the Arizona Winter School about that, I guess you were not there?

No, but their inventor Peter Scholze was in Berkeley for a semester fairly recently and he gave a course. I have tried to follow the beginning of the course and I was just completely blown away by (1) the beauty of the subject and (2) the amazing command of an entire landscape that he has. He was lecturing to many people who were experts in different corners, like (φ, Γ)-modules and different ways of expressing Galois representations in somewhat concrete terms, and Scholze understands all of this in his new framework in some illuminating way that completely surprised the original perpetrators of the subject, who were sitting in the audience shaking their heads. …

From Spiegel Online’s interview with Klaus Altmann, Scholze’s former teacher at the Free University of Berlin, whom Scholze generously describes as “an important mentor”:

SPIEGEL ONLINE: Peter Scholze has described you as an important mentor. How did you meet him?

Altmann: Scholze attended a math circle for talented students at his special school in Berlin. The teacher who ran it told me she had a really great student she simply couldn’t keep up with, and who needed further stimulation. So she sent him to me at the FU. Scholze was 16 years old at the time.

SPIEGEL ONLINE: How did you experience him?

Altmann: We talked a bit – and then I gave him a textbook for students and told him to look at the first one or two chapters; he could then ask questions at the next meeting. After two weeks, he came back having read the whole book. He explained to me what was not quite right in the book and what could be done better. At the age of 16 — that was really amazing.

SPIEGEL ONLINE: Did Scholze study with you then?

Altmann: No, that’s not quite the way to put it – we just kept in touch. After our first conversation, I took him to my research seminar. Afterwards I asked him how much he had understood. Not much, he said, because he did not know many of the basic terms used in the seminar lecture. I explained most of those terms in a few minutes. “Now everything is clear,” he said suddenly. He must have stored away the whole 90 minutes before he could understand them; once the gaps were filled, he could retrieve and assemble everything. For the first time, I got an idea of how he processes things. Scholze may have something like a photographic memory for math; he hardly takes notes.

SPIEGEL ONLINE: How did Scholze’s informal studies with you go?

Altmann: He was there almost every week. He attended my algebra seminar, listened to the talks, and gave talks himself to my graduate students and doctoral students. That a 16-year-old was teaching and explaining things to them was not always psychologically easy for the 25-year-old students.

SPIEGEL ONLINE: And how did Scholze deal with this situation?

Altmann: It did not seem to be a special situation for him. Not only is he an exceptional mathematician – he is also a very impressive personality. He explained things in such a way that the 25-year-olds were not piqued but enthusiastic. Scholze had an exact idea of what the others knew and what they didn’t, and of how he had to explain something so they could understand it. That was completely natural for him, and incredibly fascinating to me. I have never experienced such a person.

SPIEGEL ONLINE: Didn’t Scholze eventually get bored with you?

Altmann: I hope not. But he is also far from any arrogance and would never let me feel that. From his perspective, the students and even the lecturers and professors might as well all be laymen, since he is intellectually so far ahead of them. But he does not think that way – he is completely down to earth. The cliché of the unworldly, introverted mathematician may fit some people – for him it is totally wrong.

SPIEGEL ONLINE: What makes Scholze so special as a mathematician?

Altmann: He surveys complicated problems with incredible ease. And he has an incredible mind. There is a standard book on algebraic geometry that I gave Scholze. It’s our Bible, so to speak. Even for many doctoral students it is difficult to work through this text completely. Scholze told me that he read the book during German lessons. And with him, reading means understanding.

SPIEGEL ONLINE: But surely you, as a math professor, can do that too.

Altmann: When I work through an unfamiliar math book, I read a line or two, put the book aside, take a pen, calculate a bit, and see what it means. And after half a day, if things go well, I move on to the third line. The density of information in math texts is very high. Scholze, on the other hand, simply reads it like an entertaining novel. But not superficially – he understands it down to the last detail. One could say mathematics is his second native language: his brain can take in the complicated statements directly, without tedious processing and translation. But that does not mean he has only mathematics in his head. Scholze also graduated from high school, after all.

When Altmann mentions “our Bible” (of algebraic geometry) I think he’s talking about Hartshorne. Interestingly, Hartshorne is doubly-distilled/simplified EGA, at least if you believe Elencwajg on MO[1].

Also funny is Altmann’s argument that Peter “doesn’t just have math in mind – he’s also graduated from high school”. It seems an exceedingly low bar to clear, but then again there are the Ramanujans of the world, so what do I know w.r.t. generational minds like theirs?

From Allyn Jackson’s Fields laudatio for Scholze:

Peter Scholze possesses a type of mathematical talent that emerges only rarely. He has the capacity to absorb and digest the entire frontier of a broad swath of mathematical research ranging over many diverse and often inchoate developments. What is more, he sees how to integrate these developments through stunning new syntheses that bring forth the simplicity that had previously been shrouded. Much of his work is highly abstract and foundational, but it also exhibits a keen sense of exactly which new concepts and techniques will enable proofs of important concrete results.

It was at a 2011 conference that Scholze, then still a doctoral student, first described the concept of perfectoid spaces, thereby setting off a revolution in algebraic and arithmetic geometry. The concept was quickly embraced by researchers the world over as just the right notion to clarify a wide variety of phenomena and shed new light on problems that had evaded solution for decades. …

Scholze is not simply a specialist in p-adic mathematics for a fixed p. For instance, he has recently been developing a sweeping vision of a “universal” cohomology that works over any field and over any space. In the 1960s, Grothendieck described his theory of motives, the goal of which was to build such a universal cohomology theory. While Vladimir Voevodsky (2002 Fields Medalist) made significant advances in developing the theory of motives, for the most part Grothendieck’s vision has gone unfulfilled. Scholze is coming at the problem from the other side, so to speak, by developing an explicit cohomology theory that in all observable ways behaves like a universal cohomology theory. Whether this theory fulfills the motivic vision then becomes a secondary question. Mathematicians the world over are following these developments with great excitement.

The work of Peter Scholze is in one sense radically new, but in another sense represents an enormous expansion, unification, and simplification of ideas that were already in the air. It was as if a room were in semi-darkness, with only certain corners illuminated, when Scholze’s work caused the flip of a light switch, revealing in bright detail the features of the room. The effect was exhilarating if rather disorienting. Once mathematicians had adjusted to the new light, they began applying the perfectoid viewpoint to a host of outstanding problems.

The clarity of Scholze’s lectures and written expositions played a large role in making the prospect of joining the perfectoid adventure appear so attractive to so many mathematicians, as has his personality, universally described as kind and generous.

From ICM 2018’s blog post:

Peter Scholze was a massive hit at this year’s ICM, and was a favorite to win the Fields Medal months before the winners were announced on Wednesday 1st August. His plenary lecture on ‘Period Maps in p-adic Geometry’ was so popular, Riocentro staff had to open additional seating in Pavilion 6 on Saturday morning. Everyone wanted to see the young mathematician in action.

The Jimi Hendrix of mathematics took to the stage to explain his breathtaking and groundbreaking work in the field of geometry using his own, homemade, handwritten slides. …

Scholze’s work has stunned the math community since his early twenties at Bonn University in Germany. “Peter’s work has really completely transformed what can be done, what we have access to,” said collaborator Ana Caraiani.

From Michael Rapoport’s laudatio of Scholze’s Fields medal:

Scholze has proved a whole array of theorems in p-adic geometry. These theorems are not disjoint but, rather, are the outflow of a theoretical edifice that Scholze has created in the last few years. There is no doubt that Scholze’s ideas will keep mathematicians busy for many years to come.

What is remarkable about Scholze’s approach to mathematics is the ultimate simplicity of his ideas. Even though the execution of these ideas demands great technical power (of which Scholze has an extraordinary command), it is still true that the initial key idea and the final result have the appeal of inevitability of the classics, and their elegance. We surely can expect more great things of Scholze in the future, and it will be fascinating to see to what further heights Scholze’s work will take him.

From Quanta in 2016:

At 16, Scholze learned that a decade earlier Andrew Wiles had proved the famous 17th-century problem known as Fermat’s Last Theorem … Scholze was eager to study the proof, but quickly discovered that despite the problem’s simplicity, its solution uses some of the most cutting-edge mathematics around. “I understood nothing, but it was really fascinating,” he said.

So Scholze worked backward, figuring out what he needed to learn to make sense of the proof. “To this day, that’s to a large extent how I learn,” he said. “I never really learned the basic things like linear algebra, actually—I only assimilated it through learning some other stuff.” …

After high school, Scholze continued to pursue this interest in number theory and geometry at the University of Bonn. In his mathematics classes there, he never took notes, recalled Hellmann, who was his classmate. Scholze could understand the course material in real time, Hellmann said. “Not just understand, but really understand on some kind of deep level, so that he also would not forget.” …

Scholze “found precisely the correct and cleanest way to incorporate all the previously done work and find an elegant formulation for that—and then, because he found really the correct framework, go way beyond the known results,” Hellmann said. …

Despite the complexity of perfectoid spaces, Scholze is known for the clarity of his talks and papers. “I don’t really understand anything until Peter explains it to me,” Weinstein said.

Scholze makes a point of trying to explain his ideas at a level that even beginning graduate students can follow, Caraiani said. “There’s this sense of openness and generosity in terms of ideas,” she said. “And he doesn’t just do that with a few senior people, but really, a lot of young people have access to him.” Scholze’s friendly, approachable demeanor makes him an ideal leader in his field, Caraiani said. One time, when she and Scholze were on a difficult hike with a group of mathematicians, “he was the one running around making sure that everyone made it and checking up on everyone,” Caraiani said.

Yet even with the benefit of Scholze’s explanations, perfectoid spaces are hard for other researchers to grasp, Hellmann said. “If you move a little bit away from the path, or the way that he prescribes, then you’re in the middle of the jungle and it’s actually very hard.” But Scholze himself, Hellmann said, “would never lose himself in the jungle, because he’s never trying to fight the jungle. He’s always looking for the overview, for some kind of clear concept.”

Scholze avoids getting tangled in the jungle vines by forcing himself to fly above them: As when he was in college, he prefers to work without writing anything down. That means that he must formulate his ideas in the cleanest way possible, he said. “You have only some kind of limited capacity in your head, so you can’t do too complicated things.” …

Discussing mathematics with Scholze is like consulting a “truth oracle,” according to Weinstein. “If he says, ‘Yes, it is going to work,’ you can be confident of it; if he says no, you should give right up; and if he says he doesn’t know—which does happen—then, well, lucky you, because you’ve got an interesting problem on your hands.”

Yet collaborating with Scholze is not as intense an experience as might be expected, Caraiani said. When she worked with Scholze, there was never a sense of hurry, she said. “It felt like somehow we were always doing things the right way—somehow proving the most general theorem that we could, in the nicest way, doing the right constructions that will illuminate things.” …

Scholze continues to explore perfectoid spaces, but he has also branched out into other areas of mathematics touching on algebraic topology, which uses algebra to study shapes. “Over the course of the last year and a half, Peter has become a complete master of the subject,” Bhatt said. “He changed the way [the experts] think about it.”

It can be scary but also exciting for other mathematicians when Scholze enters their field, Bhatt said. “It means the subject is really going to move fast. I’m ecstatic that he’s working in an area that’s close to mine, so I actually see the frontiers of knowledge moving forward.”

Yet to Scholze, his work thus far is just a warm-up. “I’m still in the phase where I’m trying to learn what’s there, and maybe rephrasing it in my own words,” he said. “I don’t feel like I’ve actually started doing research.”

“I’m just phrasing things in my own words” is exceedingly modest. Rephrasings by the masters can be so illuminating as to alter the research directions of entire subfields. Consider John Milnor, legendarily precocious youngster turned all-time great, probably the most decorated mathematician alive. When he decided to cast his eye on dynamical systems theory in the 1970s, “the Smale program in dynamics had been completed”, per Peter Makienko in his review of Topological Methods in Modern Mathematics. No worries, Milnor said, I’m just trying to teach myself. Makienko wrote:

Milnor’s approach was to start over from the very beginning, looking at the simplest nontrivial families of maps. The first choice, one-dimensional dynamics, became the subject of his joint paper with Thurston. Even the case of a unimodal map, that is, one with a single critical point, turns out to be extremely rich. This work may be compared with Poincaré’s work on circle diffeomorphisms, which 100 years before had inaugurated the qualitative theory of dynamical systems. Milnor’s work has opened several new directions in this field, and has given us many basic concepts, challenging problems and nice theorems.

(He did have the advantage of collaborating with the greatest intuitive geometric thinker in the history of mathematics in Thurston, but the point stands.)

But I digress again. Back to Peter. Last quotes, from Michael Harris’ The perfectoid concept: test case for an absent theory:

It’s not often that contemporary mathematics provides such a clear-cut example of concept formation as the one I am about to present: Peter Scholze’s introduction of the new notion of perfectoid space.

The 23-year-old Scholze first unveiled the concept in the spring of 2011 in a conference talk at the Institute for Advanced Study in Princeton. I know because I was there. This was soon followed by an extended visit to the Institut des Hautes Études Scientifiques (IHES) at Bures-sur-Yvette, outside Paris — I was there too. Scholze’s six-lecture series culminated with a spectacular application of the new method, already announced in Princeton, to an outstanding problem left over from the days when the IHES was the destination of pilgrims come to hear Alexander Grothendieck, and later Pierre Deligne, report on the creation of the new geometries of their day.

Scholze’s exceptionally clear lecture notes were read in mathematics departments around the world within days of his lecture — not passed hand-to-hand as in Grothendieck’s day — and the videos of his talks were immediately made available on the IHES website. Meanwhile, more killer apps followed in rapid succession in a series of papers written by Scholze, sometimes in collaboration with other mathematicians under 30 (or just slightly older), often alone.
By the time he reached the age of 24, high-level conference invitations to talk about the uses of perfectoid spaces (I was at a number of those too) had enshrined Scholze as one of the youngest elder statesmen ever of arithmetic geometry, the branch of mathematics where number theory meets algebraic geometry. Two years later, a week-long meeting in 2014 on Perfectoid Spaces and Their Applications at the Mathematical Sciences Research Institute in Berkeley broke all attendance records for “Hot Topics” conferences. …

Four years after its birth, perfectoid geometry, the theory of perfectoid spaces, is a textbook example of a progressive research program in the Lakatos sense. It is seen, retrospectively, as the right theory toward which several strands of arithmetic geometry were independently striving. It has launched a thousand graduate student seminars (if I were a historian I would tell you exactly how many); the students’ advisors struggle to keep up. It has a characteristic terminology, notation, and style of argument; a growing cohort of (overwhelmingly) young experts, with Scholze and his direct collaborators at the center; a domain of applications whose scope continues to expand to encompass new branches of mathematics; an implicit mandate to unify and simplify the fields in its immediate vicinity.

Last, but certainly not least, there is the generous, smiling figure of Peter Scholze himself, in the numerous online recordings of his lectures or in person, patiently answering every question until his questioner is satisfied, still just 27 years old, an inexhaustible source of revolutionary new ideas. …

I’m not the only person who thinks that the Scholze experience is reminiscent of the Schurik experience 🙂 I also like the imagery of Scholze at the center surrounded by his ever-growing rings of collaborators, subsuming and simplifying and unifying everything they touch (even though the usual parallels in sci-fi are meant to be horrifying).

What does Harris mean by “clear-cut example of concept formation”?

“Category” is the formalized mathematical concept that currently best captures what is understood by the word “concept.” Scholze defined perfectoid spaces as a category of geometric spaces with all the expected trappings, and thus there’s no reason to deny it the status of “concept.” I will fight the temptation to explain in any more detail just why Scholze’s perfectoid concept was seen to be the right one as soon as he explained the proofs in the (symbolically charged) suburban setting of the IHES. But I do want to disabuse the reader of any hope that the revelation was as straightforward as a collective process of feeling the scales fall from our eyes. Scholze’s lectures and expository writing are of a rare clarity, but they can’t conceal the fact that his proofs are extremely subtle and difficult. Perfectoid rings lack familiar finiteness properties — the term of art is that they are not noetherian. This means that the unwary will be systematically led astray by the familiar intuitions of algebra. The most virtuosic pages in Scholze’s papers generally involve finding ways to reduce constructions that appear to be hopelessly infinite to comprehensible (finite type) ring theory.

A few months after his IHES lectures, a French graduate student asked whether I would be willing to be his thesis advisor; things started conventionally enough, but very soon the student in question was bitten by the perfectoid bug and produced a Mémoire M2 — a mainly expository paper equivalent to a minor thesis — that was much too complicated for his helpless advisor. By then Scholze had found two new spectacular applications that the precocious student managed to cram into his Mémoire M2, making it by far the longest Mémoire it has ever been my pleasure to direct. (The student in question — who has since been taken on by a second, more competent advisor — has not yet finished his thesis, but I would already count him as a member of the second perfectoid circle revolving around Scholze. The first circle, as I see it, includes Scholze’s immediate collaborators and a few others; the second circle is already much broader, and there is a third circle consisting of everyone hoping to apply the concept to one thing or another.) …

More of that rings imagery!

I began writing this essay three years after Scholze’s IHES lectures and one month after his ICM lecture in Seoul. One year earlier, I could safely assert that no one had (correctly) made use of the perfectoid concept except in close collaboration with Scholze. The Seoul lecture made it clear that this was already no longer the case. Now, after nearly a year has passed, the perfectoid concept has been assimilated by the international community of arithmetic geometers and a growing group of number theorists, in applications to questions that its creator had never considered. It is an unqualified success.

How can this be explained? Mathematicians in fields different from mine are no better prepared than philosophers or historians to evaluate our standards of significance. One does occasionally hear dark warnings about disciplines dominated by cliques that expand their influence by favorably reviewing one another’s papers, but by and large, when a field as established and prestigious as arithmetic geometry asserts unanimously that a young specialist is the best one to come along in decades, our colleagues in other fields defer to our judgment.

Doubts may linger nonetheless. I don’t think that even a professional historian would see the point in questioning whether Scholze is exceedingly bright, but is his work really that important? How much of the fanfare around Scholze is objectively legitimate, how much an effect of Scholze’s obvious brilliance and unusually appealing personality, and how much just an expression of the wish to have something to celebrate, the “next big thing”? Is a professional historian even allowed to believe that (some) value judgments are objective, that the notion of the right concept is in any way coherent? How can we make sweeping claims on behalf of perfectoid geometry when historical methodology compels us to admit that even complex numbers may someday be seen as a dead end? “Too soon to tell,” as Zhou En-Lai supposedly said when asked his opinion of the French revolution.

It’s possible to talk sensibly about convergence without succumbing to the illusion of inevitability. In addition to the historical background sketched above, and the active search for the right frameworks that many feel Scholze has provided, perfectoid geometry develops themes that were already in the air when Scholze began his career. With respect to the active research programs that provide a field with its contours, it’s understandable that practitioners can come to the conclusion that a new framework provides the clearest and most comprehensive unifying perspective available. When the value judgment is effectively unanimous, as it is in the case of perfectoid geometry, it deserves to be considered as objective as the existence of the field itself.

[1] From What are the required backgrounds of Robin Hartshorne’s Algebraic Geometry book?:

Hartshorne’s book is an edulcorated version of Grothendieck and Dieudonné’s EGA, which changed algebraic geometry forever.
EGA was so notoriously difficult that essentially nobody outside of Grothendieck’s first circle (roughly those who attended his seminars) could (or wanted to) understand it, not even luminaries like Weil or Néron.

Things began to change with the appearance of Mumford’s mimeographed notes in the 1960s, the celebrated Red Book, which allowed the man in the street (well, at least the streets near Harvard) to be introduced to scheme theory.

Then, in 1977, Hartshorne’s revolutionary textbook was published. With it one could really study scheme theory systematically, in a splendid textbook, chock-full of pictures, motivation, exercises and technical tools like sheaves and their cohomology.

However the book remains quite difficult and is not suitable for a first contact with algebraic geometry: its Chapter I is a sort of reminder of the classical vision but you should first acquaint yourself with that material in another book.

There are many such books nowadays but my favourite is probably Basic Algebraic Geometry, volume 1 by Shafarevich, a great Russian geometer. …

The most elementary introduction to algebraic geometry is Miles Reid’s aptly named Undergraduate Algebraic Geometry, of which you can read the first chapter here.

Saharon Shelah, logic juggernaut

There’s a great quote via Hunter Johnson on Quora arguing for the importance of mathematical logic to math:

Shelah is attending a mathematics talk. The presenter has offered, with great difficulty, a new example of some mathematical structure, let’s say a quasi-Hebrand reticular matrixoid. The existence of a new example of this object is significant in the field of quasi-Hebrand reticulation theory (I am making up these names). 

Shelah has come in late and missed most of the talk. When the time for questions comes, he raises his hand and says, “I can give you uncountably many of these objects. Now, tell me, what is a quasi-Hebrand reticular matrixoid?”

Mathematical logic is about the forest rather than the trees. When you look at the structure that different mathematical fields have in common, you see overarching themes that make the theory work.

The funny thing is that it’s entirely believable, because this is Saharon Shelah we’re talking about. I don’t know of anyone else alive more prolific in math than he is – 1,166 papers/preprints/books as of July 2019 with 260 coauthors.

I was pretty happy to see him being signal-boosted today on Quora by Alon Amit. Alon wrote:

He is also recognized as one of the most powerful problem solvers around (I remember this actual phrase being used in a Scientific American article about Van der Waerden’s Theorem). As a result, some mathematicians are slightly relieved that Shelah focuses on infinite combinatorics and model theory and not on their field.

Shelah made tremendous contributions to model theory and set theory, solving a huge number of open problems and establishing major theories and directions for research, most notably PCF theory. In the well-known classification of mathematicians into “theory builders” and “problem solvers”, Shelah is a rare dual citizen.

A “dual citizen” of the highest order! I’ve only heard this explicitly said (in terms of praise) about one other modern-day mathematician, Akshay Venkatesh in his Fields laudatio, but of course that’s just memory failing me.

What is Shelah’s style? Akihiro Kanamori gave a beautiful description back in 1999. Unfortunately it’s just one solid wall of text, too much for my non-Shelahian working memory, so I’ve broken it up:

In set theory Shelah is initially stimulated by specific problems. He typically makes a direct, frontal attack, bringing to bear extraordinary powers of concentration, a remarkable ability for sustained effort, an enormous arsenal of accumulated techniques, and a fine, quick memory.

When he is successful on the larger problems, it is often as if a resilient, broad-based edifice has been erected, the traditional serial constraints loosened in favour of a wide, fluid flow of ideas and the final result almost incidental to the larger structure. What has been achieved is more than just a succinctly stated theorem but rather the establishment of a whole network of robust constructions and arguments. A telling point is that when some local flaw is pointed out to Shelah, he is usually able to come up quickly with another idea for crossing that bridge.

Shelah’s written accounts have acquired a certain notoriety that in large part has to do with his insistence that his edifices be regarded as autonomous mental constructions. Their life is to be captured in the most general forms, and this entails the introduction of many parameters. Often, the network of arguments is articulated by complicated combinatorial principles and transient hypotheses, and the forward directions of the flow are rendered as elaborate transfinite inductions carrying along many side conditions. The ostensible goal of the construction, the succinctly stated result that is to encapsulate it in memory, is often lost in a swirl of conclusions. This can make for difficult and frustrating reading, with the usual problem of presenting a mathematical argument in linear form exacerbated by the emphasis on the primacy of the construction itself and its overarching generality.

Further difficulties ensue from the nature of the enterprise. Shelah regards the written word as necessary and central for capturing and fixing a construction, and so for him getting everything down on paper is of crucial importance. The tensions among the robustness of the construction, the variability of its possible renditions, and the need to convey it all in print are inevitably complicated by the speed with which he is able to establish new results. The papers have to be written quickly, previous constructions are newly refreshed and modified, and so a labyrinthian network may result over a series of related papers.

In mathematics one often aspires to the most elegant or definitive treatment; in contrast, Shelah’s work features a continuing, dynamic self-dialogue, one that pushes to the limits of exposition. Many may consider Shelah’s work to be “technical”, but as T S Eliot has written “We cannot say at what point ‘technique’ begins or where it ends” [‘The Sacred Wood’]. While there is a particular drive to solve specific problems, Shelah with his generalizing approach is able to draw out larger, recurring patterns that lead to new techniques that soon get elevated to methods.

What’s an example of this approach?

One primary instance is the whole complex of approaches and results he developed under the general rubric of proper forcing. Shelah started out in model theory, developing an abstract classification theory for models which is a continuing research program for him and model theorists to this day. In the mid-1970’s, in his first major body of results in set theory, Shelah resolved a long-standing problem in abelian group theory, Whitehead’s problem, by establishing both the consistency and the independence of the corresponding proposition. It is through these beginnings, motivated by the set-theoretic problems that arose, that Shelah started to develop a general theory of iterated forcing for the continuum.

But what does Shelah “do all day at the office”? In his own words:

Since I have succeeded in demonstrating a substantial number of theorems, I have also a lot of work completing and correcting the demos. As I write, I have a secretary typing (I did have a lot of troubles concerning this) and I have to proof-read a lot. I write and make corrections, send to the typist, get it back and revise it again and again.

A great amount of time is used to verify what I wrote. If it is inaccurate or utterly wrong, I ask myself what went wrong. I tell myself: there must be a hole somewhere, so I try to fill it. Or perhaps there is a wrong way of looking at things or a mistake of understanding. Therefore one must correct or change or even throw everything and start all over again, or leave the whole matter. Many times what I wrote first was right, but the following steps were not, therefore one should check everything cautiously. Sometimes, what seems to be a tiny inaccuracy leads to the conclusion that the method is inadequate.

I have a primeval picture of my goal. Let us assume that I have heard of a problem and it seems akin to problems that I know how to resolve, provided we change some elements. It often happens that, having thought of a problem without solving it, I get a new idea. But if you only think or even talk but do not sit down and write, you do not see all the defects in your original idea.

Writing does not provide a 100% assurance, but it forces you to be precise. I write something and then I get stuck and I ask myself perhaps it might work in another direction? As if you were pulling the blanket to one part and then another part is exposed. You should see that all parts are integrated into some kind of completeness.

Indeed, sometimes you are happy in a moment of discovery. But then you find out, while checking up, that you were wrong. There has been a joy of discovery, but that is not enough, for you should write and check all the details. My office is full of drafts, which turned out to be nonsense.

Strong words, but it does explain Akihiro Kanamori’s remark that “Shelah regards the written word as necessary and central for capturing and fixing a construction, and so for him getting everything down on paper is of crucial importance”.

Terry Tao: can an approach used to prove almost all cases be extended to prove all cases?

Recently Terry Tao posted to the arXiv his paper Almost all Collatz orbits attain almost bounded values, which caused quite the stir on social media. For instance, this Reddit post about it is only a day old and already has nearly a thousand upvotes; Twitter is abuzz with tweets like Tim Gowers’:

(this sentiment seems off coming from the editor of the Princeton Companion to Mathematics, a T-shaped mathematician with both bar and stem thick, not to mention a fellow Fields medalist)

The first comment on his post, by goingtoinfinity, voices the unasked question everyone’s wondering:

What is the relation between results for “almost all” cases vs. subsequent proofs of the full result, from historic examples? Are there good examples where the former influenced the development of the latter? Or is it more common that proving the full result proceeds in an entirely different way?

As an example, Faltings’s proof that there are only finitely many solutions to Fermat’s Last Theorem — did his techniques influence and appear in Wiles’s/Taylor’s final proof?

Terry’s response is the raison d’être of this post. It also features really long paragraphs, too long for my poor working memory, so I’ve broken it up for personal edification:

One can broadly divide arguments involving some parameter (with a non-empty range) into three types: “worst case analysis”, which establish some claim for all choices of parameters; “average case analysis”, which establish some claim for almost all choices of parameters; and “best case analysis”, which establish some claim for at least one choice of parameters.

(One can also introduce an often useful variant of the average case analysis by working with “a positive fraction” of choices rather than “almost all”, but let us ignore this variant for sake of this discussion.)

There are obvious implications between the three: worst case analysis results imply average case analysis results (these are often referred to as “deterministic arguments”), and average case analysis results imply best case analysis results (the “probabilistic method”). In the contrapositive, if a claim fails in the average case, then it will also fail in the worst case; and if it fails even in the best case, then it fails in the average case.

However, besides these obvious implications, one generally sees quite different methods used in the three different types of results. In particular, average case analysis (such as the arguments discussed in this blog post) gets to exploit methods from probability (and related areas such as measure theory and ergodic theory); best case analysis relies a lot on explicit constructions to design the most favorable parameters for the problem; but worst case analysis is largely excluded from using any of these methods, except when there is some “invariance”, “dispersion”, “unique ergodicity”, “averaging” or “mixing” property that allows one to derive worst-case results from average-case results by showing that every worst-case counterexample must generate enough siblings that they begin to be detected by the average-case analysis.

For instance, one can derive Vinogradov’s theorem (all large odd numbers are a sum of three primes) from a (suitably quantitative) almost all version of the even Goldbach conjecture (almost all even numbers are the sum of two primes), basically because a single counterexample to the former implies a lot of counterexamples to the latter (see Chapter 19.4 of Iwaniec-Kowalski for details).

At a more trivial (but still widely used) level, if there is so much invariance with respect to a parameter that the truth value of a given property does not actually depend on the choice of parameter, then the worst, average, and best case results are equivalent, so one can reduce the worst case to the best case (such arguments are generally described as “without loss of generality” reductions).

However, in the absence of such mixing properties, one usually cannot rigorously convert positive average case results to positive worst case results, and when the worst case result is eventually proved, it is often by a quite different set of techniques (as was done for instance with FLT). So it is often better to think of these different types of analysis as living in parallel, but somewhat disjoint, “worlds”.

(In additive combinatorics, there is a similar distinction made between the “100% world”, “99% world”, and “1% world”, corresponding roughly to worst case analysis and the two variants of average case analysis respectively, although in this setting there are some important non-trivial connections between these worlds.)

In the specific case of the Collatz conjecture, the only obvious invariance property is that coming from the Collatz map itself (N obeys the Collatz conjecture, i.e. Col_min(N) = 1, if and only if Col(N) does), but this seems too weak of an invariance to hope to obtain worst case results from average case ones (unless the average case results were really, really, strong).
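Tao’s Col and Col_min notation is easy to make concrete. Here is a minimal Python sketch — the function names and the iteration cap are my own choices, not notation from the paper:

```python
def col(n: int) -> int:
    """One step of the Collatz map: halve if even, else 3n + 1."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def col_min(n: int, max_steps: int = 10**6) -> int:
    """Minimal value attained along the Collatz orbit of n.

    The conjecture asserts col_min(n) == 1 for every n >= 1; we cap the
    iteration because non-termination is exactly what is unproven.
    """
    smallest = n
    for _ in range(max_steps):
        if n == 1:
            return 1
        n = col(n)
        smallest = min(smallest, n)
    return smallest
```

The invariance Tao mentions is visible here: the orbit of N after one step is just the orbit of Col(N), so col_min(N) == 1 exactly when col_min(Col(N)) == 1.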

Appleseed by John Clute

Rob Nostalgebraist’s Goodreads reviews are fun to read, so I read nearly all of them some time ago. That’s how I came across John Clute’s novel Appleseed.

Before talking about Appleseed, a bit of setup. Nostalgebraist is very bright. In particular, his very-bright-ness is of the “see through complexity” variety. One salient way he made an impression on me was when he saw through Karl Friston’s “free energy” BS when everyone else was puzzled. (I have yet to see a counterargument to rebut that post. Instead there’s been support by e.g. jadagul, the other of the two math tumblr users I respect above all else.) And by everyone else, I mean people like Scott Alexander, who in turn (in his post God help us, let’s try to understand Friston on free energy) quoted the following memorable passage from the journal Neuropsychoanalysis:

At Columbia’s psychiatry department, I recently led a journal club for 15 PET and fMRI researchers, PhDs and MDs all, with well over $10 million in NIH grants between us, and we tried to understand Friston’s 2010 Nature Reviews Neuroscience paper – for an hour and a half.

There was a lot of mathematical knowledge in the room: three statisticians, two physicists, a physical chemist, a nuclear physicist, and a large group of neuroimagers – but apparently we didn’t have what it took.

I met with a Princeton physicist, a Stanford neurophysiologist, and a Cold Spring Harbor neurobiologist to discuss the paper. Again blanks, one and all.

(If you’ve been following math, this is very reminiscent of the drama surrounding Mochizuki’s claimed proof of the abc conjecture. In particular I’m thinking of Stanford arithmetic geometer Brian Conrad’s Notes on the Oxford IUT workshop, albeit at reduced scale. In that case, the best specialists in the world convened for a full week, and at the end of it there was still a haze of incomprehension clouding everyone’s vision.)

So: nostalgebraist is “see through complexity” sharp. Given that, I was very intrigued to see his review of Appleseed begin like so:

When I finished this book I was too dazed and worn out to give it anything like the kind of review it deserved. I ended up just resorting to the worst reviewer’s cliche in the book — “what was this guy on?”

I still don’t feel like writing a real review, but in lieu of that I can at least throw some quotes at you. Quotes are especially informative here because what distinguishes this book from all the other science fiction I’ve read isn’t plot or characterization or worldbuilding — all pretty good, mind you — but its use of language to disorient and dazzle the reader.

I’ve always been confused by the preference many science fiction writers and fans have for plain, unadorned language — isn’t science fiction all about going to new and strange places where people may think, and thus talk, differently? The future will break the world apart into categories along different lines than the present, and those categories will be embodied in words.

Well, Appleseed’s style is anything but plain, and it is one of the only works of science fiction I’ve read that really sounds like the future.

First of all, this is precisely what’s nagged me about most sci-fi depictions of the future: they always sound so anachronistically contemporary, which sets off my implausibility detectors like a goddamn klaxon. Second of all, if even nostalgebraist found it challenging, what the hell chance do I have? But I was already riveted.

Nostalgebraist then gives several quotes from the book, and says:

What you think of these passages is a pretty reliable determinant of what you will think of the whole book.

If you are the sort of SF fan who won’t be able to enjoy the book unless you can determine precisely what each of Clute’s funny words means and what basis it has in real science, then you won’t like Appleseed.

On the other hand, if these quotes make you hungry for more psychedelic future-speak, Clute is your man.

The first quoted passage from Appleseed is this:

A timorous sibling tched softly within striking distance of the breakfast head of the Harpe in command of the great ark in orbit around Trencher with its stuffing of deep-sleeps snoring through their brainchip tasks. The sibling masticated with tiny nibbles the real-paper printouts in its glutinous ticklers, which it extended, perhaps hoping to donate an extensor limb. The commanding officer — a grown sibling of Opsophagos — took the printout in the mouth of its slack-eyed famished breakfast head, read the co-ordinates displayed, pulled down a three-horned screen and punched out the designated location. Chip-sluggish, the screen cleared, in time to reveal Number One Son wobble bare-arsed into the homo sapiens braid. Controlling their aversion to sigilla, the commanding officer began to jubilate.

They almost ate himself alive with joy.

Clute, it turns out, is definitely my man.

Unfortunately the book is old and wasn’t very popular when it came out, so I wasn’t able to get a copy of it for the longest time.

Until today! I discovered a copy on the Internet Archive available for free 15-day borrowing. So I’ve just started reading it. Here are some of the more memorable quotes from the beginning of the book.

Appleseed is, above all, linguistic performance art. There’s not much of a story, but the way it’s told — hoo boy.

The ship Tile Dance docking in an old ecumenopolis, or world-city, called Trencher:

One thing I realized was how biological all the metaphors were. I’m used to reading SF by math/physics/CS types: Egan, Rajaniemi, Stross, Rosenbaum, Rucker etc. Even Peter Watts’s work (which I love) doesn’t feel so wet, so sensual. This was a striking change.

The ostensible antagonists:

“Flesh is grass” is a recurring theme. Here’s another quote:

Local news as “data perfume”:

Number One Son:

I like this passage. Sometimes you just wanna get the job done and “go home”, metaphorically construed:

This too — a depiction of the dermis-level innards of a mature world-city, holographically viewed:

More Trencher:

The “data smells” metaphor is starting to grow on me:

The sole human aboard the cargo ship Tile Dance, Freer, is naked. (Of course he is. So would I be. I am in fact, right now, and I’m not even aboard a spaceship.) This is a description of how he decides to get clothed:

Tile Dance is quite the spaceship. Here Freer is exiting what we’d call the “control room” (here Glass Island) and disembarking from the ship:

Glass Island itself:

Elsewhere, reentering Tile Dance:

The ship is pretty much alive:

A description of a powerful AI, or “Made Mind”, being awakened from slumber:

More Made Mind, and the allure of mortality (which doesn’t make much sense to me, a mortal):

One thing that always comes up is “toons”, which deliver stuff like spammy ads:

The fate of Trencher’s humans seems pretty bleak:

This scene reminded me of the imagery during the speeder chase on, or above, Coruscant:

Humans out for a stroll in Trencher:

What happens if you walk around Trencher without a spam filter?

Social mores are different. Here Freer watches the start of a play:

Elsewhere, background commentary:

Clute’s rendition of momentary loss of augmentation:

Clute’s rendition of bodily speedup — I’m a great fan of depictions of this, my favorite being Rajaniemi’s in his Quantum Thief trilogy:

Data plaque, one of the book’s major plot points:

Talking to “a flesh non-homo-sapiens bipedal” embodying what’s basically the GPS for this particular cargo shipment (so not quite, but pretty much, an alien):


The dialogue is… difficult.

Sometimes I wonder what paragraphs like these mean:

All this, by the way, is in the first quarter of the book.

Wendigoes in the wild

Comment from gtsteel on the Haskell subreddit MIRI’s newest recruit: Edward Kmett:

Have you ever encountered any of the work of Basil Johnston? He made the point that a lot of institutions, through the restrictions they place on those in them, form structures that function as optimization algorithms independent of the motivations of their members, and that there are already wendigoes (paperclip-maximizers) in the wild, just not fully transistorized at the moment.

One such example is the board of directors of a public corporation which is legally required to maximize profit for shareholders and replaces members who do not serve that purpose, functioning as a sort of genetic algorithm.

Under this kind of analysis, a lot of current issues are symptoms of the world getting slowly paperclipped, and any kind of development that can speed these systems up (which can include a lot of non-general AI components, since hardware replacement is not all-or-nothing) is a threat to humanity until we can contain them and shut them down.
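gtsteel’s “sort of genetic algorithm” can be made concrete with a toy simulation. Everything below — the nine-member board, the uniform “profit orientation” scores, the replacement rule — is invented purely for illustration:

```python
import random

def board_selection(generations: int = 200, seed: int = 0) -> float:
    """Toy model: a 'board' of profit-orientation scores in [0, 1].

    Each generation, the least profit-oriented member is replaced by a
    random newcomer. No member wants anything; the replacement rule
    alone drags the population toward profit-maximization.
    """
    rng = random.Random(seed)
    board = [rng.random() for _ in range(9)]
    for _ in range(generations):
        board.remove(min(board))    # oust the least profit-oriented member
        board.append(rng.random())  # seat a random replacement
    return sum(board) / len(board)  # average orientation drifts upward
```

After a couple hundred generations the board’s average orientation sits far above the 0.5 you’d expect from random seating — selection does the optimizing even though no individual does.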

Moloch is of course obviously related, and Moloch has been at the back of my mind ever since Scott wrote about it in 2014, so I was extremely intrigued to read more by Johnston. Unfortunately he doesn’t seem to be who I expected him to be; I’m guessing gtsteel was just great at translating the essence of Johnston’s work into familiar jargon.

It also turns out I’ve been wrong about corporations being legally required to maximize profits – they aren’t, at least according to Cornell professor Lynn Stout, who literally wrote a book titled The Shareholder Value Myth: How Putting Shareholders First Harms Investors, Corporations, and the Public. In this post, Stout wrote:

There is a common belief that corporate directors have a legal duty to maximize corporate profits and “shareholder value” — even if this means skirting ethical rules, damaging the environment or harming employees. But this belief is utterly false. To quote the U.S. Supreme Court opinion in the recent Hobby Lobby case: “Modern corporate law does not require for-profit corporations to pursue profit at the expense of everything else, and many do not.”

The Hobby Lobby case dealt with a closely held company with controlling shareholders, but the Court’s statement on corporate purpose was not limited to such companies. State codes (including that of Delaware, the preeminent state for corporate law) similarly allow corporations to be formed for “any lawful business or purpose,” and the corporate charters of big public firms typically also define company purpose in these broad terms. And corporate case law describes directors as fiduciaries who owe duties not only to shareholders but also to the corporate entity itself, and instructs directors to use their powers in “the best interests of the company.”

Serving shareholders’ “best interests” is not the same thing as either maximizing profits, or maximizing shareholder value. “Shareholder value,” for one thing, is a vague objective: No single “shareholder value” can exist, because different shareholders have different values. Some are long-term investors planning to hold stock for years or decades; others are short-term speculators.

Also, most investors care not only about their portfolios, but also about their jobs, their tax burdens, the products they buy and the air they breathe. Which is to say, companies that maximize profits by firing employees, avoiding taxes, selling shoddy products or polluting the environment can harm their shareholders more than helping them.

More to the point, corporate directors are protected from most interference when it comes to running their business by a doctrine known as the business judgment rule. It says, in brief, that so long as a board of directors is not tainted by personal conflicts of interest and makes a reasonable effort to stay informed, courts will not second-guess the board’s decisions about what is best for the company — even when those decisions predictably reduce profits or share price.

Outside the rare case of a public company that decides to sell itself to a private bidder, the business judgment rule gives directors nearly absolute protection from judicial second-guessing about how to best serve the company and its shareholders.

So, where did the mistaken idea that directors must maximize shareholder value come from? The notion is especially popular among economists unburdened by knowledge of corporate law. But it has also been embraced by increasingly powerful activist hedge funds that profit from harassing boards into adopting strategies that raise share price in the short term, and by corporate executives driven by “pay for performance” schemes that tie their compensation to each year’s shareholder returns.

In other words, it is activist hedge funds and modern executive compensation practices — not corporate law — that drive so many of today’s public companies to myopically focus on short-term earnings; cut back on investment and innovation; mistreat their employees, customers and communities; and indulge in reckless, irresponsible and environmentally destructive behaviors.

Ah, okay. Articulately argued, but in the end it doesn’t assuage me at all: most corporations today still do maximize profit (which is what I care about). At least now I know who to blame, er, I mean, what actually needs to change.

Redditor sclv, from whose comment I got the link to Stout’s opinion piece above, ends like so:

However, it is nonetheless usually considered their “job” to do this, and the market itself incentivizes (even mandates) such behavior, and the myth of a legal requirement is a very nice way to pawn off responsibility for, say, mass layoffs or use of harmful chemicals or any other behavior that meets social approbation.

Extremely expensive NYC properties

I know nothing about luxury property, save for the fading recollections of my parents’ Trends magazine subscriptions, which I religiously leafed through cover-to-cover as a child. Since I’m always looking for ways to reconnect with my childhood, it was exciting to stumble upon a Zillow link, shared by David S. Rose on Quora, to luxury apartments in New York City. Naturally I had to sort by highest price first and see what the homes were like.

The most expensive for-sale apartment, if you can call it that, is a five-story 19,800 sq ft penthouse listed for $98 million. It has 11 bedrooms and 14 full bathrooms. It… doesn’t have heating? And doesn’t have cooling either? That’s confusing. Also confusingly, management doesn’t allow pets, but (per the property agent)

Winter will never feel the same for your cats and dogs with the on-site 172 Pet Spa.

so what gives? At least there’s a 67-foot saltwater lap pool. The monthly cost is estimated at $515,000, of which $380,000 is principal and interest alone; the property taxes on this penthouse exceed $70,000 per month. Zillow’s rental value estimate is a cool $250,000 per month, which (again confusingly) seems far cheaper than buying, so why would you not rent?
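That $380,000 principal-and-interest figure is roughly consistent with a standard 30-year amortized loan. Here’s a back-of-the-envelope check, where the 20% down payment and 4% annual rate are my guesses, not Zillow’s actual inputs:

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard amortization formula: P * r * (1+r)**n / ((1+r)**n - 1)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# $98M purchase with an assumed 20% down payment -> $78.4M financed
payment = monthly_payment(98_000_000 * 0.8, 0.04)  # comes out near $374,000/month
```

That lands within about 2% of Zillow’s $380,000, so the number is plausibly just a conventional 30-year mortgage scaled up to penthouse size.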

What do you get for $98 million? This:

(I always did like leafing through the pages of my parents’ Trends magazine. Looking at these pictures is like taking a stroll down memory lane, even though I never have, and probably never will, set foot in any of these homes.)

I was a bit surprised by how much smaller the next condo is, considering its $63 million price tag: all you get are 5 bedrooms and 7.5 bathrooms totaling 7,000 sq ft. I was confused until this sentence in the listing’s description clued me in:

Located on Manhattan’s Billionaire’s Row and steps from Central Park, this new architectural landmark will rise 1,550 feet above New York City, establishing it as the tallest residential building in the world.

Within the tallest residential tower ever built will be the world’s most elevated private club; Central Park Club, an exclusive offering of 50,000 square feet of curated luxury services and amenities. Spread across three floors in the tower, each location provides a unique experience complemented by five-star service.

The defining feature of Central Park Club is the 100th floor – located over 1,000 feet above Manhattan. Residents will enjoy the highest private ballroom in the world, complete with a private dining room, wine bar and cigar lounge.

Gotcha. Central Park Tower itself is located along Billionaire’s Row.

The building itself looks incongruous:

It’s nice inside:

(The model feels very unnecessary… Yeesh. The dog though? More please!)

But again, confusingly, it has neither heating nor cooling.

What if we limited ourselves to just 2 bed 2 bath? Then sorting by highest price first gets you this $45 million 2-story penthouse atop a hotel-turned-apartment-complex that (finally!) has both heating (baseboard) and cooling (solar), whatever those mean. The stated 3,000 sq ft is actually “total interior livable area”, which is less than half of the 6,241 sq ft total floor area. (The master bedroom alone is more than 2,000 sq ft, which is kind of confusing: does it really comprise over two-thirds of the total livable area, or does “livable area” exclude the bedrooms?) It also comes with 3,600 sq ft of outside terraces. Whew.

This is what you get for plonking $45 million on a 2 bed 2 bath:

You get the usual assortment of ridiculous frills:

After arriving to the private elevator vestibule clad in marble and custom millwork, a mahogany door welcomes guests to the sun-flooded two story entrance foyer complete with marble staircase and custom gilded iron railing.

A thirty-foot gallery graced with decorative plaster moldings, intricate marble floor and a plaster-coffered ceiling leads to the library and living room. Windows along the gallery open to a beautiful terrace that runs along the east side of the apartment and provides light to all rooms.

Old English knotty pine in the living room and library harkens to the pre-war days of old New York and creates a relaxed atmosphere to the otherwise formal rooms. French doors flank the fireplace in the living room and open to the spacious outside terraces totaling 3,585 square feet, and providing dramatic views of the New York skyline. A powder room and office, or fourth bedroom (with full bathroom), complete the formal side of this floor.

At the northern end of the first floor the tremendous eat-in-kitchen, which boasts light from three exposures, is large enough for any party and features an oversized butler’s pantry complete with staff kitchen and bathroom. The northern terrace, similar in size to the southern terrace off the living room, has spectacular views of Central Park and the Carlyle Hotel and is complemented by a stunning English greenhouse.

Ascending up the grand staircase, or private elevator, one arrives on a marble landing which separates the enormous master suite from the two guest suites. The master suite encompasses over 2,000 square feet complete with separate marble bathrooms and dressing rooms, a breakfast bar and a wood-burning fireplace.

The Westbury offers full hotel amenities such as concierge, a large full-time staff, fitness center, wine cellars, and a bicycle room. A private wine storage bin and double storage unit transfer with this apartment.

Bonus funny fact:

The “Chartwell” mansion in LA, subject of many movies featuring extravagant wealth, is (per its listing) “simply the finest residential offering in the United States”. It’s situated on 10+ acres in the heart of Bel Air, and can be yours for a little less than $200 million.

I mean check out this listing:

“Chartwell” is the ultimate trophy and a legend cherished for generations.

Situated on 10.39 acres in the heart of Bel Air, the main residence was originally designed by Sumner Spaulding in 1930 with a timelessly elegant exterior of symmetrical cut limestone in the French Neoclassical style.

The interiors were masterfully renovated in the late 1980s by Henri Samuel, one of the most important designers of the 20th century. Offering panoramic views from downtown to the Pacific Ocean, “Chartwell” is a rare combination of extensive grounds and powerful jetliner views.

Features include a Wallace Neff designed 5-bedroom guest house, 75-foot pool with spacious pool house, tennis court, car gallery for 40 vehicles, 12,000 bottle wine cellar and precisely manicured gardens befitting a chateau in France.

Discreet and world-class, an estate of this caliber has not been offered in decades.

This is about as close as you can get in America to a palace.

The asking price has actually dropped significantly: as recently as March this year it was still $245 million, so the new asking price represents a roughly $50 million cut. The estimated monthly cost is $940,000. Guess how much Zillow estimates the rent here to be?

Nothing makes sense anymore…

From Nielsen to Thurston by way of Cummings, ft Terry Tao

I was reading Dominic Cummings’ blog post On the referendum #33: high performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’, and at one point he references a nice passage by Nielsen that immediately reminded me of Bill Thurston’s experience trying to communicate his ways of thinking in his seminal retrospective On proof and progress in mathematics, one of the highest wisdom-density (not just insight-density!) opinion pieces I have ever had the privilege of reading. So I thought I’d record it here. In doing so I went to Nielsen’s essay Thought as a technology, where Cummings got the passage from, and discovered that Nielsen directly references Thurston right then and there. It is with supreme modesty that I henceforth claim that, for one blindingly-inspired moment of reasoning-by-vague-association, I was precisely as bright as a man who in his mid-twenties co-wrote one of the ten most cited physics texts of all time.

(NOT! After all, a stopped clock is still right twice a day. But don’t you dare deny me my time in the sun.)

Cummings’ post itself is an interesting infodump. I am somehow turned on by infodense run-on sentences, because they’re the most reliable way to sound Stereotypically Ferociously Smart on paper, and he certainly delivers there. But more than that, he’s one of the very few people ‘in politics’ (broadly construed) who understands the need for quantitative literacy and complex-systems intuition (in particular as it relates to high-performance grand-scale complex project management, like the Apollo mission). Also I’m biased, because he checks all the Right Names – Nielsen, Bret Victor, Alan Kay, Colonel ‘OODA loop’ Boyd, Tetlock, etc. – in other words, despite my having almost nothing else in common with him, he is firmly One Of My People (albeit clearly better endowed in the grey matter department). His introduction to On the referendum #33 certainly promises loads:

This blog looks at an intersection of decision-making, technology, high performance teams and government. It sketches some ideas of physicist Michael Nielsen about cognitive technologies and of computer visionary Bret Victor about the creation of dynamic tools to help understand complex systems and ‘argue with evidence’, such as ‘tools for authoring dynamic documents’, and ‘Seeing Rooms’ for decision-makers — i.e rooms designed to support decisions in complex environments. It compares normal Cabinet rooms, such as that used in summer 1914 or October 1962, with state-of-the-art Seeing Rooms. There is very powerful feedback between: a) creating dynamic tools to see complex systems deeper (to see inside, see across time, and see across possibilities), thus making it easier to work with reliable knowledge and interactive quantitative models, semi-automating error-correction etc, and b) the potential for big improvements in the performance of political and government decision-making.

It is relevant to Brexit and anybody thinking ‘how on earth do we escape this nightmare’ but 1) these ideas are not at all dependent on whether you support or oppose Brexit, about which reasonable people disagree, and 2) they are generally applicable to how to improve decision-making — for example, they are relevant to problems like ‘how to make decisions during a fast moving nuclear crisis’ which I blogged about recently, or if you are a journalist ‘what future media could look like to help improve debate of politics’. One of the tools Nielsen discusses is a tool to make memory a choice by embedding learning in long-term memory rather than, as it is for almost all of us, an accident. I know from my days working on education reform in government that it’s almost impossible to exaggerate how little those who work on education policy think about ‘how to improve learning’.

Fields make huge progress when they move from stories (e.g Icarus)  and authority (e.g ‘witch doctor’) to evidence/experiment (e.g physics, wind tunnels) and quantitative models (e.g design of modern aircraft). Political ‘debate’ and the processes of government are largely what they have always been — largely conflict over stories and authorities where almost nobody even tries to keep track of the facts/arguments/models they’re supposedly arguing about, or tries to learn from evidence, or tries to infer useful principles from examples of extreme success/failure. We can see much better than people could in the past how to shift towards processes of government being ‘partially rational discussion over facts and models and learning from the best examples of organisational success‘. But one of the most fundamental and striking aspects of government is that practically nobody involved in it has the faintest interest in or knowledge of how to create high performance teams to make decisions amid uncertainty and complexity. This blindness is connected to another fundamental fact: critical institutions (including the senior civil service and the parties) are programmed to fight to stay dysfunctional, they fight to stay closed and avoid learning about high performance, they fight to exclude the most able people.

That passage is tempting to dive further into, but I am trying to discipline my blogging/associative note-taking output by ‘modularizing’ it to better build upon it in the future (a long-term personal knowledge management project of mine), so I’ll save that for a later post.

Back to the topic at hand with a relevant quote:

Language and writing were cognitive technologies created thousands of years ago which enabled us to think previously unthinkable thoughts. Mathematical notation did the same over the past 1,000 years. [A math problem that took al-Khawarizmi a convoluted paragraph to express can now be written as a quadratic equation an inch long.]

Michael Nielsen uses a similar analogy. Descartes and Fermat demonstrated that equations can be represented on a diagram and a diagram can be represented as an equation. This was a new cognitive technology, a new way of seeing and thinking: algebraic geometry. Changes to the ‘user interface’ of mathematics were critical to its evolution and allowed us to think unthinkable thoughts.

Similarly in the 18th Century, there was the creation of data graphics to demonstrate trade figures. Before this, people could only read huge tables. …

Segue to Nielsen’s essay elaborating on the above, then noting that visual thinking is another great example of a cognitive technology:

Language is an example of a cognitive technology: an external artifact, designed by humans, which can be internalized, and used as a substrate for cognition. That technology is made up of many individual pieces – words and phrases, in the case of language – which become basic elements of cognition. These elements of cognition are things we can think with.

Language isn’t the only cognitive technology we internalize.

Consider visual thinking. If, like me, you sometimes think visually, it’s tempting to suppose your mind’s eye is a raster display, capable of conceiving any image. But while tempting, this is wrong. In fact, our visual thinking is done using visual cognitive technologies we’ve previously internalized.

For instance, one of the world’s best-known art teachers, Betty Edwards, explains that the visual thinking of most non-artist adults is limited to what she refers to as a simple “symbol system”, and that this constrains both what they see and what they can visually conceive:

[A]dult students beginning in art generally do not really see what is in front of their eyes — that is, they do not perceive in the way required for drawing. They take note of what’s there, and quickly translate the perception into words and symbols mainly based on the symbol system developed throughout childhood and on what they know about the perceived object.

It requires extraordinary imagination to conceive new forms of visual meaning – i.e., new visual cognitive technologies. Many of our best-known artists and visual explorers are famous in part because they discovered such forms. When exposed to that work, other people can internalize those new cognitive technologies, and so expand the range of their own visual thinking.

For example, cubist artists such as Picasso developed the technique of using multiple points of view in a single painting. Once you’ve learnt to see cubist art, it can give you a richer sense of the structure of what’s being shown…

Another example is the work of Doc Edgerton, a pioneer of high-speed photography, whose photographs revealed previously unsuspected structure in the world. If you study such photographs, you begin to build new mental models of everyday phenomena, enlarging your range of visual thought…

Another class of examples comes from the many cartographers who’ve developed ways to visually depict geography. Consider, for example, the 1933 map of the London Underground, developed by Harry Beck. In the early 1930s, Beck noticed that the official map of the Underground was growing too complex for readers to understand. He simplified the map by abandoning exact geographic fidelity, as was commonly used on most maps up to that point. He concentrated instead on showing the topological structure of the network of stations, i.e., what connects to what…

Images such as these are not natural or obvious. No-one would ever have these visual thoughts without the cognitive technologies developed by Picasso, Edgerton, Beck, and many other pioneers. Of course, only a small fraction of people really internalize these ways of visual thinking. But in principle, once the technologies have been invented, most of us can learn to think in these new ways.

Why, you know who was good at visual thinking, perhaps the best geometric thinker in the history of mathematics (a tall claim!)? Bill Thurston. In Thinking and explaining, one of the most upvoted MathOverflow questions of all time, Thurston asked:

How big a gap is there between how you think about mathematics and what you say to others? Do you say what you’re thinking? Please give either personal examples of how your thoughts and words differ, or describe how they are connected for you.

I’ve been fascinated by the phenomenon the question addresses for a long time. We have complex minds evolved over many millions of years, with many modules always at work. A lot we don’t habitually verbalize, and some of it is very challenging to verbalize or to communicate in any medium. Whether for this or other reasons, I’m under the impression that mathematicians often have unspoken thought processes guiding their work which may be difficult to explain, or they feel too inhibited to try.

One prototypical situation is this: there’s a mathematical object that’s obviously (to you) invariant under a certain transformation. For instance, a linear map might conserve volume for an ‘obvious’ reason. But you don’t have good language to explain your reason—so instead of explaining, or perhaps after trying to explain and failing, you fall back on computation. You turn the crank and without undue effort, demonstrate that the object is indeed invariant.

Here’s a specific example. Once I mentioned this phenomenon to Andy Gleason; he immediately responded that when he taught algebra courses, if he was discussing cyclic subgroups of a group, he had a mental image of group elements breaking into a formation organized into circular groups. He said that ‘we’ never would say anything like that to the students. His words made a vivid picture in my head, because it fit with how I thought about groups. I was reminded of my long struggle as a student, trying to attach meaning to ‘group’, rather than just a collection of symbols, words, definitions, theorems and proofs that I read in a textbook.

There’s a reason this question matters so much to Thurston, and it’s exactly what this post of mine is building up to. But I like my foreplay, so I shall prolong it just a tad further with more related-ish quotes. (Also, “Connect Everything!” can be a hard impulse to resist.)

It turns out that Nielsen’s essay Thought as a technology also quotes part of the Thurston passage above (before Tao), just without referencing the MO question. (I knew it anyway because I’d seen it before. Gosh, yet another instance where my thoughts rise to Nielsen’s level! Where do I collect my MacArthur Fellowship??) He paraphrases Thurston like so:

… mathematicians often don’t think about mathematical objects using the conventional representations found in books. Rather, they rely heavily on what we might call hidden representations, such as the mental imagery Thurston describes, of groups breaking into formations of circular groups. Such hidden representations help them reason more easily than the conventional representations, and occasionally provide them with what may seem to others like magical levels of insight.

Nielsen notes that “the use of hidden representations occurs in many fields”. In electrical engineering, for instance, he offers this quote from Gerald Sussman about analyzing electrical circuits:

I was teaching my first classes in electrical engineering at MIT, in circuit theory… and I observed that what we taught the students wasn’t at all what the students were actually expected to learn. That is, what an expert person did when presented with a circuit… was quite different from what we tell [the students] to write down – the node equations… and then you’re supposed to grind these equations together somehow and solve them, to find out what’s going on. Well, you know, that’s not what a really good engineer does. …

Nielsen himself is a theoretical physicist, so he can draw from personal experience:

The energy surface prototype is based on the kind of hidden representation described by Thurston and Sussman. In particular, it’s based on the way I often visualize one-dimensional motion, in my work as a theoretical physicist. The visuals are not original to me: when I’ve shown the prototype to other physicists, several have told me “Oh, I think about one-dimensional motion like this”. But while this way of understanding may be common among physicists, they rarely talk about it. For instance, it’s not the kind of thing one would use in teaching a class on one-dimensional motion. At most, you might make a few ancillary sketches along these lines for the students. Certainly, you would not put this way of thinking front and center, or expect students to answer homework or exam questions using energy surfaces. Nor would you use such a representation in a research paper.

The situation is strange. A powerful way of thinking about one-dimensional motion is largely absent from our shared conversations. The reason is that traditional media are poorly adapted to working with such representations.

Okay, why not just share those representations? The answer is that people do, but it’s hard nonetheless, and there can be reasons why they don’t. Nielsen:

To answer that question, suppose you think hard about a subject for several years – say, cyclic subgroups of a group, to use Thurston’s example. Eventually you push up against the limits of existing representations. If you’re strongly motivated – perhaps by the desire to solve a research problem – you may begin inventing new representations, to provide insights difficult through conventional means. You are effectively acting as your own interface designer. But the new representations you develop may be held entirely in your mind, and so are not constrained by traditional static media forms. Or even if based on static media, they may break social norms about what is an “acceptable” argument. Whatever the reason, they may be difficult to communicate using traditional media. And so they remain private, or are only discussed informally with expert colleagues.

This is the passage I alluded to at the very beginning. It’s precisely what Thurston ran up against, and the answer to it will be the climax of this post.

But one last interlude – Terry Tao has also expressed the same sentiments, which is to be expected given the highly collaborative nature of his research style. He describes these in his answer to Thurston’s MO question, which is fantastic in its sheer range:

I find there is a world of difference between explaining things to a colleague, and explaining things to a close collaborator. With the latter, one really can communicate at the intuitive level, because one already has a reasonable idea of what the other person’s mental model of the problem is. In some ways, I find that throwing out things to a collaborator is closer to the mathematical thought process than just thinking about maths on one’s own, if that makes any sense.

One specific mental image that I can communicate easily with collaborators, but not always to more general audiences, is to think of quantifiers in game theoretic terms. Do we need to show that for every epsilon there exists a delta? Then imagine that you have a bag of deltas in your hand, but you can wait until your opponent (or some malicious force of nature) produces an epsilon to bother you, at which point you can reach into your bag and find the right delta to deal with the problem. Somehow, anthropomorphising the “enemy” (as well as one’s “allies”) can focus one’s thoughts quite well. This intuition also combines well with probabilistic methods, in which case in addition to you and the adversary, there is also a Random player who spits out mathematical quantities in a way that is neither maximally helpful nor maximally adverse to your cause, but just some randomly chosen quantity in between. The trick is then to harness this randomness to let you evade and confuse your adversary.

Is there a quantity in one’s PDE or dynamical system that one can bound, but not otherwise estimate very well? Then imagine that it is controlled by an adversary or by Murphy’s law, and will always push things in the most unfavorable direction for whatever you are trying to accomplish. Sometimes this will make that term “win” the game, in which case one either gives up (or starts hunting for negative results), or looks for additional ways to “tame” or “constrain” that troublesome term, for instance by exploiting some conservation law structure of the PDE.

For evolutionary PDEs in particular, I find there is a rich zoo of colourful physical analogies that one can use to get a grip on a problem. I’ve used the metaphor of an egg yolk frying in a pool of oil, or a jetski riding ocean waves, to understand the behaviour of a fine-scaled or high-frequency component of a wave when under the influence of a lower frequency field, and how it exchanges mass, energy, or momentum with its environment. In one extreme case, I ended up rolling around on the floor with my eyes closed in order to understand the effect of a gauge transformation that was based on this type of interaction between different frequencies. (Incidentally, that particular gauge transformation won me a Bocher prize, once I understood how it worked.) I guess this last example is one that I would have difficulty communicating to even my closest collaborators. Needless to say, none of these analogies show up in my published papers, although I did try to convey some of them in my PDE book eventually.

ADDED LATER: I think one reason why one cannot communicate most of one’s internal mathematical thoughts is that one’s internal mathematical model is very much a function of one’s mathematical upbringing. For instance, my background is in harmonic analysis, and so I try to visualise as much as possible in terms of things like interactions between frequencies, or contests between different quantitative bounds. This is probably quite a different perspective from someone brought up from, say, an algebraic, geometric, or logical background. I can appreciate these other perspectives, but still tend to revert to the ones I am most personally comfortable with when I am thinking about these things on my own.

ADDED (MUCH) LATER: Another mode of thought that I and many others use routinely, but which I realised only recently was not as ubiquitous as I believed, is to use an “economic” mindset to prove inequalities such as X≤Y or X≤CY for various positive quantities X,Y, interpreting them in the form “If I can afford Y, can I therefore afford X?” or “If I can afford lots of Y, can I therefore afford X?” respectively. This frame of reference starts one thinking about what types of quantities are “cheap” and what are “expensive”, and whether the use of various standard inequalities constitutes a “good deal” or not. It also helps one understand the role of weights, which make things more expensive when the weight is large, and cheaper when the weight is small.

ADDED (MUCH, MUCH) LATER: One visualisation technique that I have found very helpful is to incorporate the ambient symmetries of the problem (a la Klein) as little “wobbles” to the objects being visualised. This is most familiarly done in topology (“rubber sheet mathematics”), where every object considered is a bit “rubbery” and thus deforming all the time by infinitesimal homeomorphisms. But geometric objects in a scale-invariant problem could be thought of as being viewed through a camera with a slightly wobbly zoom lens, so that one’s mental image of these objects is always varying a little in size. Similarly, if one is in a translation-invariant setting, one’s mental camera should be sliding back and forth just a little to remind you of this, if one is working in a Euclidean space then the camera might be jiggling through all the rigid motions, and so forth. A more advanced example: if the problem is invariant under tensor products, as per the tensor product trick, then one’s low dimensional objects should have a tiny bit of shadowing (or perhaps look like one of these 3D images when one doesn’t have the polarised glasses, with the slightly separated red and blue components) that suggest that they are projections of a higher dimensional Cartesian product.
One reason why one wants to do this is that it helps suggest useful normalisations. If one is viewing a situation with a wobbly zoom lens and there is some length that appears all over one’s analysis, one is reminded that one can spend the scale invariance of the problem to zoom up or down as appropriate to normalise this scale to equal 1. Similarly for other ambient symmetries.

This sort of wobbling of symmetries is also available in less geometric settings. … In analysis, one often only cares about the order of magnitude of some very large or very small quantity X, rather than its exact value; so one should view this quantity as being a bit squishy in size, growing or shrinking by a factor of two or so every time one looks at the problem. If there is some probability theory in one’s problem, and some of your objects are random variables rather than deterministic variables, then you can imagine that every so often the “game resets”, with the random variables jumping around to different values in their range (and any quantities depending on these variables changing accordingly), whereas the deterministic variables stay fixed. Similarly if one has generic points in a variety, or nonstandard objects in a space (with the point being that if something bad happens if, say, your generic point is trapped in a subvariety, you can “reset the game” in which the generic point is now outside the subvariety; similarly one can “reset” an unbounded nonstandard number to be larger than any given standard number, etc.).

Hot damn, Terry!
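Tao’s quantifier game is easy to make concrete. Here’s a toy instance of my own devising (not Tao’s): to show that the limit of 2x as x → 2 is 4, whatever ε the adversary produces, we reach into our bag and pull out δ = ε/2.

```python
import random

def delta_for(epsilon):
    """Our 'bag of deltas' for the claim lim_{x -> 2} 2x = 4:
    whatever epsilon the adversary picks, delta = epsilon / 2 works."""
    return epsilon / 2

def adversary_loses(epsilon, trials=1000):
    """Spot-check the winning strategy: for random x within delta of 2,
    f(x) = 2x stays within epsilon of 4 (inclusive at the boundary)."""
    delta = delta_for(epsilon)
    for _ in range(trials):
        x = 2 + random.uniform(-delta, delta)
        if abs(2 * x - 4) > epsilon:
            return False  # the adversary's epsilon defeated our delta
    return True

# The adversary tries ever-smaller epsilons; our bag of deltas holds up.
for eps in (1.0, 0.1, 1e-6):
    assert adversary_loses(eps)
```

Random spot-checking is of course no substitute for the algebra (|2x − 4| = 2|x − 2| < 2δ = ε), but it captures the adversarial turn-taking Tao describes: the ε arrives first, and only then do we commit to a δ.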

I have digressed enough. Here are Thurston’s reminiscences, the answer to the prompt at the beginning of this post.

First, foliations, where Thurston “did wrong”:

First I will discuss briefly the theory of foliations, which was my first subject, starting when I was a graduate student. (It doesn’t matter here whether you know what foliations are.)

At that time, foliations had become a big center of attention among geometric topologists, dynamical systems people, and differential geometers. I fairly rapidly proved some dramatic theorems. I proved a classification theorem for foliations, giving a necessary and sufficient condition for a manifold to admit a foliation. I proved a number of other significant theorems. I wrote respectable papers and published at least the most important theorems. It was hard to find the time to write to keep up with what I could prove, and I built up a backlog.

An interesting phenomenon occurred. Within a couple of years, a dramatic evacuation of the field started to take place. I heard from a number of mathematicians that they were giving or receiving advice not to go into foliations—they were saying that Thurston was cleaning it out. People told me (not as a complaint, but as a compliment) that I was killing the field. Graduate students stopped studying foliations, and fairly soon, I turned to other interests as well.

I do not think that the evacuation occurred because the territory was intellectually exhausted—there were (and still are) many interesting questions that remain and that are probably approachable. Since those years, there have been interesting developments carried out by the few people who stayed in the field or who entered the field, and there have also been important developments in neighboring areas that I think would have been much accelerated had mathematicians continued to pursue foliation theory vigorously.

Today, I think there are few mathematicians who understand anything approaching the state of the art of foliations as it lived at that time, although there are some parts of the theory of foliations, including developments since that time, that are still thriving.

What happened?

I believe that two ecological effects were much more important in putting a damper on the subject than any exhaustion of intellectual resources that occurred.

First, the results I proved (as well as some important results of other people) were documented in a conventional, formidable mathematician’s style. They depended heavily on readers who shared certain background and certain insights. The theory of foliations was a young, opportunistic subfield, and the background was not standardized. I did not hesitate to draw on any of the mathematics I had learned from others. The papers I wrote did not (and could not) spend much time explaining the background culture. They documented top-level reasoning and conclusions that I often had achieved after much reflection and effort. I also threw out prize cryptic tidbits of insight, such as “the Godbillon-Vey invariant measures the helical wobble of a foliation”, that remained mysterious to most mathematicians who read them. This created a high entry barrier: I think many graduate students and mathematicians were discouraged that it was hard to learn and understand the proofs of key theorems.

Second is the issue of what is in it for other people in the subfield. When I started working on foliations, I had the conception that what people wanted was to know the answers. I thought that what they sought was a collection of powerful proven theorems that might be applied to answer further mathematical questions. But that’s only one part of the story. More than the knowledge, people want personal understanding. And in our credit-driven system, they also want and need theorem-credits.

And second, 3-manifolds and hyperbolic geometry, where Thurston “did it right”:

I’ll skip ahead a few years, to the subject that Jaffe and Quinn alluded to, when I began studying 3-dimensional manifolds and their relationship to hyperbolic geometry. (Again, it matters little if you know what this is about.) I gradually built up over a number of years a certain intuition for hyperbolic three-manifolds, with a repertoire of constructions, examples and proofs. (This process actually started when I was an undergraduate, and was strongly bolstered by applications to foliations.) After a while, I conjectured or speculated that all three-manifolds have a certain geometric structure; this conjecture eventually became known as the geometrization conjecture. About two or three years later, I proved the geometrization theorem for Haken manifolds. It was a hard theorem, and I spent a tremendous amount of effort thinking about it. When I completed the proof, I spent a lot more effort checking the proof, searching for difficulties and testing it against independent information.

I’d like to spell out more what I mean when I say I proved this theorem. It meant that I had a clear and complete flow of ideas, including details, that withstood a great deal of scrutiny by myself and by others. Mathematicians have many different styles of thought. My style is not one of making broad sweeping but careless generalities, which are merely hints or inspirations: I make clear mental models, and I think things through. My proofs have turned out to be quite reliable. I have not had trouble backing up claims or producing details for things I have proven. I am good in detecting flaws in my own reasoning as well as in the reasoning of others.

However, there is sometimes a huge expansion factor in translating from the encoding in my own thinking to something that can be conveyed to someone else. My mathematical education was rather independent and idiosyncratic, where for a number of years I learned things on my own, developing personal mental models for how to think about mathematics. This has often been a big advantage for me in thinking about mathematics, because it’s easy to pick up later the standard mental models shared by groups of mathematicians. This means that some concepts that I use freely and naturally in my personal thinking are foreign to most mathematicians I talk to. My personal mental models and structures are similar in character to the kinds of models groups of mathematicians share—but they are often different models. At the time of the formulation of the geometrization conjecture, my understanding of hyperbolic geometry was a good example. A random continuing example is an understanding of finite topological spaces, an oddball topic that can lend good insight to a variety of questions but that is generally not worth developing in any one case because there are standard circumlocutions that avoid it.

Neither the geometrization conjecture nor its proof for Haken manifolds was in the path of any group of mathematicians at the time—it went against the trends in topology for the preceding 30 years, and it took people by surprise. To most topologists at the time, hyperbolic geometry was an arcane side branch of mathematics, although there were other groups of mathematicians such as differential geometers who did understand it from certain points of view. It took topologists a while just to understand what the geometrization conjecture meant, what it was good for, and why it was relevant.

This is the answer! Continuing:

At the same time, I started writing notes on the geometry and topology of 3-manifolds, in conjunction with the graduate course I was teaching. I distributed them to a few people, and before long many others from around the world were writing for copies. The mailing list grew to about 1200 people to whom I was sending notes every couple of months. I tried to communicate my real thoughts in these notes. People ran many seminars based on my notes, and I got lots of feedback. Overwhelmingly, the feedback ran something like “Your notes are really inspiring and beautiful, but I have to tell you that we spent 3 weeks in our seminar working out the details of §n.n. More explanation would sure help.”

I also gave many presentations to groups of mathematicians about the ideas of studying 3-manifolds from the point of view of geometry, and about the proof of the geometrization conjecture for Haken manifolds. At the beginning, this subject was foreign to almost everyone. It was hard to communicate—the infrastructure was in my head, not in the mathematical community. There were several mathematical theories that fed into the cluster of ideas: three-manifold topology, Kleinian groups, dynamical systems, geometric topology, discrete subgroups of Lie groups, foliations, Teichmüller spaces, pseudo-Anosov diffeomorphisms, geometric group theory, as well as hyperbolic geometry.

We held an AMS summer workshop at Bowdoin in 1980, where many mathematicians in the subfields of low-dimensional topology, dynamical systems and Kleinian groups came. It was an interesting experience exchanging cultures.

It became dramatically clear how much proofs depend on the audience. We prove things in a social context and address them to a certain audience. Parts of this proof I could communicate in two minutes to the topologists, but the analysts would need an hour lecture before they would begin to understand it. Similarly, there were some things that could be said in two minutes to the analysts that would take an hour before the topologists would begin to get it. And there were many other parts of the proof which should take two minutes in the abstract, but that none of the audience at the time had the mental infrastructure to get in less than an hour.

At that time, there was practically no infrastructure and practically no context for this theorem, so the expansion from how an idea was keyed in my head to what I had to say to get it across, not to mention how much energy the audience had to devote to understand it, was very dramatic.

In reaction to my experience with foliations and in response to social pressures, I concentrated most of my attention on developing and presenting the infrastructure in what I wrote and in what I talked to people about. I explained the details to the few people who were “up” for it. I wrote some papers giving the substantive parts of the proof of the geometrization theorem for Haken manifolds—for these papers, I got almost no feedback. Similarly, few people actually worked through the harder and deeper sections of my notes until much later.

The result has been that now quite a number of mathematicians have what was dramatically lacking in the beginning: a working understanding of the concepts and the infrastructure that are natural for this subject. There has been and there continues to be a great deal of thriving mathematical activity. By concentrating on building the infrastructure and explaining and publishing definitions and ways of thinking but being slow in stating or in publishing proofs of all the “theorems” I knew how to prove, I left room for many other people to pick up credit. There has been room for people to discover and publish other proofs of the geometrization theorem. These proofs helped develop mathematical concepts which are quite interesting in themselves, and lead to further mathematics.

What mathematicians most wanted and needed from me was to learn my ways of thinking, and not in fact to learn my proof of the geometrization conjecture for Haken manifolds. It is unlikely that the proof of the general geometrization conjecture will consist of pushing the same proof further.

The ascended economy civilizational failure mode, ft Scott Alexander and Charlie Stross

Scott Alexander’s short speculative depiction of a possible Goodhart’s-law-style civilizational failure mode – one where “the imperative of economic growth” becomes the end rather than the means, becomes, indeed, totalizing – appears in his book review of Robin Hanson’s Age of Em, and it has stuck with me like a low-grade nightmare ever since I read it all those years ago:

Imagine a company that manufactures batteries for electric cars. The inventor of the batteries might be a scientist who really believes in the power of technology to improve the human race. The workers who help build the batteries might just be trying to earn money to support their families. The CEO might be running the business because he wants to buy a really big yacht. And the whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.

Now imagine the company fires all its employees and replaces them with robots. It fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limits it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.

Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.

Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.

But this seems to me the natural end of the economic system. Right now it needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere.

He goes into more detail in his followup post, Ascended Economy, which has the following line of argument:

… we can hope that things will get so post-scarcity that governments and private charities give each citizen a few shares in the Ascended Economy to share the gains with non-investors. This would at least temporarily be a really good outcome.

But in the long term it reduces the political problem of regulating corporations to the scientific problem of Friendly AI, which is really bad.

Even today, a lot of corporations do things that effectively maximize shareholder value but which we consider socially irresponsible. Environmental devastation, slave labor, regulatory capture, funding biased science, lawfare against critics – the list goes on and on. They have a simple goal – make money – whereas what we really want them to do is much more complicated and harder to measure – make money without engaging in unethical behavior or creating externalities. We try to use regulatory injunctions, and it sort of helps, but because those go against a corporation’s natural goals they try their best to find loopholes and usually succeed – or just take over the regulators trying to control them.

This is bad enough with bricks-and-mortar companies run by normal-intelligence humans. But it would probably be much worse with ascended corporations. They would have no ethical qualms we didn’t program into them – and again, programming ethics into them would be the Friendly AI problem, which is really hard. And they would be near-impossible to regulate; most existing frameworks for such companies are built on crypto-currency and exist on the cloud in a way that transcends national borders.

(A quick and very simple example of an un-regulate-able ascended corporation – I don’t think it would be too hard to set up an automated version of Uber. I mean, the core Uber app is already an automated version of Uber, it just has company offices and CEOs and executives and so on doing public relations and marketing and stuff. But if the government ever banned Uber the company, could somebody just code another ride-sharing app that dealt securely in Bitcoins? And then have it skim a little bit off the top, which it offered as a bounty to anybody who gave it the processing power it would need to run? And maybe sent a little profit to the programmer who wrote the thing? Sure, the government could arrest the programmer, but short of arresting every driver and passenger there would be no way to destroy the company itself.)

The more ascended corporations there are trying to maximize shareholder value, the more chance there is some will cause negative externalities. But there’s a limited amount we would be able to do about them. This is true today too, but at least today we maintain the illusion that if we just elected Bernie Sanders we could reverse the ravages of capitalism and get an economy that cares about the environment and the family and the common man. An Ascended Economy would destroy that illusion.

How bad would it get? Once ascended corporations reach human or superhuman level intelligences, we run into the same AI goal-alignment problems as anywhere else. Would an ascended corporation pave over the Amazon to make a buck? Of course it would; even human corporations today do that, and an ascended corporation that didn’t have all human ethics programmed in might not even get that it was wrong. What if we programmed the corporation to follow local regulations, and Brazil banned paving over the Amazon? This is an example of trying to control AIs through goals plus injunctions – a tactic Bostrom finds very dubious. It’s essentially challenging a superintelligence to a battle of wits – “here’s something you want, and here are some rules telling you that you can’t get it, can you find a loophole in the rules?” If the superintelligence is super enough, the answer will always be yes.

From there we go into the really gnarly parts of AI goal alignment theory. Would an ascended corporation destroy South America entirely to make a buck? Depending on how it understood its imperative to maximize shareholder value, it might. Yes, this would probably kill many of its shareholders, but its goal is to “maximize shareholder value”, not to keep its shareholders alive to enjoy that value. It might even be willing to destroy humanity itself if other parts of the Ascended Economy would pick up the slack as investors.

Charlie Stross picks up where Scott left off with the “Economics 2.0” scenario run amok that he depicts in Accelerando, the post-singularity novel featured in the last few posts. He takes this speculative viewpoint further than Scott does, or at least in a different direction – as an answer to the Fermi Paradox:

… while all this is going on, the damnfool human species has finally succeeded in making itself obsolete. The proximate cause of its displacement from the pinnacle of creation (or the pinnacle of teleological self-congratulation, depending on your stance on evolutionary biology) is an attack of self-aware corporations. The phrase “smart money” has taken on a whole new meaning, for the collision between international business law and neurocomputing technology has given rise to a whole new family of species – fast-moving corporate carnivores in the Net.

Consider for instance one of the more imaginatively alien aliens in science fiction – a defaulting corporate instrument disguising itself from creditors as a naturally-evolved alien, called ‘the Slug’:

“How much for just the civilization?” asks the Slug.

Pierre looks down at it thoughtfully. It’s not really a terrestrial mollusk: Slugs on Earth aren’t two meters long and don’t have lacy white exoskeletons to hold their chocolate-colored flesh in shape. But then, it isn’t really the alien it appears to be. It’s a defaulting corporate instrument that has disguised itself as a long-extinct alien upload, in the hope that its creditors won’t recognize it if it looks like a randomly evolved sentient. One of the stranded members of Amber’s expedition made contact with it a couple of subjective years ago, while exploring the ruined city at the center of the firewall. Now Pierre’s here because it seems to be one of their most promising leads. Emphasis on the word promising – because it promises much, but there is some question over whether it can indeed deliver.

“The civilization isn’t for sale,” Pierre says slowly. The translation interface shimmers, storing up his words and transforming them into a different deep grammar, not merely translating his syntax but mapping equivalent meanings where necessary. “But we can give you privileged observer status if that’s what you want. And we know what you are. If you’re interested in finding a new exchange to be traded on, your existing intellectual property assets will be worth rather more there than here.”

The rogue corporation rears up slightly and bunches into a fatter lump. Its skin blushes red in patches. “Must think about this. Is your mandatory accounting time cycle fixed or variable term? Are self-owned corporate entities able to enter contracts?”

“I could ask my patron,” Pierre says casually. He suppresses a stab of angst. He’s still not sure where he and Amber stand, but theirs is far more than just a business relationship, and he worries about the risks she’s taking. “My patron has a jurisdiction within which she can modify corporate law to accommodate your requirements. Your activities on a wider scale might require shell companies –” the latter concept echoes back in translation to him as host organisms – “but that can be taken care of.”

The translation membrane wibbles for a while, apparently reformulating some more abstract concepts in a manner that the corporation can absorb. Pierre is reasonably confident that it’ll take the offer, however. When it first met them, it boasted about its control over router hardware at the lowest levels. But it also bitched and moaned about the firewall protocols that were blocking it from leaving (before rather rudely trying to eat its conversationalist). He waits patiently, looking around at the swampy landscape, mudflats punctuated by clumps of spiky violet ferns. The corporation has to be desperate, to be thinking of the bizarre proposition Amber has dreamed up for him to pitch to it.

“Sounds interesting,” the Slug declares after a brief confirmatory debate with the membrane. “If I supply a suitable genome, can you customize a container for it?”

“I believe so,” Pierre says carefully. “For your part, can you deliver the energy we need?”

“From a gate?” For a moment the translation membrane hallucinates a stick-human, shrugging. “Easy. Gates are all entangled: Dump coherent radiation in at one, get it out at another. Just get me out of this firewall first.”

“But the lightspeed lag –”

“No problem. You go first, then a dumb instrument I leave behind buys up power and sends it after. Router network is synchronous, within framework of state machines that run Universe 1.0; messages propagate at same speed, speed of light in vacuum, except use wormholes to shorten distances between nodes. Whole point of the network is that it is nonlossy. Who would trust their mind to a communications channel that might partially randomize them in transit?”

Pierre goes cross-eyed, trying to understand the implications of the Slug’s cosmology. But there isn’t really time, here and now: They’ve got on the order of a minute of wall-clock time left to get everything sorted out, if Aineko is right. One minute to go before the angry ghosts start trying to break into the DMZ by other means. “If you are willing to try this, we’d be happy to accommodate you,” he says, thinking of crossed fingers and rabbits’ feet and firewalls.

“It’s a deal,” the membrane translates the Slug’s response back at him. “Now we exchange shares/plasmids/ownership? Then merger complete?”

Pierre stares at the Slug: “But this is a business arrangement!” he protests. “What’s sex got to do with it?”

“Apologies offered. I am thinking we have a translation error. You said this was to be a merging of businesses?”

“Not that way. It’s a contract. We agree to take you with us. In return, you help lure the Wunch into the domain we’re setting up for them and configure the router at the other end …”

and, further down:

Amber finds the Slug browsing quietly in a transparent space filled with lazily waving branches that resemble violet coral fans. They’re a ghost-memory of alien life, an order of thermophilic quasi fungi with hyphae ridged in actin/myosin analogues, muscular and slippery filter feeders that eat airborne unicellular organisms. The Slug itself is about two meters long and has a lacy white exoskeleton of curves and arcs that don’t repeat, disturbingly similar to a Penrose tiling. Chocolate brown organs pulse slowly under the skeleton. The ground underfoot is dry but feels swampy.

Actually, the Slug is a surgical disguise. Both it and the quasi-fungal ecosystem have been extinct for millions of years, existing only as cheap stage props in an interstellar medicine show run by rogue financial instruments. The Slug itself is one such self-aware scam, probably a pyramid scheme or even an entire compressed junk bond market in heavy recession, trying to hide from its creditors by masquerading as a life-form.

except that the Slug is just a particular instance of the corporation-as-lifeform, per this dialogue between two of the main characters:

“Corporations are life-forms back home, too, aren’t they? And we trade them. We give our AIs corporations to make them legal entities, but the analogy goes deeper. Look at any company headquarters, fitted out with works of art and expensive furniture and staff bowing and scraping everywhere –”

” – They’re the new aristocracy. Right?”

“Wrong. When they take over, what you get is more like the new biosphere. Hell, the new primordial soup: prokaryotes, bacteria, and algae, mindlessly swarming, trading money for plasmids.” The Queen passes her consort a wineglass. When he drinks from it, it refills miraculously. “Basically, sufficiently complex resource-allocation algorithms reallocate scarce resources … and if you don’t jump to get out of their way, they’ll reallocate you. I think that’s what happened inside the Matrioshka brain we ended up in: Judging by the Slug it happens elsewhere, too. You’ve got to wonder where the builders of that structure came from. And where they went. And whether they realized that the destiny of intelligent tool-using life was to be a stepping-stone in the evolution of corporate instruments.”

“Maybe they tried to dismantle the companies before the companies spent them.” Pierre looks worried. “Running up a national debt, importing luxurious viewpoint extensions, munching exotic dreams. Once they plugged into the Net, a primitive Matrioshka civilization would be like, um.” He pauses. “Tribal. A primitive postsingularity civilization meeting the galactic net for the first time. Overawed. Wanting all the luxuries. Spending their capital, their human – or alien – capital, the meme machines that built them. Until there’s nothing left but a howling wilderness of corporate mechanisms looking for someone to own.”


“Idle speculation,” he agrees.

“But we can’t ignore it.” She nods. “Maybe some early corporate predator built the machines that spread the wormholes around brown dwarfs and ran the router network on top of them in an attempt to make money fast. By not putting them in the actual planetary systems likely to host tool-using life, they’d ensure that only near-singularity civilizations would stumble over them. Civilizations that had gone too far to be easy prey probably wouldn’t send a ship out to look … so the network would ensure a steady stream of yokels new to the big city to fleece. Only they set the mechanism in motion billions of years ago and went extinct, leaving the network to propagate, and now there’s nothing out there but burned-out Matrioshka civilizations and howling parasites like the angry ghosts and the Wunch. And victims like us.”

So another answer to “where is everyone?” is “they got disassembled by corporations”.
