Everything2.com user iceowl is a beautiful writer. Perhaps my favorite piece by him is an essay I keep thinking of as “Ham Radio”, but which is actually titled September 9, 2013. (It gets frustrating sometimes, because I keep forgetting the actual title, and Google will completely fail to produce the correct link within the top 100 search results even when I explicitly quote whole sentences verbatim.)
The context behind the story, which is a real one by the way (iceowl usually writes fiction), is enough to move me halfway to tears because his situation feels isomorphic to mine:
This really happened, though it’s just a blip in the whole tableau of life and relationship. I thought it might resonate with other aging ham radio operator types. Back in the 1990s I was having stuff published regularly by “QST”, the monthly ham radio magazine put out by the American Radio Relay League (of which I am a “life” member).
Since then half of my life went by. My kids grew up and left home. I got divorced and remarried. I visited Antarctica and many other places. Did lots of stuff. Blah blah.
I figured it was time to put away those things of the past that weren’t serving me anymore, and ham radio seemed like a great place to start. For decades I had been dragging hundreds of pounds of gear from state to state – across the U.S. and then up to Alaska and back, at great expense. It probably cost me more to move and store the stuff than it was worth several times over. One day when the wife and I were arguing about having enough space, I called it “enough.” I tossed hundreds of feet of RG-8 into the trash. Just opened the can and pushed it in, along with nearly broken MFJ antenna tuners (which are all nearly broken when they’re new, anyway), hand held radios powered by niCad batteries that are impossible to get anymore – just tossed it all into the electronic waste bin and walked away.
I thought I’d feel nostalgia and maybe a little grief. Instead, there was nothing.
The most expensive pieces I had left were my ICOM linear amp, which cost me $5500 back in 2000, and my HF radio, which cost about $3000 back in 2002. I could lump in some more gear and say there were some $10K in bits I needed to jettison, so I took it all to the local ham radio store, Ham Radio Outlet, to put in as a consignment sale. The best I could hope was to get $5000 out of the lot of it (minus their 15% commission), but getting back 1/2 the value after 13 years seemed like a fine deal to me. I walked out of the store with a receipt and went home to a nice empty garage.
My wife’s reaction was utterly unexpected.
So I wrote it all up and sent this doc to QST magazine, figuring they’d publish it for old time’s sake. But they rejected it. They actually sent me a physical rejection letter with a long excuse as to why they didn’t take it. The excuse seemed overly long given the lack of importance of the whole thing. I sent the submission electronically. They could have just sent me an e-mail, but they’re old fashioned types.
I get the feeling the guy who wrote the letter wouldn’t have rejected it, but alas, there’s a content “committee” he had to deal with. I might have said the word “damn” in the text, which would have been enough to cause rejection from this ultra-conservative journal that’s read by probably 200 people on the planet. Maybe.
I get it monthly, and will until either A) I cease to exist, or B) QST ceases to exist. If A happens before B, my wife will need to send them my obituary, which they will dutifully print. It may be the last thing involving me they print, the way things go.
The upshot of all of it is I now have a small ham radio station in the corner of my living room, and a 25′ hex beam antenna on my roof. I can talk to Hungarians and South Africans on it when the conditions are good.
I can do exactly the same thing on my iPhone or over the internet. But those are not physics that I can control. Ham is physics completely under my influence, and there’s still some joy for me in that.
Goddamn, I’m trying hard not to just quote the whole thing here, so I’ll limit myself to two more quotes.
I remember watching my uncle wrap magnet wire around an old oatmeal box and having him tell me we’d tune with it, and then pressing the black bakelite headphone to my ear and in my own backyard, next to the tomato plants between the swimming pool and the fence that was a ground even though it was a fence: hearing a man speak in a British voice about things important to people who lived beyond that thin line that separated water and sky beyond the sandy beach.
I remember that radio was lightning: action at a distance — that with it we’d connect with people who stood with feet bare in jungles we’d never see, upon sands we’d never touch, who knew summers during our winters and told of time that ran through days we’d yet to live, or just had. We’d hear English accented by mother tongues on continents alive to us only in books or grainy film. We’d bathe in that violation of time and space that limited our waking hours to bedtime and dinner time and the interminable school room clock whose hands slowed as they approached three. They thrived while we slept. Needed umbrellas during our sunshine. Ate unnameable foods, and called themselves things that sounded as if out of alien bibles. Because nobody we knew had ever been anywhere, or spoke anything, or knew anything other than our neighborhood where we all lived, and English that we all spoke.
I got walkie-talkies for my birthday, and with them I could speak to my friends even when I couldn’t see them, and I needed to know what made that power from the tiny objects inside the box. Of what sorcery were these batteries? This antenna? What did they know, those who brought these things to my tiny reality? Certainly a kid with a screwdriver could find the magic.
At six years old, about to enter grade one, I had my beloved walkie-talkies in bits. Pried off the back and cut out all the baubles stuck to the boards with my father’s pliers and mother’s sewing scissors.
My mother scolded: “Your father and I paid a fortune for those! You will never get that to work again.”
Oh mother, I will. Yes, I will. So I asked her, “What do you have to be if you want to work on radios?”
I will admit that by the end of this passage I was crying. Goddamn I’ve read this story a million times and it still gets me at the exact same line:
Radio was over when I met a friend at a professional convention and he asked me about it. Was I active?
Not only had I not been active since we’d last spoken years before, but I didn’t have an antenna up for any band, anywhere.
Worst of all, I couldn’t remember my gear. I knew I had some. I could envision it in my mind, but for the life of me I couldn’t remember the designator.
“I haven’t got out the old ICOM IC-1532 in a long time.”
“I’ll bet. What the heck is that, anyway?”
I looked at my shoes. They were still on my feet. Yes, they were. I said to him, while making sure I didn’t lose my shoes, “Can you believe I can’t remember what type of HF rig I have?”
“Come on. You have to come back. How can you stay away?”
I wanted to tell him it was pretty easy. I yanked my iPhone out of my pocket. “I can play Scrabble with people in Vietnam and Armenia at the same time…”
“Oh, put that away. I’m sick of hearing things like that. I know what you’re going through. I’ve seen it in a lot of guys. You get older. Life changes and technology grows. It’s satellite TV and 3D first-person video games. Sure. But radio is eternal. It’s a fundamental law of physics. We didn’t create it, and we can’t kill it. It called you. Just like the rest of us. And look, we have all these cool digital modes. The gear is better than it’s ever been. But that’s not the reason to come back. Take a minute and think back. Don’t you remember radio? Don’t you remember how it felt when you started?”
“I was very young.”
“We all were. It starts when you’re young. And what about our kids?”
“Because nobody’s showing them that you can still send data around the world with a CW key, some wire, and a three-volt battery.”

“A lemon,” I said. “You could do it QRP with a lemon and a rusted razor blade. But I think those days are over, my friend. For them. For us. We’re just dead guys who don’t realize it yet.”
“Oh, cut it out. You used to love CW. Come to field day. I’ll put you in front of a CW station for 10 minutes and you’ll forget you ever said any of that.”
“Don’t think so,” I said. We shook hands. I never showed at ARRL Field day.
Last one. The tears are fortunately drying up:
“Why are you a ham?” I asked him.
“I like radio,” he said.
“And why is that?” I said. And he shrugged, and the other guys shrugged and looked around as if an answer was crawling around somewhere in the store.
“You know, I went to Antarctica on a science expedition for the National Science Foundation,” I said. “I’d always wanted to go there. Ever since I remember. As a kid I read all the books about Shackleton and Scott and Mawson. In fact, the first short story I ever had published was in QST in the early 1990s and it was about the two things that I loved most in the world: ham radio, and polar exploration. Well, ten years later I got a shot at going to Antarctica and the pole so I took it. And the first day I was in Antarctica was like I had stepped into a dream that I’d been having most of my life.

“Well, I went to one of the bars they have there at McMurdo station, and I sat down and ordered a beer. And one of the guys who was there came up to me, recognizing that I was new, and introduced himself. He said to me, ‘ok, so why are you here?’ and I told him about the project I was on. But he said, ‘No. I mean, why are you HERE?’”
I paused to see if any of them wanted to jump in with the answer, but they didn’t. Eventually the counterman said, “So?”
I said, “So I told him I didn’t have a clue why I was there. Only that I had felt compelled to be there for my entire life. I don’t know why. And when it came my time to go to Antarctica all the pieces fell into place as if it were magic. And I was there. And you know what he told me? He said that everyone – all of them in that bar were there at that moment for the same reason. We all knew from the time we were kids that we just had to go to Antarctica and when it came time to go, the fates turned their wheels and there we were.”
A couple guys nodded, if only to be friendly. And then I said, “It’s the same with radio. You’re here because, who knows why. Because you had to do it. From the time you were a child you wondered about it and now, here you are. Radio called you. Like it called everyone who comes in here. Like it called me.”
“I like your thinking,” said the counterman, finalizing the sale.
And now I knew what my wife meant. A pianist is still a pianist even if he is without a piano, and a radioman is still one without a radio. But the calling is not erased by force of will or lack of material to complete the dream. And the draw never stops. The voice within does not stop because it is difficult or inconvenient to follow.
Serge Lang’s Algebra was my first serious encounter with mathematics; the event was a singular, defining moment in my life.
Back then, I was firmly intent on becoming a poet or, at least, pursuing some kind of literary career. Like most budding poets, I loved books and I liked spending time in the library. I was very curious; I would often wander into a section and pick up a book just to see what that row was about. One day I picked up an old rebound copy of Lang’s Algebra. It was a dirty purplish grey, and it just said Lang: Algebra in half-erased white letters. I don’t think I had any good reason to pick up that book; it certainly wasn’t very attractive; I probably just wondered why one would write such a large tome on algebra. I sat down with the book and read the first page, where he defines a monoid and proves the uniqueness of the identity element. I was fascinated. It was so beautiful. I fell in love.
I don’t think I read much of Lang’s book on that day, I probably only had an hour or less to spare, but I went back to the math section later and I picked up more books. The next one was Willard Van Orman Quine’s Set Theory and its Logic, which is probably the worst possible way to get introduced to Set Theory, but that’s how I eventually became a logician instead of a poet.
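(A minimal sketch of the result that hooked me, using nothing beyond the definitions on Lang’s first page: a monoid is a set $M$ with an associative operation $\cdot$ and an identity element $e$ satisfying $e \cdot m = m \cdot e = m$ for all $m \in M$. If $e$ and $e'$ are both identities, then

$$e = e \cdot e' = e',$$

where the first equality holds because $e'$ is an identity and the second because $e$ is. Associativity isn’t even needed, which is part of the charm.)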
One thing I liked to do back in college was browse MathOverflow’s highest-voted questions. I got a lot more value out of the answers there than out of the more technical ones (naturally, since those are pitched at specialists with a post-rigorous level of mathematical maturity, to use Terry Tao’s term).
One of the discussion threads I liked was Why is a topology made up of ‘open’ sets? You’ll sometimes see undergrads complain that the definition of a topological space is totally unintuitive (waves hands). It turns out that even world-class researchers feel the same way: the question was asked by Minhyong Kim, a Professor of Number Theory at the University of Oxford, a doctoral student of Barry Mazur and Serge Lang (jeez), and a frequently insightful contributor to MO himself.
Kim prefaces his question like so:
I’m ashamed to admit it, but I don’t think I’ve ever been able to genuinely motivate the definition of a topological space in an undergraduate course. Clearly, the definition distills the essence of many examples, but it’s never been obvious to me how it came about, compared, for example, to the rather intuitive definition of a metric space. In some ways, the sparseness of the definition is startling as it tries to capture, apparently successfully, the barest notion of ‘space’ imaginable.
Anyway, here are some answers I liked.
Terry Tao notes that
The textbook presentation of a topology as a collection of open sets is primarily an artefact of the preference for minimalism in the standard foundations of the basic structures of mathematics. This minimalism is a good thing when it comes to analysing or creating such structures, but gets in the way of motivating the foundational definitions of such structures, and can also cause conceptual difficulties when trying to generalise these structures.
An analogy is with Riemannian geometry. The standard, minimalist definition of a Riemannian manifold is a manifold M together with a symmetric positive definite bilinear form g – the metric tensor. There are of course many other important foundational concepts in Riemannian geometry, such as length, angle, volume, distance, isometries, the Levi-Civita connection, and curvature – but it just so happens that they can all be described in terms of the metric tensor g, so we omit the other concepts from the standard minimalist definition, viewing them as derived concepts instead. But from a conceptual point of view, it may be better to think of a Riemannian manifold as being an entire package of a half-dozen closely inter-related geometric structures, with the metric tensor merely being a canonical generating element of the package.
Similarly, a topology is really a package of several different structures: the notion of openness, the notion of closedness, the notion of neighbourhoods, the notion of convergence, the notion of continuity, the notion of a homeomorphism, the notion of a homotopy, and so forth. They are all important, and it is somewhat artificial to try to designate one of them as being more “fundamental” than the other. But the notion of openness happens to generate all the other notions, and has a particularly elegant and simple axiomatisation, so we have elected to make it the basis for the standard minimalist definition of a topology. But it is important to realise that this is by no means the only way to define a topology, and adopting a more package-oriented point of view can be preferable in some cases (for instance, when generalising the notion of a topology to more abstract structures, such as topoi, in which open sets no longer are the most convenient foundation to begin with).
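(For readers without a textbook at hand, the minimalist definition Tao is referring to, plus a few members of the derived package, goes like this; this is standard material, not part of his answer. A topology on a set $X$ is a family $\tau \subseteq \mathcal{P}(X)$ of “open” sets such that

$$\emptyset, X \in \tau, \qquad \bigcup_{i \in I} U_i \in \tau \ \text{ for any } \{U_i\}_{i \in I} \subseteq \tau, \qquad U \cap V \in \tau \ \text{ for any } U, V \in \tau.$$

The rest of the package is then generated from $\tau$: a set $C$ is closed iff $X \setminus C \in \tau$; a set $N$ is a neighbourhood of $x$ iff $x \in U \subseteq N$ for some $U \in \tau$; a map $f \colon X \to Y$ is continuous iff $f^{-1}(V)$ is open in $X$ for every open $V \subseteq Y$; a homeomorphism is a continuous bijection with continuous inverse.)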
Related comment by Jacques Carette:
The difference between the minimalist and ‘packaged’ version is exactly what Bill Farmer and I have called axiomatic theories and high-level theories (respectively). A minimalist ‘basis’ is useful when trying to connect theories (theory interpretations, aka homomorphisms) because less has to be proven, but that makes a really poor “working environment”. Any mathematician using a theory (be it topology or Riemannian geometry) will automatically want to use the high-level version. When trying to mechanize mathematics, such issues are no longer philosophical!
MO user Loop Space is pretty opinionated, but he makes a good point:
It may seem hard to add a new answer to all this, but here’s mine. How to motivate the open set garbage of topological spaces:
Answer: Don’t.
There are many ideas in mathematics that can be easily derived from some real situation, and I would count approximation (ie limits), metric spaces, and neighbourhoods as among these. I think that it is quite easy to motivate the neighbourhood definition of topological spaces, for example, by considering real world examples of needing approximations that can’t be controlled by metrics (for example, if you always need your approximations to be greater than the true value).
But one can take this line too far and try to motivate everything in mathematics from real-world situations and this, I think, misses a great opportunity to teach something that all students of mathematics need to learn: that when something is presented to you in a particular way, you don’t have to accept that viewpoint but can choose a different one more suited to what you want to do.
We try to teach them this with bases of vector spaces: don’t use the basis given, use one that makes the matrix look nice (diagonal if possible!).
So here, we can present topological spaces as sets with lots of declared neighbourhoods satisfying certain simple, intuitive rules. But they are hard to work with so instead we work with open sets (sets which are neighbourhoods of all their points) because it makes life easier.
I should qualify the above a little. It’s written as a counterpoint to all the previous replies which try to justify open sets based on some intuition. I’m not saying that those are wrong – far from it – just that with something like this, one should think carefully about the message one is sending to the students about mathematics. At some point, they have to learn that mathematics strives to be clear and elegant rather than intuitive and vague, and it’s a good idea to do this with an example like topological spaces where we are still close to the intuition, rather than something like function spaces where intuition often takes a hike.
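(For concreteness, the neighbourhood-first presentation Loop Space has in mind runs roughly as follows; this is essentially Hausdorff’s original axiomatisation, paraphrased by me rather than quoted from the thread. Assign to each point $x \in X$ a collection $\mathcal{N}(x)$ of subsets of $X$, the neighbourhoods of $x$, such that: (1) every $N \in \mathcal{N}(x)$ contains $x$, and $X \in \mathcal{N}(x)$; (2) any superset of a neighbourhood of $x$ is again a neighbourhood of $x$; (3) the intersection of two neighbourhoods of $x$ is a neighbourhood of $x$; (4) every $N \in \mathcal{N}(x)$ contains some $M \in \mathcal{N}(x)$ that is a neighbourhood of each of its points. Calling a set open exactly when it is a neighbourhood of all of its points, as in Loop Space’s parenthetical, recovers the usual open-set axioms.)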
Meredith L. Patterson, not to be confused with Meredith A. Patterson the Broadway actress, is a devastatingly intelligent person and a great writer.
I first found out about her via the great paper Exploit programming: from buffer overflows to weird machines, which I found in a footnote in Gwern’s page Surprisingly Turing-complete. (Side note: it’s always a great idea, when you’re feeling uninspired or just want something random to read, to trawl Gwern Branwen’s footnotes.) (Another side note: that paper is wild. I am in awe of the hacker mindset, and it is on full display there.) For whatever reason Meredith’s name lodged itself in my mind despite Bratus being the primary author. A few months later I stumbled across a Facebook comment of hers on a post by one of the SSC crowd (or its superset, I forget). I forget what she wrote, except that for an offhanded remark it gave a strong impression of “wild sideways” intelligence that stood out even among the usually high-signal comments by the people I follow (like but unlike JenniferRM, who seems incapable of not being insightful). A few months after that I saw her mentioned in a Tweet (I think – my dastardly memory is more ‘plausible reconstruction’ than ‘recollection’), and somehow I was compelled to Google her this time, because she was appearing everywhere and maybe it was Baader-Meinhof but whatever – and so that’s how I came across her Wiki page, where at the very end of her ‘Personal’ section there’s this intriguing paragraph:
Patterson, who was diagnosed with autism in adulthood, has stated that “a single-minded focus” has helped her to have an “overwhelmingly positive relationship” with the male-dominated technology community.[31] Although acknowledging that other women have experienced discrimination or sexual assault, she has urged advocacy groups not to minimize the experiences of women who feel welcome, and prefers the Anita Borg Institute for Women and Technology over the Ada Initiative on these grounds.
Why hello. You know who else is autistic and can be single-mindedly focused? Waves hands
This made footnote #31 look very inviting indeed, which was how I found her Medium piece Okay feminism, it’s time we had a talk about empathy. It turned out to be a great read. I mean of course it is, it’s Meredith Patterson. Look at the opening paragraph:
Growing up with autism is a never-ending series of lessons in how people without autism expect the rest of the world to relate to them. This goes double for those who — like me — went undiagnosed until adulthood: the instructions are far less explicit and the standards are higher. “Stop drumming your pencil, don’t you know you’re distracting people?” “Don’t be so direct, don’t you know you’re being insulting?” “Put yourself in her shoes — when are you going to develop a sense of empathy?” Invariably, the autistic behaviour is marked as less-than, called out as needing to change. So we adapt; we learn to keep our “abnormal” attitudes and behaviours to ourselves in the hope of blending in, and when we discover communities where, by chance, we fit in a little better without having to try so hard, we cling to those safe spaces like a drowning man clings to a lifebuoy.
(Be still, my beating heart…)
Meredith is much, much smarter than I’ll ever be, but there are still experiences she talks about I can strongly relate to. For instance, there’s the thought-to-speech translation speed improvement over time facilitated by the constraints of the comms channel (in her case, “the gentle pressure of dial-in session time limits”), and how despite all that improvement it can still not be enough in critical situations.
But enough talking by me. It’s Powerful Quotes time, so here’s a Powerful Quote:
What does leave me feeling snubbed, however — not to mention “scapegoated for the endemic misogyny in our field” — is being told that talking about my overwhelmingly positive relationship with the tech community is nothing more than a callous announcement of “fuck you, got mine.”
Really? Spending more of my formative years interacting with text on a screen than I did with peers my age is “fuck you, got mine”? Being told that my experiences aren’t worthy of consideration because most women don’t relate to them is “fuck you, got mine”? There’s a noticeable empathy vacuum in the room, and for once it’s not coming from the direction of the sperglord. Or sperglady, if you prefer.
What I’ve got, and what I wish the rest of the “women in tech” community who rage against the misogyny they see everywhere they look could also have, is a blazingly single-minded focus on whatever topic I happen to be perseverating on at the moment. It has kept me awake for days puzzling out novel algorithms and it has thwarted a wannabe PUA at a conference completely by accident. It is also apparently the most crashingly successful defense against attempts to make me feel inferior that has ever been devised. When I’m someplace that says on the label that it’s all about the tech, so am I. I may have come by it naturally, but it is a teachable skill. Not only that, it’s a skill that transforms the places where it’s exercised.
And this closing paragraph too:
The “women in tech” experience is not monolithic — not for the women who feel uncomfortable in the tech community, and not for the women who feel comfortable in it, either. None of our stories are universal, but when we look at any landscape of stories from enough of a remove, we begin to see patterns. Right now, the dominant narrative about women in tech is overwhelmingly woven of antipatterns. We know a lot about how to go from problems to bad solutions, but if we’re going to make a tech community where people feel welcome, we have to figure out how to go from problems to good solutions — and disparaging women like me as gender traitors makes those of us who aren’t too socially thickheaded to know better far more reluctant to speak up so that there can even be a narrative about amelioration patterns. This isn’t “fuck you, got mine,” this is “damn you, why won’t you let me give you what I have?” It doesn’t mean shutting down the discussion about antipatterns — those discussions are important and necessary, and should continue — but it does mean not closing the floor to conversation about what positive patterns already exist.
The next piece I read by Meredith is perhaps even more of a home run: When nerds collide. Among other things, she intuitively understands and succinctly recaps David Chapman’s “geeks/MOPs/sociopaths” observation, which I like best as written in Ben Hoffman’s On the construction of beacons, without using their language. (You know the thing where a math proof can become so much shorter and more elegant once you’ve developed the right language to express it in? I was impressed by Meredith the way I’d be impressed by a proof that doesn’t resort to the “proper language” you’d think necessary, and yet is still short and elegant.)
Ready, set, go:
Of all the sound, fury, and quiet voices of reason in the storm of controversy about tech culture and what is to become of it, quiet voice of reason Zeynep Tufekci’s “No, Nate, brogrammers may not be macho, but that’s not all there is to it” moves the discussion farther forward than any other contribution I’ve seen to date. Sadly, though, it still falls short of truly bridging the conceptual gap between nerds and “weird nerds.” Speaking as a lifelong member of the weird-nerd contingent, it’s truly surreal that this distinction exists at all. I’m slightly older than Nate Silver and about a decade younger than Paul Graham, so it wouldn’t surprise me if either or both find it just as puzzling. There was no cultural concept of cool nerds, or even not-cool-but-not-that-weird nerds, when we were growing up, or even when we were entering the workforce.
That’s no longer true. My younger colleague @puellavulnerata observes that for a long time, there were only weird nerds, but when our traditional pursuits (programming, electrical engineering, computer games, &c) became a route to career stability, nerdiness and its surface-level signifiers got culturally co-opted by trend-chasers who jumped on the style but never picked up on the underlying substance that differentiates weird nerds from the culture that still shuns them. That doesn’t make them “fake geeks,” boy, girl, or otherwise — you can adopt geek interests without taking on the entire weird-nerd package — but it’s still an important distinction. Indeed, the notion of “cool nerds” serves to erase the very existence of weird nerds, to the extent that many people who aren’t weird nerds themselves only seem to remember we exist when we commit some faux pas by their standards.
Even so, science, technology, and mathematics continue to attract the same awkward, isolated, and lonely personalities they have always attracted. Weird nerds are made, not born, and our society turns them out at a young age. Tufekci argues that “life’s not just high school,” but the process of unlearning lessons ingrained from childhood takes a lot more than a cap and gown or even a $10 million VC check, especially when life continues to reinforce those lessons well into adulthood. When weird nerds watch the cool kids jockeying for social position on Twitter, we see no difference between these status games and the ones we opted out of in high school. No one’s offered evidence to the contrary, so what incentive do we have to play that game? Telling us to grow up, get over it, and play a game we’re certain to lose is a demand that we deny the evidence of our senses and an infantilising insult rolled into one.
This phenomenon explains much of the backlash from weird nerds against “brogrammers” and “geek feminists” alike. (If you thought the conflict was only between those two groups, or that someone who criticises one group must necessarily be a member of the other, then you haven’t been paying close enough attention.) Both groups are latecomers barging in on a cultural space that was once a respite for us, and we don’t appreciate either group bringing its cultural conflicts into our space in a way that demands we choose one side or the other. That’s a false dichotomy, and false dichotomies make us want to tear our hair out.
Don’t get me wrong, I’m thrilled to bits that every day the power to translate pure thought into actions that ripple across the world merely by the virtue of being phrased correctly draws nearer and nearer to the hands of every person alive. I’m even more delighted that every day more and more people, some very similar to me and others very different, join the chorus of Those Who Speak With Machines. But I fear for my people, the “weird nerds,” and I think I have good reason to. Brain-computer interfaces are coming, and what will happen to the weird nerds when we can no longer disguise our weirdness with silence?
More paragraphs that resonate:
Humans are social animals, and part of what makes a social species social is that its members place a high priority on signaling their commitment to other members of their species. Weirdoes’ priorities are different; our primary commitment is to an idea or a project or a field of inquiry. Species-membership commitment doesn’t just take a back seat, it’s in the trunk with a bag over its head.
Not only that, our primary commitments are so consuming that they leak over into everything we think, say, and do. This makes us stick out like the proverbial sore thumb: We’re unable to hide that our deepest loyalties aren’t necessarily to the people immediately around us, even if they’re around us every day. We have a name for people whose loyalties adhere to the field of technology — and to the society of our fellow weirdoes who we meet and befriend in technology-mediated spaces — rather than to the hairless apes nearby. I prefer this term to “weird nerds,” and so I’ll use it here: hackers.
Okay, look. I really wish I were a hacker. I’m not talking about the dumb mass media stereotype (which mistakes a narrow part of it, security cracking, for the whole), I’m talking about Richard Stallman’s On hacking:
The hacking community developed at MIT and some other universities in the 1960s and 1970s. Hacking included a wide range of activities, from writing software, to practical jokes, to exploring the roofs and tunnels of the MIT campus. Other activities, performed far from MIT and far from computers, also fit hackers’ idea of what hacking means: for instance, I think the controversial 1950s “musical piece” by John Cage, 4’33”, is more of a hack than a musical composition. The palindromic three-part piece written by Guillaume de Machaut in the 1300s, “Ma Fin Est Mon Commencement”, was also a good hack, even better because it also sounds good as music. Puck appreciated hack value.
It is hard to write a simple definition of something as varied as hacking, but I think what these activities have in common is playfulness, cleverness, and exploration. Thus, hacking means exploring the limits of what is possible, in a spirit of playful cleverness. Activities that display playful cleverness have “hack value”.
The concept of hacking excludes wit and art as such. The people who began to speak of their activities as “hacking” were familiar with wit and art, and with the names of the various fields of those; they were also doing something else, something different, for which they came up with the name “hacking”. Thus, composing a funny joke or a beautiful piece of music may well involve playful cleverness, but a joke as such and a piece of music as such are not hacks, however funny or beautiful they may be. However, if the piece is a palindrome, we can say it is a hack as well as music; if the piece is vacuous, we can say it is a hack on music.
Hackers typically had little respect for the silly rules that administrators like to impose, so they looked for ways around. For instance, when computers at MIT started to have “security” (that is, restrictions on what users could do), some hackers found clever ways to bypass the security, partly so they could use the computers freely, and partly just for the sake of cleverness (hacking does not need to be useful). However, only some hackers did this—many were occupied with other kinds of cleverness, such as placing some amusing object on top of MIT’s great dome, finding a way to do a certain computation with only 5 instructions when the shortest known program required 6, writing a program to print numbers in roman numerals, or writing a program to understand questions in English. (Hacking does not have to be without practical use.)
Meanwhile, another group of hackers at MIT found a different solution to the problem of computer security: they designed the Incompatible Timesharing System without security “features”. In the hacker’s paradise, the glory days of the Artificial Intelligence Lab, there was no security breaking, because there was no security to break. It was there, in that environment, that I learned to be a hacker, though I had shown the inclination previously. We had plenty of other domains in which to be playfully clever, without building artificial security obstacles which then had to be overcome.
But at the end of the day I’m not. I just lack the sideways thinking ability to really be able to participate, to play.
Enough digressing! Back to Meredith, and another well-said paragraph:
More recently, Cory Doctorow’s Eastern Standard Tribe explores tribe-formation in a post-geographic, hyperconnected milieu increasingly reminiscent of the one we live in today: one where chosen affiliation means more than the affiliations imposed by accident of birth or location. It’s this last bit — the way we prioritise choice over circumstance — that’s hardest to communicate to people who don’t experience it themselves, like trying to explain “blue” to a cave fish. When we try to but fail, we’re castigated just for trying, and the wedge drives in ever deeper. Usually the reproof comes in the form of scolding us for our “privilege” of exercising choice at all, but this is perverse beyond belief. The world is made better by extending the franchise of choice to everyone, not by condemning people who couldn’t live with any of the choices on offer and therefore made their own.
Great bit of folk history that also doubles as an exploration of the sociopaths-coopting-geeks-beacons theme:
It’s easy to forget that only 20 years ago — around the time I graduated high school — the Internet was a ghost town compared to today. Okay, a ghost town with a thriving university and more communal watering holes than you could have shaken a stick at, but next to nothing in the way of business. Then we won the right to encrypt Net traffic with ciphers and keys incidentally strong enough to protect credit card numbers in transit, and suddenly e-commerce exploded. The smell of wealth attracts the power-hungry and the job-hungry like raw meat does flies, and two bubbles later, the pull is still as strong as ever. (It’s as if there’s some fundamental human drive to communicate or something.) Successive waves of subcultural immigration into the tech industry have brought with them a myriad of social signaling dialects. Without active effort, it’s easy to miss that between two techies, one signifier can easily have three or more meanings, depending entirely on how the people involved got to where they are.
Another one:
The assertion that we should “not be so defensive” is problematic because it denies that hackers have anything to feel defensive toward. People get defensive when they feel like something important to them is in jeopardy, and our community is important to us because it’s where we find people who share our values. These range from the epistemic to the aesthetic — we are especially protective of the beauty of many of the things we care about, often referred to as “elegance.” For those of us who experienced operative ostracism and public shaming, the protectiveness that runs through the entire stack has nigh-infinite fuel to draw from, and at times it doesn’t take much poking to turn a resource that many of us have transmuted into a source of productivity fuel into a tactical nuclear egghead.
Diluting that pool is frightening because it takes us back to the diasporan times in our lives when we upheld those values alone or at most in tiny, isolated handfuls. Many geeks can tell you stories of how they and a few like-minded companions formed a small community that achieved something great, only to have it taken over by popular loudmouths who considered that greatness theirs by right of social station and kicked the geeks out by enforcing weirdo-hostile social norms. (Consider how many hackerspaces retain their original founders.) Having a community they built wrested away from them at the first signs of success is by now a signaling characteristic of weirdohood. We wouldn’t keep mentioning it if it didn’t keep happening.
Another great paragraph among many in Meredith’s outline of “what sets hackers apart are our values”:
Cattle die, kindred die; all are mortal, but the good name never dies of one who has done well. Or the bad name of those who have done evil, but if their code was good, we keep using it until something better comes along. You’d be hard-pressed to find anyone who condones Hans Reiser’s murder of his wife, but a hell of a lot of people still use his filesystem. Even more of us felt the hairs on the backs of our necks stand up when the prosecution argued that Reiser didn’t “act like” a grieving widower should, whether we’d read The Stranger or not. If you’ve seen The Shawshank Redemption, you have a hint of an idea of why this worries us.
The idea of anathematising all of a person’s good works because of something else they said or did is just as alien and repellent to us as our reaction is to someone who wishes Hacker News would die because Paul Graham is kind of a dick sometimes. My Russian coauthor Sergey Bratus points out that keeping works by “ideologically impure” persons out of public view was instrumental to Soviet systems of social control. And as @puellavulnerata acutely observes, a culture that encourages judging people unilaterally, rather than judging their actions in context, is one that allows socially-adept hierarchy climbers to decontextualise their own self-serving cruelties as “necessary for the cause” and stage witchcraft trials against the weirdoes on the margin.
Some of these paragraphs are making me feel sad because of how much they verbalize vague but long-held thoughts I’ve had:
Just as to many women, every man is Schroedinger’s Rapist, to most outsiders, every insider is Schroedinger’s Asshole Trying To Have Me Ostracised. If you want to overcome that cognitive bias from outside of it — and it is a bias, in exactly the same way that Schroedinger’s Rapist is a cognitive bias — you’re going to have to offer more acceptance, not less. Probably orders of magnitude more, if you want us to notice. And you’re probably going to have to prove it repeatedly, in the face of bitter skepticism, because not to put too fine a point on it, we’ve all been conned by the spectre of acceptance at least once and we’re none of us too keen on repeating that mistake.
You have to admit that Lee Billings’ book Five Billion Years of Solitude has a hell of a cover:
There’s a passage in the book that is probably one of my favorites of all time, on the nature of deep time:
Deep time is something that even geologists and their generalist peers, the earth and planetary scientists, can never fully grow accustomed to.
The sight of a fossilized form, perhaps the outline of a trilobite, a leaf, or a saurian footfall, can still send a shiver through their bones, or excavate a trembling hollow in the chest that breath cannot fill. They can measure celestial motions and list Earth’s lithic annals, and they can map that arcane knowledge onto familiar scales, but the humblest do not pretend that minds summoned from and returned to dust in a century’s span can truly comprehend the solemn eons in their passage.
Instead, they must in a way learn to stand outside of time, to become momentarily eternal. Their world acquires dual, overlapping dimensions— one ephemeral and obvious, the other enduring and hidden in plain view.
A planet becomes a vast machine, or an organism, pursuing some impenetrable purpose through its continental collisions and volcanic outpourings. A man becomes a protein-sheathed splash of ocean raised from rock to breathe the sky, an eater of sun whose atoms were forged on an anvil of stars.
Beholding the long evolutionary succession of Earthly empires that have come and gone, capped by a sliver of human existence that seems so easily shaved away, they perceive the breathtaking speed with which our species has stormed the world. Humanity’s ascent is a sudden explosion, kindled in some sapient spark of self-reflection, bursting forth from savannah and cave to blaze through the biosphere and scatter technological shrapnel across the planet, then the solar system, bound for parts unknown. From the giant leap of consciousness alongside some melting glacier, it proved only a small step to human footprints on the Moon.
The modern era, luminous and fleeting, flashes like lightning above the dark, abyssal eons of the abiding Earth. Immersed in a culture unaware of its own transience, students of geologic time see all this and wonder whether the human race will somehow abide, too.
I am very low-verbal compared to Scott Alexander, so in most cases I’ll read an essay by him and go “I’d never think of that”. But sometimes I go “this is a much much higher-resolution+coherence+completeness version of some vague scattered thoughts that have been bugging me for a long time, thank you Scott for giving voice to them”. Those essays tend to especially endure in memory.
One of them is Does age bring wisdom?, which Scott published the day he turned 33. The following passages resonated deeply with me:
We’ve been talking recently about the high-level frames and heuristics that organize other concepts. They’re hard to transmit, and you have to rediscover them on your own, sometimes with the help of lots of different explanations and viewpoints (or one very good one). They’re not obviously apparent when you’re missing them; if you’re not ready for them, they just sound like platitudes and boring things you’ve already internalized.
Wisdom seems like the accumulation of those, or changes in higher-level heuristics you get once you’ve had enough of those. I look back on myself now vs. ten years ago and notice I’ve become more cynical, more mellow, and more prone to believing things are complicated. For example:
1. Less excitement about radical utopian plans to fix everything in society at once
2. Less belief that I’m special and can change the world
3. Less trust in any specific system, more resignation to the idea that anything useful requires a grab bag of intuitions, heuristics, and almost-unteachable skills.
4. More willingness to assume that other people are competent in aggregate in certain ways, eg that academic fields aren’t making incredibly stupid mistakes or pointlessly circlejerking in ways I can easily detect.
5. More willingness to believe that power (as in “power structures” or “speak truth to power”) matters and infects everything.
6. More belief in Chesterton’s Fence.
7. More concern that I’m wrong about everything, even the things I’m right about, on the grounds that I’m missing important other paradigms that think about things completely differently.
8. Less hope that everyone would just get along if they understood each other a little better.
9. Less hope that anybody cares about truth (even though ten years ago I would have admitted that nobody cares about truth).
All these seem like convincing insights. But most of them are in the direction of elite opinion. There’s an innocent explanation for this: intellectual elites are pretty wise, so as I grow wiser I converge to their position. But the non-innocent explanation is that I’m not getting wiser, I’m just getting better socialized. Maybe in medieval Europe, the older I grew, the more I would realize that the Pope was right about everything.
I’m pretty embarrassed by Parable On Obsolete Ideologies, which I wrote eight years ago. It’s not just that it’s badly written, or that it uses an ill-advised Nazi analogy. It’s that it’s an impassioned plea to jettison everything about religion immediately, because institutions don’t matter and only raw truth-seeking is important. If I imagine myself entering that debate today, I’d be more likely to take the opposite side. But when I read Parable, there’s…nothing really wrong with it. It’s a good argument for what it argues for. I don’t have much to say against it. Ask me what changed my mind, and I’ll shrug, tell you that I guess my priorities shifted.
But I can’t help noticing that eight years ago, New Atheism was really popular, and now it’s really unpopular. Or that eight years ago I was in a place where having Richard Dawkins style hyperrationalism was a useful brand, and now I’m (for some reason) in a place where having James C. Scott style intellectual conservativism is a useful brand. A lot of the “wisdom” I’ve “gained” with age is the kind of wisdom that helps me channel James C. Scott instead of Richard Dawkins; how sure am I that this is the right path?
Sometimes I can almost feel this happening. First I believe something is true, and say so. Then I realize it’s considered low-status and cringeworthy. Then I make a principled decision to avoid saying it – or say it only in a very careful way – in order to protect my reputation and ability to participate in society. Then when other people say it, I start looking down on them for being bad at public relations. Then I start looking down on them just for being low-status or cringeworthy. Finally the idea of “low-status” and “bad and wrong” have merged so fully in my mind that the idea seems terrible and ridiculous to me, and I only remember it’s true if I force myself to explicitly consider the question. And even then, it’s in a condescending way, where I feel like the people who say it’s true deserve low status for not being smart enough to remember not to say it. This is endemic, and I try to quash it when I notice it, but I don’t know how many times it’s slipped my notice all the way to the point where I can no longer remember the truth of the original statement.
The lattermost paragraph is sad, because I can feel it happening in me too.
For a while now Venkatesh Rao has been writing about mediocrity as aspiration. Initially I reacted with extreme aversion: I found the philosophy of life he was elaborating intrinsically abhorrent, and refused to read anything further from him on the subject.
Then I came across Jacob Falkovich’s post Unstriving, which links to Zvi Mowshowitz’s Something was wrong. Both essays finally primed me to be receptive to Venkat’s thesis, by framing things in terms of Goodhart’s law-style failures arising from illegible benefits “lost in translation”. Suitably primed, I read the Venkat essay Jacob links to, and realized that I no longer rejected his ideas so vehemently.
The thesis is that mediocrity, construed as resistance to optimization, is a desirable intentional stance because it creates slack, allowing you to keep playing the infinite game of life. It’s not a position on some spectrum of performance, but a stance towards all performance that leads to middling performance on any specific performance spectrum as a side effect.
(Notice how many ideas the thesis builds upon. There’s Goodhart’s law. There’s legibility in the sense of James Scott. There’s Knuth’s premature optimization quote, here generalized. There’s Zvi’s slack. There’s James Carse’s infinite games. There’s Dan Dennett’s intentional stance, although this last one can arguably be omitted.)
Jacob points out the burgeoning pathology of parents overoptimizing their children’s lives for later success in life, to the latter’s emotional/mental detriment:
I’m not the first person to notice that Something is Wrong with Kids These Days (TM) and to tie it to an almost pathological drive by parents to optimize childhood. Helicopter parenting. Snowplow parenting. Tiger moms. Academically tracked selective preschools and elementary schools where 6-year-olds chant “We are college bound!” in unison. Something is wrong, really wrong.
And it’s not just wrong for the kids, it’s wrong for the parents too. Parents are sacrificing every bit of slack they have to give their children one more unasked-for advantage, driving their child to a slightly more prestigious violin teacher who lives half an hour further away. And once a parent has sacrificed money, time, their social life and romantic life, it is very hard to accept that your child is merely a not-bad violin player. He may grow up to play bass for the rock band at the local state college! Ma’am, why are you crying? Ma’am?
There’s a lot of evidence that all this optimized child-rearing does not make children any more optimal, only miserable. Mediocre parenting isn’t guaranteed to produce excellent children either, but it should at least be a lot more fun.
The bolded link is Zvi’s essay. It’s a bit hard to replicate the effect of reading it just by quoting specific passages because he uses repetition to great effect, slowly crescendoing throughout the entire article to finally climax in a nameless but overwhelming sense of dread. But I’ll quote anyway:
It was circle time. The kids gathered in a circle.
Our son did not join the circle. We tried to get our son to join the circle.
He did not want to join the circle.
We kept trying. He kept not wanting to.
He wasn’t wrong. The circle was lame. Super lame.
All the other kids were smiling. They liked the circle. Why did they like the circle? Where did they get this level of buy in?
A full four adults, myself included, were trying to get my son to join this circle. He was failing the test. We were failing the test. He needed to join! Or else! Things! His future! The alpha quadrant!
I heard myself talking. I said “Join the circle. Don’t you want to bow to social pressure?”
Out loud. I said that out loud. The other adults did not react. They somehow seemed amused. No funny looks. The other kids didn’t notice. They weren’t listening. No curiosity. There were lots of strange people there and they didn’t notice. In order to sit in a circle. I kept going. “Don’t you want to conform? Everyone wants you to join the circle.” I had one or two more. It felt right. It fit. I was sure, somehow.
How was I so sure? I didn’t know. I decided I must have picked up on something I hadn’t consciously processed. We went outside, they talked a bit among themselves, then talked to us. They said they’d think about whether they had a place for him that met his needs. We asked a bunch of questions. The answer to all of them was “it varies.”
We left and started walking back towards the subway. We would discuss our findings. What did I think?
What did I think? Something is wrong.
I wanted to shout it from the rooftops. But did I have the right to say that? Isn’t this place doing everything it is supposed to? What could I actually point to? Who was I to say, no, run screaming for your life?
This was what everyone official said was appropriate. This was what we were supposed to want. If we didn’t do it we would be hurting our son. We would be irresponsible. It would be just awful. We would be just awful. We’d certainly be to blame for what happened. There would be no end to the trouble. The city would try to punish us, deny us other things we needed; not everything they wanted for us was bad. If we just said yes we could be done with it. This is what people do. It must be all right, right?
I didn’t know what I was going to say.
Zvi concludes, and his wife agrees, that “it felt like this is where children’s souls go to die”. I agree.
I wonder what egregore this is. Whatever it is, it’s right there with Moloch, with Azathoth, with Ra. (It’s hard to overstate how powerful these egregores are, and how impactful they’ve been on my view of the world.)
Venkat talks about mediocrity in the context of the rise of AI “taking our jobs”. Competing with the machines will ultimately be futile — see shell scripts on one end of the spectrum, and AlphaZero on the other. The solution instead is to resist optimization. This is a “meta-trait” independent of all competitive spectra, not a qualifier on a trait like intelligence or whatever.
A toy narrative of how mediocrity-as-optimization-resistance can be evolutionarily adaptive:
Since we’re talking about intelligence, AI, and robots here, the relevant side-effect spectrum here is intelligence, but it could be anything: beauty, height, or ability to hold your breath underwater.
Or to take an interesting one, the ability to fly.
Back in the Cretaceous era, to rule the earth was to be a dinosaur, and to be an excellent dinosaur was to be a large, apex-predator dinosaur capable of starring in Steven Spielberg movies.
Then the asteroid hit, and as we know now, the most excellent and charismatic dinosaurs, such as the T-Rex and the velociraptor, didn’t survive. Maybe things would have been different if the Rock had been around to save these charismatically excellent beasts, but he wasn’t.
What did survive? The mediocre dinosaurs, the crappy, mid-sized gliding-flying ones that would evolve into a thriving group of new creatures: birds.
Notice something about this example: flying dinosaurs were not just mediocre dinosaurs, they were mediocre birds before “be a bird” even existed as a distinct evolutionary finite game.
The primitive ability to fly turned out to be important for survival, but during the dinosaur era, it was neither a superfluous ability, nor a premium one. It was neither a spandrel, nor an evolutionary trump card. It was just there, as a mediocre, somewhat adaptive trait for some dinosaurs, not the defining trait of all of them. What it did do was embody optionality that would become useful in the future: the ability to exist in 3 dimensions rather than 2.
A bit more elaboration on the idea of resistance to optimization:
So middling performance itself is not the essence of mediocrity. What defines mediocrity is the driving negative intention: to resist the lure of excellence.
Mediocrity is the functionally embodied and situated form of what Sarah Perry called deep laziness. To be mediocre at something is to be less than excellent at it in order to conserve energy for the indefinitely long haul.
What does mediocrity conserve energy for? For unknown future contingencies of course. You try not to be the best dinosaur you can be today, because you want to save some evolutionary potential for being the most mediocre bird you can be tomorrow, which is so not even a thing at the moment that you don’t even have a proper finite game built around it.
And this is not foresight. This is latent optionality in mediocre current functionality. Sometimes you can see such nascent adaptive features with hindsight. Other times, even the optionality is not so well defined. The inner ear bones for instance, evolved from the optionality of extra-thick jaw bones. That is a case of much purer reserve evolutionary energy than dinosaur wings.
If excellence is understood as optimal performance in some legible sense, such as winning a finite game of “be the best dinosaur” or “be the best bird” or “be the best avocado toast,” then mediocrity embodies the ethos of resistance to optimization.
So is mediocrity the same as what computer scientists call “satisficing”? No. Satisficing entails playing a finite game, by using a context-dependent way to define “continue playing”. Mediocrity is context independent, and in fact redefines the context, or “performance boundary”, via sloppiness:
The idea of satisficing behavior implicitly assumes legibility, testability, and acceptance of constraints to be satisfied. You need a notion of satisficing behavior any time you want to define the other end of the spectrum from excellence as some sort of consistent, error-free performance. You don’t seek the best answers, merely the first right answer you stumble upon. For some non-fuzzy definition of “right.”
This is just a different way of playing a finite game. Instead of optimizing (playing to win), you minimize effort to stay in the specific finite game. If you can perform consistently without disqualifying errors, you are satisficing. Most automation and quality control is devoted to raising the floor of this kind of performance. This is a context-dependent way to define “continue playing.”
Mediocrity, however, is a context-independent trait. The difference is not just a semantic one. To pull your punch is not the same as punching as hard as you can, but neither is it the same as satisficing some technical definition of “punch.” A pulled punch does not find the maximum in punching excellence, but neither does it seek to conscientiously satisfy formal constraints of what constitutes a punch.
Mediocrity in fact tends to redefine the performance boundary itself through sloppiness. It might not satisfy all the constraints, and simply leave some boxes unchecked. Like playing a game of tennis with sloppiness in the enforcement of the rule that the ball can only bounce once before you return it.
Mediocrity has a meta-informational intent driving it: figuring out what constraints are actually being enforced, and then only satisficing those that punish violation. And this is not done through careful testing of boundaries, but simple sloppiness. You do whatever, and happen to satisfy some constraints, violate others. Of the ones you violate, some violations have consequences in the form of negative feedback. That’s where you might refine behavior. You learn which lines matter by being indifferent to all of them and stepping over some of the ones that matter.
You could say mediocrity seeks to satisfice the laws of the territory rather than the laws of the map.
Mediocrity is not about what will satisfy performance requirements, but about what you can get away with.
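Venkat’s description is effectively an algorithm: act sloppily, let negative feedback reveal which constraints are actually enforced, and refine behavior only there. Here is a toy Python sketch of that loop (entirely my own, not from the essay; the constraint count, the set of enforced constraints, the per-constraint effort cost, and the 50/50 sloppiness are all invented for illustration), contrasting it with a satisficer who conscientiously checks every documented box:

```python
import random

# Toy model: 10 documented constraints, but only 3 are actually enforced
# with negative feedback ("laws of the territory" vs. "laws of the map").
# All numbers here are made up for illustration.
random.seed(0)
N_CONSTRAINTS = 10
ENFORCED = {1, 4, 7}          # hypothetical: which violations actually get punished
COST_PER_CONSTRAINT = 1.0     # effort spent deliberately satisfying one constraint

def satisficer_round():
    # The satisficer conscientiously satisfies every documented constraint.
    return COST_PER_CONSTRAINT * N_CONSTRAINTS, 0   # (effort, punishments)

attention = {c: 0.0 for c in range(N_CONSTRAINTS)}  # the mediocre agent's state

def mediocre_round():
    # The mediocre agent spends effort only on constraints it has learned to
    # attend to; everything else is left to sloppy chance (50/50 compliance).
    effort = sum(attention.values()) * COST_PER_CONSTRAINT
    punishments = 0
    for c in range(N_CONSTRAINTS):
        complied = attention[c] > 0.5 or random.random() < 0.5
        if not complied and c in ENFORCED:
            punishments += 1
            attention[c] = 1.0   # refine behavior only where feedback bites
    return effort, punishments

sat_effort = med_effort = med_pain = 0.0
for _ in range(100):
    e, _ = satisficer_round(); sat_effort += e
    e, p = mediocre_round();   med_effort += e; med_pain += p

print(f"satisficer effort over 100 rounds: {sat_effort:.0f}")
print(f"mediocre effort over 100 rounds:   {med_effort:.0f} ({med_pain:.0f} punishments paid)")
```

Under these made-up numbers the mediocre agent quickly converges on attending to only the three enforced constraints, ends up far cheaper per round than the satisficer, and pays for the map-versus-territory discovery with a handful of early punishments.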
Mediocrity relates to agency via the notion of situational adequacy:
I grew up with a Hindi phrase, chalta hai, that captures the essence of the ethos of mediocrity. It corresponds loosely to the English it will do, which is subtly different from good enough, but stronger as a norm. For example, the exchange,
Chalega? (will it do?) Chalega. (yes, it will do)
is a common transactional protocol. A consensus acceptance of improvised adequacy.
Good enough hints at satisficing behavior with reference to a standard, but it will do and chalta hai get at situational adequacy. To say that something “will do” is to actively and independently judge the current situation and act on that judgment, if necessary overriding prevailing oughts. The chalta hai protocol shares the agency involved in this judgment through negotiation, though the judgment need not be shared.
Something “will do” when it satisfices the constraints that aren’t being ignored, and is indifferent to the rest, which usually means falling back to minimum-energy defaults, whether or not they violate constraints.
Mediocrity relates to bullshit:
There is a deep relationship between bullshit and mediocrity. Bullshit is indifference to the truth or falsity of statements. Mediocrity is indifference to whether constraints are violated or complied with. Where transgression involves deliberately violating constraints, mediocrity doesn’t care whether it is in violation or compliance. Mediocrity is to satisficing and transgression as bullshit is to truth-telling and lying.
The indifference-to-truth definition of bullshit above is the philosopher Harry Frankfurt’s; Venkat also ties mediocrity in with the anthropologist David Graeber’s theory of bullshit jobs:
There is only one way to be a telephone sanitizer, account executive, or TV producer: a mediocre way. You may be wildly successful and make a lot of money in these domains but it has little to do with meeting clear standards of excellence or error-free functioning. You may even pursue some sort of Zen-like ideal of unacknowledged excellence, but that will seem arbitrary and even eccentric. The point of these jobs is mostly optionality. Mediocrity is the rational performance standard in such domains.
These domains do not fundamentally support a native spectrum of performance where excellence is really meaningful, because nobody really cares enough, and because the boundaries are too messy. Because here’s the thing: what creates excellence is not that people are good at something, but that people care enough to be good at something.
Mediocrity and evolution:
One of the biggest sources of misconceptions about evolution is the fact that its most popular lay formulation is in the form of a superlative. Survival of the fittest. This leads to two sorts of errors.
The shallow error is to assume fit has a static definition in a changing landscape, like smart or beautiful. It is the sort of error made by your average ugly idiot on the Internet.
This isn’t actually too bad, since at various times, specific legible fitness functions may be good approximations of the fitness function actually induced by the landscape.
The deep error though is to assume the superlative form of the imperative towards fitness. Fit and fittest are not the same thing. In the gap between the two lies the definition of mediocrity. To pursue mediocrity is to procrastinate on optimizing for the current fitness function because it might change at any time.
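To make the fit-versus-fittest gap concrete, here is a crude simulation (mine, not Venkat’s; every parameter is invented) of a lineage that fully optimizes for the current fitness function versus one that deliberately leaves slack. The specialist earns more per generation while its niche lasts, but each landscape flip costs it a long re-specialization; the mediocre lineage never earned the premium, and never pays the retooling bill:

```python
import random

# All numbers invented for illustration.
random.seed(1)
FLIP_PROB = 0.05       # chance per generation that the fitness function changes
RETOOL_TIME = 25       # generations a specialist needs to re-specialize after a flip
GENERATIONS = 10_000

def lifetime_payoff(specialist: bool) -> float:
    total, retooling = 0.0, 0
    for _ in range(GENERATIONS):
        if specialist and random.random() < FLIP_PROB:
            retooling = RETOOL_TIME          # the niche it optimized for is gone
        if retooling > 0:
            retooling -= 1                   # earns nothing while re-specializing
        else:
            # the mediocre lineage earns less per generation, but earns it
            # straight through every flip, because it was never locked in
            total += 1.0 if specialist else 0.7
    return total

print("fittest (specialist): ", round(lifetime_payoff(True)))
print("fit enough (mediocre):", round(lifetime_payoff(False)))
```

With these numbers the specialist spends roughly half its generations retooling and ends up with the smaller lifetime payoff; where the crossover sits obviously depends on how often the landscape flips, which is exactly the bet mediocrity makes.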
(This post is mostly quotes from Cosma Shalizi’s book review of James Flynn’s What is Intelligence? Beyond the Flynn Effect.)
What is the Flynn effect?
In every country where we can find records of consistent IQ tests given to large numbers of people, scores have been rising as far back as the records go, in some cases to the early 20th century, and by large amounts, sometimes (e.g., for draftees in the Netherlands) as much as twenty IQ points every thirty years.
How did the Flynn effect go unnoticed for so long?
By convention, IQ tests are designed so that the mean score is 100 points, the standard deviation is 15 points, and the scores follow a Gaussian probability distribution, the now-infamous bell-shaped curve. At least, all of this is true of a norming or reference sample of test-takers, when the test is put together; they are hoped to be representative of future test-takers. Scores on individual questions are weighted and added up, and then transformed, as the distribution of raw scores is quite skewed rather than symmetrically bell-shaped. In essence, the IQ scores of future test-takers are computed by seeing where their raw scores fall in the distribution of the original reference sample, and reading off the corresponding Gaussian value. There are wrinkles — e.g., some test-makers set the standard deviation to be 16 or even 24 points — but those are the basics.
Two test-takers who give exactly the same set of answers to the same questions can thus get different IQ scores, if they are normed against different reference samples. Test-makers periodically re-norm their tests against new samples, keeping the mean at 100, but that mean score can represent very different levels of absolute performance.
Flynn’s discovery came from intelligence tests which had been consistently given with the same sets of questions over time, and where the raw scores had been recorded. What he found is that someone who gets an IQ score of 100 today gets more questions right than did someone who got a score of 100 in 1950, who in turn answered more right than did someone with a score of 100 in 1900. The exact rate of gain depends on the country and on the test, from a high of 6–7 IQ points per decade to a low of only a few points over a half-century. A rough summary is that measured IQ has been rising at, conservatively, 3 points per decade for as far back as the data go, across the industrialized world.
This rate is enough that someone who had an IQ of 100 in 1900 would have had an IQ of only 70 in 2000 — low enough to be classified as mentally retarded, and so, in the US, exempt from capital punishment, as being incapable of fully understanding their own actions.
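The norming procedure Shalizi describes is mechanical enough to sketch in a few lines of Python. This is my own illustration, not code from the review; the reference sample, the raw-score scale, and the six-point raw gain are all made up. The method itself is distribution-free: each new raw score is converted to its percentile rank within the reference sample, and that percentile is read off an N(100, 15) curve, which is why skewed raw-score distributions are no obstacle:

```python
import random
from statistics import NormalDist

def make_norms(reference_raw_scores, mean=100, sd=15):
    """Build an IQ scoring function normed against a reference sample."""
    ref = sorted(reference_raw_scores)
    n = len(ref)
    def iq(raw):
        # percentile rank of `raw` within the reference sample
        below = sum(1 for r in ref if r < raw)
        pct = (below + 0.5) / n
        pct = min(max(pct, 1e-6), 1 - 1e-6)      # clamp away from 0 and 1
        # read off the matching point on a Gaussian with mean 100, SD 15
        return mean + sd * NormalDist().inv_cdf(pct)
    return iq

# Hypothetical 1950 norming sample: raw scores centered on 30 out of 60.
random.seed(0)
sample_1950 = [random.gauss(30, 6) for _ in range(10_000)]
iq_1950 = make_norms(sample_1950)

print(round(iq_1950(30)))   # ~100: an average 1950 test-taker, on 1950 norms
print(round(iq_1950(36)))   # ~115: a later test-taker answering 6 more right
```

Re-norm against the later cohort and that same 36-raw-score answer sheet maps back to IQ 100, which is how the drift stayed invisible for so long; and at Shalizi’s conservative 3 points per decade, it compounds to exactly the 30-point gap between the 1900 and 2000 test-takers above.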
What are some attempted explanations for the Flynn effect which get dismissed in the book?
A number of explanations have been suggested for the Flynn effect, most of which Flynn swats down with little trouble. It is just too large, too widespread, and too steady, to be due to improved nutrition, greater familiarity with IQ tests, or (a personal favorite) hybrid vigor from mixing previously-isolated populations, all of which have been seriously proposed. Nobody seems to have bit the bullet and suggested that modern societies have natural or sexual selection for higher IQ; but the numbers wouldn’t add up in any case.
The Flynn effect seems to imply at least one of two things: either our ancestors of a century ago were astonishingly stupid, or IQ tests measure intelligence badly.
What’s Flynn’s thesis for why the Flynn effect happens? It’s the “IQ tests are culturally biased” answer, but he fleshes it out in the only way I’ve seen that’s actually substantive:
Flynn contends that our ancestors were no dumber than we are, but that most of them used their minds in different ways than we do, to which IQ tests are more or less insensitive; we have become increasingly skilled at the uses of intelligence IQ tests do catch. Though he doesn’t put it this way, he thinks that IQ tests are massively culturally biased, and that the culture they favor has been imposed on the populations of the developed countries (and, increasingly, the rest of the world) through a far-reaching, sustained and successful campaign of cultural imperialism and social engineering.
Illustrative example:
This can be seen in Flynn’s discussion of a hypothetical, but typical, test question: “How are rabbits and dogs alike?”
Answers like “both are raised on farms”, “both come in breeds with different colors”, “both are eaten by people in some parts of the world and kept as pets in others”, “both have claws”, “both can destroy gardens”, and Flynn’s example answer, “you can use dogs to hunt rabbits” are true, but not what IQ testers look for. (Even the answer “they’re not alike, in any way that matters” could be sensibly defended.)
The test-makers want you to say “both are mammals”. What the testers look for, in other words, is not knowledge of the concrete world or of functional relationships, but mastery of one set of abstract concepts, which the test-makers themselves have internalized as highly-trained scientific professionals and literate intellectuals.
To generalize (ha!) from the above illustrative example — this quote is also the reason for this blog post:
All thought involves some degree of abstraction, but IQ testers, like intellectuals in general, tend to value abstraction as such. As well as preferring answers which show familiarity with our current scientific concepts, IQ tests also reward certain kinds of problem-solving abilities, what Flynn describes as solving “problems not solvable by mechanical application of a learned method” (p. 53; I don’t think he really means to deny the possibility of AI). Prime examples, to his mind, are things like tests of similarities and analogies, and pattern-completion tests like Raven’s Progressive Matrices. In the latter, each question consists of a series of line drawings, followed by a choice of several extra drawings from which the test-taker is supposed to pick the one that completes or finishes the sequence. Raven hoped that his test would be a fairly pure measurement of ability to “educe relations”, i.e., to discover patterns, which he regarded as the essence of intelligence. Raven’s test is often said to be subject to little or no cultural bias (a claim resting on basically no evidence whatsoever). Yet it is on tests of this type that the Flynn effect is strongest, 5 points per decade at the least. Below them come similarities and analogies tests of the rabbit/dog kind. Scores on vocabulary, arithmetic and general-information tests, on the other hand, show the lowest rates of improvement, and even some small declines.
Flynn refers to these transformations in how we think as “liberation from the concrete” and “putting on scientific spectacles”. He claims that the Flynn effect is a consequence of the changes in how people live and what skills they cultivate brought about by the industrial revolution. We now overwhelmingly keep dogs as pets, not to hunt, and we go to schools where we are not just taught to read but to think abstractly, and to use a common set of abstractions. Flynn refers here to the well-known work done by the great Soviet psychologist A. R. Luria in the 1930s, described in the latter’s Cognitive Development: Its Social and Cultural Foundations (1974). Luria claimed to show, by means of fieldwork among peasants and nomads in Uzbekistan, that the kind of abstract reasoning skills Flynn points to developed in tandem with literacy, schooling, and participation in the modern economy. While Luria’s work has flaws (an Uzbekistani peasant who had abstract reasoning skills, confronted in the 1930s by a Russian Communist official asking them strange and leading questions, had many excellent reasons to play dumb), his findings are broadly consonant with later work on cross-cultural psychology.
At a larger scale, there is a connection, which Flynn does not draw, to the investigations of historians and sociologists into links between industrialization, nationalism and schooling. Americans may recall that our public schools were consciously used to make this country a melting-pot, to turn the descendants of immigrants from dozens of countries with many languages and cultures into a more-or-less unified people. Similar processes took place in the late 19th and early 20th centuries in all the developed countries — and, somewhat later, took off in the rest of the world. Governments and educated classes sought, in historian Eugen Weber’s phrase, to turn “peasants into Frenchmen” — or into Dutchmen, Germans, Italians, Poles, Serbs, Russians, etc.; at the time Luria worked, the Soviet government was busy turning peasants into Uzbeks.
Out of the blooming, buzzing confusion of local dialects and traditions, intellectuals invented (or, as they saw it, codified) standardized literary languages and “ancient folk customs”, which they then propagated through state-organized universal education and the new mass media. Simultaneously, they took modes of thinking which previously had been the reserve of their own small minority of literate specialists and made them part of everyone’s education. As the sociologist Ernest Gellner emphasized, this was not just an exercise in cultural domination. An industrial economy constantly creates new jobs and destroys old ones, so learning a trade, probably one’s father’s, by immersion from childhood won’t work any longer; more generic and so more abstract training is required. In an industrial society, people constantly face strangers and novelties. Action then cannot be guided by custom and familiar context, but must instead be guided by explicit impersonal rules, cultural conventions shared across whole countries rather than single villages, and original thought and decision. An industrial society is one in which the whole economically effective population has to deal with machines and with written communications, again with minimal help from context, and where a large fraction of workers must have some mastery of the abstract, scientific concepts which make industrial technologies comprehensible. Finally, in an industrial society everyone routinely deals with large bureaucracies (when privately owned we call them “corporations”), and actually most people work within them. All of this points towards not just standardized and literate cultures, but also ones which reward abstract thinking, and even more a change of attitudes, to be willing or even eager to follow arbitrary-seeming abstract rules with no immediate point or relevance, just because a person in authority tells you to do so.
Again, this did not create new ways of thinking so much as spread ones which had existed for a few millennia but been very rare. If you had asked medieval scholars like Averroes or William of Ockham “how is a rabbit like a dog?”, they would have replied that rabbits and dogs are both species of the genus “quadruped animals”. (Ockham might have quibbled about the difference between names and things.) They were already “liberated from the concrete”, but they used a somewhat different system of abstractions than we do. William Gibson once said that “the future is already here, it just isn’t widely distributed yet”; the same was once true of this aspect of the present.
As you read this, some young man or woman is sitting at a desk in a university, earnestly studying material they have no intention of ever using, and no interest in knowing for its own sake. They want a high-paying job, and the high-paying job requires a piece of paper, and the piece of paper requires a previous master’s degree, and the master’s degree requires a bachelor’s degree, and the university that grants the bachelor’s degree requires you to take a class in 12th-century knitting patterns to graduate. So they diligently study, intending to forget it all the moment the final exam is administered, but still seriously working away, because they want that piece of paper.
Maybe you realized it was all madness, but I bet you did it anyway. You didn’t have a choice, right?
There’s something heart-tuggingly romantic about the story of the passenger pigeon – how it was once the most numerous bird in the whole world, how humans hunted it to extinction in the evolutionary blink of an eye. Anyway, here are some anecdotes by contemporary observers. See also the heart-wrenching essay The passenger pigeon: once there were billions, by Jerry Sullivan.
John James Audubon, in his 1831 book Ornithological Biography describing a migration he observed in 1813:
I dismounted, seated myself on an eminence, and began to mark with my pencil, making a dot for every flock that passed. In a short time finding the task which I had undertaken impracticable, as the birds poured in in countless multitudes, I rose and, counting the dots then put down, found that 163 had been made in twenty-one minutes.
I traveled on, and still met more the farther I proceeded. The air was literally filled with Pigeons; the light of noon-day was obscured as by an eclipse; the dung fell in spots, not unlike melting flakes of snow, and the continued buzz of wings had a tendency to lull my senses to repose…
I cannot describe to you the extreme beauty of their aerial evolutions, when a hawk chanced to press upon the rear of the flock. At once, like a torrent, and with a noise like thunder, they rushed into a compact mass, pressing upon each other towards the center. In these almost solid masses, they darted forward in undulating and angular lines, descended and swept close over the earth with inconceivable velocity, mounted perpendicularly so as to resemble a vast column, and, when high, were seen wheeling and twisting within their continued lines, which then resembled the coils of a gigantic serpent…
Before sunset I reached Louisville, distant from Hardensburgh fifty-five miles. The Pigeons were still passing in undiminished numbers and continued to do so for three days in succession.
Wikipedia on the passenger pigeon in flight:
These flocks were frequently described as being so dense that they blackened the sky and as having no sign of subdivisions. The flocks ranged from only 1.0 m (3.3 ft) above the ground in windy conditions to as high as 400 m (1,300 ft). These migrating flocks were typically in narrow columns that twisted and undulated, and they were reported as being in nearly every conceivable shape.[47]
A skilled flyer, the passenger pigeon is estimated to have averaged 100 km/h (62 mph) during migration. It flew with quick, repeated flaps that increased the bird’s velocity the closer the wings got to the body. It was equally as adept and quick at flying through a forest as through open space. A flock was also adept at following the lead of the pigeon in front of it, and flocks swerved together to avoid a predator.
When landing, the pigeon flapped its wings repeatedly before raising them at the moment of landing. The pigeon was awkward when on the ground, and moved around with jerky, alert steps.
Sheer numbers:
The passenger pigeon was one of the most social of all land birds.[53] Estimated to have numbered three to five billion at the height of its population, it may have been the most numerous bird on Earth; researcher Arlie W. Schorger believed that it accounted for between 25 and 40 percent of the total land bird population in the United States.[54] The passenger pigeon’s historic population is roughly the equivalent of the number of birds that overwinter in the United States every year in the early 21st century.[55]
Even within their range, the size of individual flocks could vary greatly. In November 1859, Henry David Thoreau, writing in Concord, Massachusetts, noted that “quite a little flock of [passenger] pigeons bred here last summer,”[56] while only seven years later, in 1866, one flock in southern Ontario was described as being 1.5 km (0.93 mi) wide and 500 km (310 mi) long, took 14 hours to pass, and held in excess of 3.5 billion birds.[57] Such a number would likely represent a large fraction of the entire population at the time, or perhaps all of it.[17]
Most estimations of numbers were based on single migrating colonies, and it is unknown how many of these existed at a given time. American writer Christopher Cokinos has suggested that if the birds flew single file, they would have stretched around the earth 22 times.
Roosting:
A communally roosting species, the passenger pigeon chose roosting sites that could provide shelter and enough food to sustain their large numbers for an indefinite period. The time spent at one roosting site may have depended on the extent of human persecution, weather conditions, or other, unknown factors. Roosts ranged in size and extent, from a few acres to 260 km2 (100 sq mi) or greater. Some roosting areas would be reused for subsequent years, others would only be used once.[25]
The passenger pigeon roosted in such numbers that even thick tree branches would break under the strain. The birds frequently piled on top of each other’s backs to roost. They rested in a slumped position that hid their feet. They slept with their bills concealed by the feathers in the middle of the breast while holding their tail at a 45-degree angle.[52] Dung could accumulate under a roosting site to a depth of over 0.3 m (1.0 ft).
Ecological effect on forests:
The bird is believed to have played a significant ecological role in the composition of pre-Columbian forests of eastern North America.
For instance, while the passenger pigeon was extant, forests were dominated by white oaks. This species germinated in the fall, therefore making its seeds almost useless as a food source during the spring breeding season, while red oaks produced acorns during the spring, which were devoured by the pigeons. The absence of the passenger pigeon’s seed consumption may have contributed to the modern dominance of red oaks. Due to the immense amount of dung present at roosting sites, few plants grew for years after the pigeons left.
Also, the accumulation of flammable debris (such as limbs broken from trees and foliage killed by excrement) at these sites may have increased both the frequency and intensity of forest fires, which would have favored fire-tolerant species, such as bur oaks, black oaks, and white oaks over less fire-tolerant species, such as red oaks, thus helping to explain the change in the composition of eastern forests since the passenger pigeon’s extinction (from white oaks, bur oaks, and black oaks predominating in presettlement forests, to the “dramatic expansion” of red oaks today).
With the large numbers in passenger pigeon flocks, the excrement they produced was enough to destroy surface-level vegetation at long-term roosting sites, while adding high quantities of nutrients to the ecosystem. Because of this — along with the breaking of tree limbs under their collective weight and the great amount of mast they consumed — passenger pigeons are thought to have influenced both the structure of eastern forests and the composition of the species present there.[55]
Due to these influences, some ecologists have considered the passenger pigeon a keystone species,[4] with the disappearance of their vast flocks leaving a major gap in the ecosystem.[70] Their role in creating forest disturbances has been linked to greater vertebrate diversity in forests by creating more niches for animals to fill,[71] as well as to a healthy forest fire cycle: forest fires have increased in prevalence since the extinction of the passenger pigeon, which seems to go against the idea that the tree limbs and branches the birds brought down served as fuel for the fires.[72]
To help fill that ecological gap, it has been proposed that modern land managers attempt to replicate some of their effects on the ecosystem by creating openings in forest canopies to provide more understory light.
Wow. Eating:
The passenger pigeon foraged in flocks of tens or hundreds of thousands of individuals that overturned leaves, dirt, and snow with their bills in search of food. One observer described the motion of such a flock in search of mast as having a rolling appearance, as birds in the back of the flock flew overhead to the front of the flock, dropping leaves and grass in flight.[25][48] The flocks had wide leading edges to better scan the landscape for food sources.[77]
When nuts on a tree loosened from their caps, a pigeon would land on a branch and, while flapping vigorously to stay balanced, grab the nut, pull it loose from its cap, and swallow it whole. Collectively, a foraging flock was capable of removing nearly all fruits and nuts from their path. Birds in the back of the flock flew to the front in order to pick over unsearched ground; however, birds never ventured far from the flock and hurried back if they became isolated. It is believed that the pigeons used social cues to identify abundant sources of food, and a flock of pigeons that saw others feeding on the ground often joined them.[48]
During the day, the birds left the roosting forest to forage on more open land.[47] They regularly flew 100 to 130 km (62 to 81 mi) away from their roost daily in search of food, and some pigeons reportedly traveled as far as 160 km (99 mi), leaving the roosting area early and returning at night.
The pigeon could eat and digest 100 g (3.5 oz) of acorns per day.[78] At the historic population of three billion passenger pigeons, this amounted to 210,000,000 L (55,000,000 US gal) of food a day.
Mating:
Other than finding roosting sites, the migrations of the passenger pigeon were connected with finding places appropriate for this communally breeding bird to nest and raise its young. It is not certain how many times a year the birds bred; once seems most likely, but some accounts suggest more. The nesting period lasted around four to six weeks. The flock arrived at a nesting ground around March in southern latitudes, and some time later in more northern areas.[25][54]
The pigeon had no site fidelity, often choosing to nest in a different location each year.[68] The formation of a nesting colony did not necessarily take place until several months after the pigeons arrived on their breeding grounds, typically during late March, April, or May.[79]
The colonies, which were known as “cities”, were immense, ranging from 49 ha (120 acres) to thousands of hectares in size, and were often long and narrow in shape (L-shaped), with a few areas untouched for unknown reasons. Due to the topography, they were rarely continuous. Since no accurate data was recorded, it is not possible to give more than estimates on the size and population of these nesting areas, but most accounts mention colonies containing millions of birds. The largest nesting area ever recorded was in central Wisconsin in 1871; it was reported as covering 2,200 km2 (850 sq mi), with the number of birds nesting there estimated to be around 136,000,000.
As well as these “cities”, there were regular reports of much smaller flocks or even individual pairs setting up a nesting site.[25][79] The birds do not seem to have formed as vast breeding colonies at the periphery of their range.
Nests were built immediately after pair formation and took two to four days to construct; this process was highly synchronized within a colony.[79] The female chose the nesting site by sitting on it and flicking her wings. The male then carefully selected nesting materials, typically twigs, and handed them to the female over her back. The male then went in search of more nesting material while the female constructed the nest beneath herself. Nests were built between 2.0 and 20.1 m (6.6 and 65.9 ft) above the ground, though typically above 4.0 m (13.1 ft), and were made of 70 to 110 twigs woven together to create a loose, shallow bowl through which the egg could easily be seen. This bowl was then typically lined with finer twigs. The nests were about 150 mm (5.9 in) wide, 61 mm (2.4 in) high, and 19 mm (0.75 in) deep. Though the nest has been described as crude and flimsy compared to those of many other birds, remains of nests could be found at sites where nesting had taken place several years prior.
Nearly every tree capable of supporting nests had them, often more than 50 per tree; one hemlock was recorded as holding 317 nests. The nests were placed on strong branches close to the tree trunks. Some accounts state that ground under the nesting area looked as if it had been swept clean, due to all the twigs being collected at the same time, yet this area would also have been covered in dung.
The arrival of the Europeans:
French explorer Jacques Cartier was the first European to report on passenger pigeons, during his voyage in 1534.[97] The bird was subsequently observed and noted by historical figures such as Samuel de Champlain and Cotton Mather. Most early accounts dwell on the vast number of pigeons, the resulting darkened skies, and the enormous number of hunted birds (50,000 birds were reportedly sold at a Boston market in 1771).[58]
The early colonists thought that large flights of pigeons would be followed by ill fortune or sickness. When the pigeons wintered outside of their normal range, some believed that they would have “a sickly summer and autumn.”[98] In the 18th and 19th centuries, various parts of the pigeon were thought to have medicinal properties. The blood was supposed to be good for eye disorders, the powdered stomach lining was used to treat dysentery, and the dung was used to treat a variety of ailments, including headaches, stomach pains, and lethargy.[99]
Though they did not last as long as the feathers of a goose, the feathers of the passenger pigeon were frequently used for bedding. Pigeon feather beds were so popular that for a time in Saint-Jérôme, Quebec, every dowry included a bed and pillows made of pigeon feathers. In 1822, one family in Chautauqua County, New York, killed 4,000 pigeons in a day solely for this purpose.
The commercialization of other-species genocide:
After European colonization, the passenger pigeon was hunted more intensively and with more sophisticated methods than the more sustainable methods practiced by the natives.[33] Yet it has also been suggested that the species was rare prior to 1492, and that the subsequent increase in their numbers may be due to the decrease in the Native American population (who, as well as hunting the birds, competed with them for mast) caused by European immigration, and the supplementary food (agricultural crops) the immigrants imported[118] (a theory for which Joel Greenberg offered a detailed rebuttal in his book, A Feathered River Across the Sky).[119]
The passenger pigeon was of particular value on the frontier, and some settlements counted on its meat to support their population.[120][121] The flavor of the flesh of passenger pigeons varied depending on how they were prepared. In general, juveniles were thought to taste the best, followed by birds fattened in captivity and birds caught in September and October. It was common practice to fatten trapped pigeons before eating them or storing their bodies for winter.[109] Dead pigeons were commonly stored by salting or pickling the bodies; other times, only the breasts of the pigeons were kept, in which case they were typically smoked. In the early 19th century, commercial hunters began netting and shooting the birds to sell as food in city markets, and even as pig fodder. Once pigeon meat became popular, commercial hunting started on a prodigious scale.
Passenger pigeons were shot with such ease that many did not consider them to be a game bird, as an amateur hunter could easily bring down six with one shotgun blast; a particularly good shot with both barrels of a shotgun at a roost could kill 61 birds.[123][124] The birds were frequently shot either in flight during migration or immediately after, when they commonly perched in dead, exposed trees.[123] Hunters only had to shoot toward the sky without aiming, and many pigeons would be brought down.[33] The pigeons proved difficult to shoot head-on, so hunters typically waited for the flocks to pass overhead before shooting them. Trenches were sometimes dug and filled with grain so that a hunter could shoot the pigeons along this trench.[125] Hunters largely outnumbered trappers, and hunting passenger pigeons was a popular sport for young boys.[126] In 1871, a single seller of ammunition provided three tons of powder and 16 tons (32,000 lb) of shot during a nesting.
In the latter half of the 19th century, thousands of passenger pigeons were captured for use in the sports shooting industry. The pigeons were used as living targets in shooting tournaments, such as “trap-shooting”, the controlled release of birds from special traps. Competitions could also consist of people standing regularly spaced while trying to shoot down as many birds as possible in a passing flock.[33][127] The pigeon was considered so numerous that 30,000 birds had to be killed to claim the prize in one competition.
The horrifying creativity of mass genocide:
There were a wide variety of other methods used to capture and kill passenger pigeons. Nets were propped up to allow passenger pigeons entry, then closed by knocking loose the stick that supported the opening, trapping twenty or more pigeons inside.[128] Tunnel nets were also used to great effect, and one particularly large net was capable of catching 3,500 pigeons at a time.[129] These nets were used by many farmers on their own property as well as by professional trappers.[130] Food would be placed on the ground near the nets to attract the pigeons. Decoy or “stool pigeons” (sometimes blinded by having their eyelids sewn together) were tied to a stool. When a flock of pigeons passed by, a cord would be pulled that made the stool pigeon flutter to the ground, making it seem as if it had found food, and the flock would be lured into the trap.[33][131][132]
Salt was also frequently used as bait, and many trappers set up near salt springs.[133] At least one trapper used alcohol-soaked grain as bait to intoxicate the birds and make them easier to kill.[113]
Another method of capture was to hunt at a nesting colony, particularly during the period of a few days after the adult pigeons abandoned their nestlings, but before the nestlings could fly. Some hunters used sticks to poke the nestlings out of the nest, while others shot the bottom of a nest with a blunt arrow to dislodge the pigeon. Others cut down a nesting tree in such a way that when it fell, it would also hit a second nesting tree and dislodge the pigeons within.[134] In one case, 6 km2 (1,500 acres) of large trees were speedily cut down to get birds, and such methods were common.[33]
A severe method was to set fire to the base of a tree nested with pigeons; the adults would flee and the juveniles would fall to the ground.[135][136] Sulfur was sometimes burned beneath the nesting tree to suffocate the birds, which fell out of the tree in a weakened state.
Note that they did the same thing to people too, black people in particular, so this sort of moral depravity wasn’t out of the ordinary, shocking as it is to me.
Railroads and the telegraph made it worse:
By the mid-19th century, railroads had opened new opportunities for pigeon hunters. While previously it had proved too difficult to ship masses of pigeons to eastern cities, the access provided by the railroad permitted pigeon hunting to become commercialized.[122] An extensive telegraph system was introduced in the 1860s, which improved communication across the United States, making it easier to spread information about the whereabouts of pigeon flocks.[127] After being opened up to the railroads, the town of Plattsburg, New York is estimated to have shipped 1.8 million pigeons to larger cities in 1851 alone at a price of 31 to 56 cents a dozen.
By the late 19th century, the trade of passenger pigeons had become commercialized. Large commission houses employed trappers (known as “pigeoners”) to follow the flocks of pigeons year-round.[138] A single hunter is reported to have sent three million birds to eastern cities during his career.[139] In 1874, at least 600 people were employed as pigeon trappers, a number which grew to 1,200 by 1881.
Pigeons were caught in such numbers that by 1876, shipments of dead pigeons were unable to recoup the costs of the barrels and ice needed to ship them.[140] The price of a barrel full of pigeons dropped to below fifty cents, due to overstocked markets. Passenger pigeons were instead kept alive so their meat would be fresh when the birds were killed, and sold once their market value had increased again. Thousands of birds were kept in large pens, though the bad conditions led many to die from lack of food and water, and by fretting (gnawing) themselves; many rotted away before they could be sold.
Rapid decline:
The notion that the species could be driven to extinction was alien to the early colonists, because the number of birds did not appear to diminish, and also because the concept of extinction was yet to be defined. The bird seems to have been slowly pushed westwards after the arrival of Europeans, becoming scarce or absent in the east, though there were still millions of birds in the 1850s. The population must have been decreasing in numbers for many years, though this went unnoticed due to the apparent vast number of birds, which clouded their decline.
By the 1870s, the decrease in birds was noticeable, especially after the last large-scale nestings and subsequent slaughters of millions of birds in 1874 and 1878. By this time, large nestings only took place in the north, around the Great Lakes. The last large nesting was in Petoskey, Michigan, in 1878 (following one in Pennsylvania a few days earlier), where 50,000 birds were killed each day for nearly five months. The surviving adults attempted a second nesting at new sites, but were killed by professional hunters before they had a chance to raise any young. Scattered nestings are reported into the 1880s, but the birds were now wary, and commonly abandoned their nests if persecuted.
By the time of these last nestings, laws had already been enacted to protect the passenger pigeon, but these proved ineffective, as they were unclearly framed and hard to enforce. H. B. Roney, who had witnessed the Petoskey slaughter, led campaigns to protect the pigeon, but was met with resistance, and accusations that he was exaggerating the severity of the situation. Few offenders were prosecuted, mainly some poor trappers, but the large enterprises were not affected.[58]
In 1857, a bill was brought forth to the Ohio State Legislature seeking protection for the passenger pigeon, yet a Select Committee of the Senate filed a report stating that the bird did not need protection, being “wonderfully prolific”, and dismissing the suggestion that the species could be destroyed.
Motherfuckers.
Public protests against trap-shooting erupted in the 1870s, as the birds were badly treated before and after such contests. Conservationists were ineffective in stopping the slaughter. A bill was passed in the Michigan legislature making it illegal to net pigeons within 3 km (1.9 mi) of a nesting area. In 1897, a bill was introduced in the Michigan legislature asking for a 10-year closed season on passenger pigeons. Similar legal measures were passed and then disregarded in Pennsylvania. The gestures proved futile, and by the mid-1890s, the passenger pigeon had almost completely disappeared, and was probably extinct as a breeding bird in the wild.[127][139] Small flocks are known to have existed at this point, since large numbers of birds were still being sold at markets. Thereafter, only small groups or individual birds were reported, many of which were shot on sight.
The last ones:
The last recorded nest and egg in the wild were collected in 1895 near Minneapolis.
The last wild individual in Louisiana was discovered among a flock of mourning doves in 1896, and subsequently shot.
The last fully authenticated record of a wild passenger pigeon was near Oakford, Illinois, on March 12, 1901, when a male bird was killed, stuffed, and placed in Millikin University in Decatur, Illinois, where it remains today. This was not discovered until 2014, when writer Joel Greenberg found out the date of the bird’s shooting while doing research for his book A Feathered River Across the Sky.
For many years, the last confirmed wild passenger pigeon was thought to have been shot near Sargents, Pike County, Ohio, on March 24, 1900, when a female bird was killed by a boy named Press Clay Southworth with a BB gun.[39][145] The boy had not recognized the bird as a passenger pigeon, but his parents identified it, and sent it to a taxidermist. The specimen, nicknamed “Buttons” due to the buttons used instead of glass eyes, was donated to the Ohio Historical Society by the family in 1915.
The reliability of accounts after the Ohio, Illinois, and Indiana birds is in question. U.S. President Theodore Roosevelt claimed to have seen a bird in Michigan in 1907.[58] Ornithologist Alexander Wetmore claimed that he saw a pair flying near Independence, Kansas, in April 1905.[146][147] In 1910, the American Ornithologists’ Union offered a reward of $3,000 for discovering a nest – the equivalent of $76,990 in 2015.
It was too late.
Most captive passenger pigeons were kept for exploitative purposes, but some were housed in zoos and aviaries. Audubon alone claimed to have brought 350 birds to England in 1830, distributing them among various noblemen, and the species is also known to have been kept at London Zoo. Being common birds, these attracted little interest, until the species became rare in the 1890s.
By the turn of the 20th century, the last known captive passenger pigeons were divided into three groups: one in Milwaukee, one in Chicago, and one in Cincinnati. There are claims of a few further individuals having been kept in various places, but these accounts are not considered reliable today.
The Milwaukee group was kept by David Whittaker, who began his collection in 1888, and possessed fifteen birds some years later, all descended from a single pair.
The Chicago group was kept by Charles Otis Whitman, whose collection began with passenger pigeons bought from Whittaker beginning in 1896. He had an interest in studying pigeons, and kept his passenger pigeons with other pigeon species. Whitman brought his pigeons with him from Chicago to Massachusetts by railcar each summer.
By 1897, Whitman had bought all of Whittaker’s birds, and upon reaching a maximum of 19 individuals, he gave seven back to Whittaker in 1898. Around this time, a series of photographs were taken of these birds; 24 of the photos survive. Some of these images have been reproduced in various media, copies of which are now kept at the Wisconsin Historical Society. It is unclear exactly where, when, and by whom these photos were taken, but some appear to have been taken in Chicago in 1896, others in Massachusetts in 1898, the latter by a J. G. Hubbard.
By 1902, Whitman owned sixteen birds. Many eggs were laid by his pigeons, but few hatched, and many hatchlings died. A newspaper inquiry was published requesting “fresh blood” for the flock, which had by now ceased breeding.
By 1907, he was down to two female passenger pigeons, which died that winter, leaving him with two infertile male hybrids, whose subsequent fate is unknown. By this time, only four (all males) of the birds Whitman had returned to Whittaker were alive, and these died between November 1908 and February 1909.
Martha, the Endling, the last of her once numerous kind in the whole world, and the end of a glorious era:
The Cincinnati Zoo, one of the oldest zoos in the United States, kept passenger pigeons from its beginning in 1875. The zoo kept more than twenty individuals, in a ten-by-twelve-foot cage.[150] Passenger pigeons do not appear to have been kept at the zoo due to their rarity, but to enable guests to have a closer look at a native species.[152] Recognizing the decline of the wild populations, Whitman and the Cincinnati Zoo consistently strove to breed the surviving birds, including attempts at making a rock dove foster passenger pigeon eggs.[153]
In 1902, Whitman gave a female passenger pigeon to the zoo; this was possibly the individual later known as Martha, which would become the last living member of the species. Other sources argue that Martha was hatched at the Cincinnati Zoo, had lived there for 25 years, and was the descendant of three pairs of passenger pigeons purchased by the zoo in 1877. It is thought this individual was named Martha because her last cage mate was named George, thereby honoring George Washington and his wife Martha, though it has also been claimed she was named after the mother of a zookeeper’s friends.
In 1909, Martha and her two male companions at the Cincinnati Zoo became the only known surviving passenger pigeons. One of these males died around April that year, followed by George, the remaining male, on July 10, 1910.[152] It is unknown whether the remains of George were preserved.
Martha soon became a celebrity due to her status as an endling, and offers of a $1,000 reward for finding a mate for her brought even more visitors to see her. During her last four years in solitude (her cage was 5.4 by 6 m (18 by 20 ft)), Martha became steadily slower and more immobile; visitors would throw sand at her to make her move, and her cage was roped off in response.
Martha died of old age on September 1, 1914, and was found lifeless on the floor of her cage.[42][156] It was claimed that she died at 1 p.m., but other sources suggest she died some hours later.[150] Depending on the source, Martha was between 17 and 29 years old at the time of her death, although 29 is the generally accepted figure.[157] At the time, it was suggested that Martha might have died from an apoplectic stroke, as she had suffered one a few weeks before dying.[158] Her body was frozen into a block of ice and sent to the Smithsonian Institution in Washington, where it was skinned, dissected, photographed, and mounted.[42][135] As she was molting when she died, she proved difficult to stuff, and previously shed feathers were added to the skin.