Category Archives: Uncategorized

Plentiful, high-paying jobs in the age of AI

I’m not convinced that this view was correct. It really, really does seem like this time is different. But I’m also not convinced that this view is incorrect, and it seems well worth considering.

Most of the technologists I know take an attitude towards this future that’s equal parts melancholy, fatalism, and pride — sort of an Oppenheimer-esque “Now I am become death, destroyer of jobs” kind of thing. They all think the immiseration of labor is inevitable, but they think that being the ones to invent and own the AI is the only way to avoid being on the receiving end of that immiseration. And in the meantime, it’s something cool to have worked on.

So when I cheerfully tell them that it’s very possible that regular humans will have plentiful, high-paying jobs in the age of AI dominance — often doing much the same kind of work that they’re doing right now — technologists typically become flabbergasted, flustered, and even frustrated. I must simply not understand just how many things AI will be able to do, or just how good it will be at doing them, or just how cheap it’ll get. I must be thinking to myself “Surely, there are some things humans will always be better than machines at!”, or some other such pitiful coping mechanism.

But no. That is not what I am thinking. Instead, I accept that AI may someday get better than humans at every conceivable task. That’s the future I’m imagining. And in that future, I think it’s possible — perhaps even likely — that the vast majority of humans will have good-paying jobs, and that many of those jobs will look pretty similar to the jobs of 2024.

At which point you may be asking: “What the heck is this guy smoking?”

Well, I’ll tell you.

https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the

Italy’s Superbonus: The Dumbest Fiscal Policy in Recent Memory – Marginal REVOLUTION

Luis Garicano has an amazing post on “one of the dumbest fiscal policies in recent memory.” Launched in Italy during COVID by Prime Minister Conte, the “Superbonus” scheme subsidized 110% of housing renovation costs. Now if one were to use outdated, simplistic, Econ 101 type reasoning one would predict that such a scheme would be massively costly not only because people would rush to renovate their homes for free but because the more expensive the renovation on paper the bigger the bonus.

The proponents of the Superbonus, most notably Riccardo Fraccaro, were, however, advocates of Modern Monetary Theory, so deficits were considered only an illusory barrier to government spending and resource constraints were far distant concerns. Italy still had to meet EU rules, however, so the deficit spending was concealed with creative accounting:

rather than direct cash grants, the government issued tax credits that could be transferred. A homeowner could claim these credits directly against their taxes, have contractors claim them against invoices, or sell them to banks. These credits became a kind of fiscal currency – a parallel financial instrument that functioned as off-the-books debt (Capone and Stagnaro, 2024). The setup purposefully created the illusion of a free lunch: it hid the cost to the government, as for European accounting purposes the credits would show up only as lost tax revenue rather than new spending.

In MMT terms, Fraccaro and his team effectively created money as a tax credit, putting into practice MMT’s notion that a sovereign issuer’s currency is ultimately a tax IOU.

So what were the results? The “free renovation” scheme quickly spiraled out of control. Initially projected to cost €35 billion, the program ballooned to around €220 billion—about 12% of Italy’s GDP! Did it drive a surge in energy-efficient renovations? Hardly. Massive fraud ensued as builders and homeowners inflated renovation costs to siphon off government funds. Beyond that, surging demand ran headlong into resource constraints. Econ 101 again: in the short run, marginal cost curves slope upward.

https://marginalrevolution.com/marginalrevolution/2025/02/italys-superbonus-the-dumbest-fiscal-policy-in-recent-memory.html

The Nerd as the Norm – Everything Studies

What would it look like if we saw the other side of the nerdiness bell curve as the weird one? Really terrific essay that may be very useful both to nerds and to the people who love them.

In our hypothetical “nerds are the norm” bizarro-world we’d have the opposite distortion. We get that by breaking wambs out from the central blob, extending the axis to the left side, and then fuse nerds with the center so our new idea of normality includes nerds and excludes wambs. There’d be an “allism spectrum”, named after something I found when googling “opposite of autism”, with wambs at its mild end and some formal diagnosis on the severe end.

In that world, “Field Guide to the Wamb” would describe wambs as weirdos with strange interests and personalities. Their weaknesses would be considered major flaws and their strengths maybe useful for some things but not essential to be a well-rounded human.

https://everythingstudies.com/2017/11/07/the-nerd-as-the-norm/

Debanking (and Debunking?)

From the always-excellent Patrick McKenzie:

A SAR [suspicious activity report] is not a conviction of a crime. It isn’t even an accusation of a crime. It is an interoffice memo documenting an irregularity, about 2-3 pages long. Banks file about 4 million per year. (There are some non-bank businesses also obliged to file them, but nobody is presently complaining about decasinoing, so ignore that detail. Banks are the central filers of SARs.) For flavor: about 10% are in the bucket Transaction With No Apparent Economic, Business, or Lawful Purpose. FinCEN has ~300 employees and so cannot possibly read any significant portion of these memos. They mostly just maintain the system which puts them in a database which is searchable by many law enforcement agencies. The overwhelming majority are write-once read-never.

Banks are extremely aware that most SARs are low signal, and that a good customer might wander into getting one filed on them. But there are thresholds and risk tolerance levels. And SARs will sometimes, fairly mechanically, cause banks to decide that they probably don’t want to be holding a hot potato. It’s risky, plausibly, and expensive, certainly. At many institutions, for retail accounts, the institution will have serious questions about whether it wants to continue working with you on the second SAR. It will probably not spend that much time thinking deeply about the answer.

So can the bank simply explain to the customer that staff time preparing SARs is expensive and that routinely banking customers who turn out to be real money launderers is a great way to end up with billion dollar fines? No, they cannot.

The typical individual named in a SAR is low-sophistication and cannot meaningfully participate in a discussion with a Compliance officer, because they’re very probably at the social margins. Do you have a favorite axis of disadvantage? Immigrant, no financial background, limited English ability, small business owner, socioeconomic class, etc? The axis has non-zero relevance to one’s probability of getting a SAR filed on oneself due to innocent behavior. Very many people who have SARs filed on them are disadvantaged on several axes simultaneously.

No, the bank cannot explain why SARs triggered a debanking, because disclosing the existence of a SAR is illegal (12 CFR 21.11(k)). Yes, it is the law in the United States that a private non-court, in possession of a memo written by a non-intelligence analyst, cannot describe the nature of the non-accusation the memo makes. Nor can it confirm or deny the existence of the memo. This is not a James Bond film. This is not a farce about the security state. This is not a right-wing conspiracy. This is very much the law.

https://www.bitsaboutmoney.com/archive/debanking-and-debunking/

In the Beginning, There Was Computation

This is just incredibly cool as well as important. The authors show artificial life spontaneously emerging from a soup of random code, under a range of conditions.

Now we’re in a position to ask: In a universe capable of computation, how often will life arise? Clearly, it happened here. Was it a miracle, an inevitability, or somewhere in between? A few collaborators and I set out to explore this question in late 2023.

Our first experiments used an esoteric programming language called (apologies) Brainfuck. While not as minimal as SUBLEQ, Brainfuck is both very simple and very similar to the original Turing Machine. Like a Turing Machine, it involves a read/write head that can step left or right along a tape.

In our version, which we call “bff,” there’s a “soup” containing thousands of tapes, each of which includes both code and data. The tapes are of fixed length—64 bytes—and start off filled with random bytes. Then, they interact at random, over and over. In an interaction, two randomly selected tapes are stuck end to end, creating a 128-byte-long string, and this combined tape is run, potentially modifying itself. The 64-byte-long halves are then pulled back apart and dropped back into the soup. Once in a while, a byte value is randomized, as cosmic rays do to DNA.

Since bff has only seven instructions, represented by the characters “< > + - , [ ]”, and there are 256 possible byte values, following random initialization only 2.7 percent of the bytes in a given tape will contain valid instructions; any non-instructions are skipped over. Thus, at first, not much comes of interactions between tapes. Once in a while, a valid instruction will modify a byte, and this modification will persist in the soup. On average, though, only a couple of computational operations take place per interaction, and usually, they have no effect. In other words, while computation is possible in this toy universe, very little of it actually takes place. When a byte is altered, it’s likely due to random mutation, and even when it’s caused by the execution of a valid instruction, the alteration is arbitrary and purposeless.
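
To make the mechanism concrete, here is a minimal Python sketch of a bff-like soup, written from the description above rather than from the authors’ code. The soup size, step cap, mutation rate, and the exact instruction semantics (a single read/write head over the combined tape, with “,” given a crude copy meaning) are all illustrative assumptions; the real bff differs in its details. What the sketch preserves is the core loop: pick two random 64-byte tapes, run their 128-byte concatenation as a self-modifying program, pull the halves back apart, and occasionally randomize a byte.

```python
import random

TAPE_LEN = 64          # bytes per tape, as in the article
SOUP_SIZE = 4096       # number of tapes ("thousands"; exact count is illustrative)
MAX_STEPS = 8192       # cap on work per interaction so non-halting programs stop
MUTATION_RATE = 1e-4   # chance of randomizing one byte per tape per round (illustrative)

def run(tape):
    """Run a combined tape as a self-modifying Brainfuck-like program.

    Simplified semantics (an assumption, not the authors' exact bff rules):
    a single read/write head moves over the same byte string that is being
    executed, so code can rewrite itself and its partner.
    """
    ip = 0       # instruction pointer
    head = 0     # read/write head
    steps = 0
    n = len(tape)
    while ip < n and steps < MAX_STEPS:
        op = chr(tape[ip])
        if op == '>':
            head = (head + 1) % n
        elif op == '<':
            head = (head - 1) % n
        elif op == '+':
            tape[head] = (tape[head] + 1) % 256
        elif op == '-':
            tape[head] = (tape[head] - 1) % 256
        elif op == ',':
            tape[head] = tape[ip]          # crude stand-in for bff's copy-style instruction
        elif op == '[':
            if tape[head] == 0:            # jump forward past the matching ']'
                depth = 1
                while depth and ip + 1 < n:
                    ip += 1
                    depth += (chr(tape[ip]) == '[') - (chr(tape[ip]) == ']')
        elif op == ']':
            if tape[head] != 0:            # jump back to the matching '['
                depth = 1
                while depth and ip > 0:
                    ip -= 1
                    depth += (chr(tape[ip]) == ']') - (chr(tape[ip]) == '[')
        # any other byte value is not an instruction and is skipped over
        ip += 1
        steps += 1
    return tape

def epoch(soup):
    """One round of random pairwise interactions plus occasional mutation."""
    random.shuffle(soup)
    for i in range(0, len(soup) - 1, 2):
        combined = run(soup[i] + soup[i + 1])        # stick two tapes end to end and run
        soup[i], soup[i + 1] = combined[:TAPE_LEN], combined[TAPE_LEN:]  # pull back apart
    for tape in soup:
        if random.random() < MUTATION_RATE:          # cosmic-ray style mutation
            tape[random.randrange(TAPE_LEN)] = random.randrange(256)

soup = [bytearray(random.randrange(256) for _ in range(TAPE_LEN)) for _ in range(SOUP_SIZE)]
for _ in range(100):
    epoch(soup)
```

Even a toy like this is easy to instrument for the quantity the article tracks next: how many operations actually execute per interaction.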

But after a few million interactions, something magical happens: The tapes begin to reproduce. As they spawn copies of themselves and each other, randomness gives way to complex order. The amount of computation taking place in each interaction skyrockets, since—remember—reproduction requires computation. Two of Brainfuck’s seven instructions, “[” and “],” are dedicated to conditional branching, and define loops in the code; reproduction requires at least one such loop (“copy bytes until done”), causing the number of instructions executed in an interaction to climb into the hundreds, at minimum.

The code is no longer random, but obviously purposive, in the sense that its function can be analyzed and reverse-engineered. An unlucky mutation can break it, rendering it unable to reproduce. Over time, the code evolves clever strategies to increase its robustness to such damage. This emergence of function and purpose is just like what we see in organic life at every scale; it’s why, for instance, we’re able to talk about the function of the circulatory system, a kidney, or a mitochondrion, and how they can “fail”—even though nobody designed these systems.

We reproduced our basic result with a variety of other programming languages and environments. In one especially beautiful visualization, my colleague Alex Mordvintsev created a two-dimensional bff-like environment where each of a 200×200 array of “pixels” contains a tape, and interactions occur only between neighboring tapes on the grid. The tapes are interpreted as instructions for the iconic Zilog Z80 microprocessor, launched in 1976 and used in many 8-bit computers over the years (including the Sinclair ZX Spectrum, Osborne 1, and TRS-80). Here, too, complex replicators soon emerge out of the random interactions, evolving and spreading across the grid in successive waves.

Their main output is a paper, but I strongly recommend starting with the lead author’s more generally accessible article on the work: In the Beginning, There Was Computation

(hat tip to Peter Watts)

Why Are Housing Costs So High? The Elevator Can Explain Why.

Elevators in North America have become over-engineered, bespoke, handcrafted and expensive pieces of equipment that are unaffordable in all the places where they are most needed. Special interests here have run wild with an outdated, inefficient, overregulated system. Accessibility rules miss the forest for the trees. Our broken immigration system cannot supply the labor that the construction industry desperately needs. Regulators distrust global best practices and our construction rules are so heavily oriented toward single-family housing that we’ve forgotten the basics of how a city should work.

Similar themes explain everything from our stalled high-speed rail development to why it’s so hard to find someone to fix a toilet or shower. It’s become hard to shake the feeling that America has simply lost the capacity to build things in the real world, outside of an app.

Behind the dearth of elevators in the country that birthed the skyscraper are eye-watering costs. A basic four-stop elevator costs about $158,000 in New York City, compared with about $36,000 in Switzerland.

But we can’t even put elevators together in factories in America, because the elevator union’s contract forbids even basic forms of preassembly and prefabrication that have become standard in elevators in the rest of the world. The union and manufacturers bicker over which holes can be drilled in a factory and which must be drilled (or redrilled) on site. Manufacturers even let elevator and escalator mechanics take some components apart and put them back together on site to preserve work for union members, since it’s easier than making separate, less-assembled versions just for the United States.

Opinion | Why Are Housing Costs So High? The Elevator Can Explain Why. – The New York Times

US maternal mortality has not increased after all

This is an interesting piece in general about some ways that charts can be misleading, but I was most struck by this particular example — it’s seemed worrying to me that US maternal mortality rates have risen in the past 25 years, and this seems like fairly strong evidence that they actually haven’t.

But even when no one is intentionally trying to mislead or manipulate, charts designed to make information clear can still lead to erroneous conclusions. Just consider the U.S. maternal mortality statistics, which seem to show maternal deaths rising from 0.4 deaths per 100,000 women in 2003 to close to 1 per 100,000 in 2020.

Maternal mortality rates over time, with zoomed version at bottom. Note that the uptick in maternal death rates is limited to the U.S. Credit: Our World in Data

This graph is worrisome, particularly if you or your partner is pregnant (or expect to be). Why are so many more expectant and new mothers dying? Is there some new danger? Is the healthcare system getting worse? Coverage in Scientific American, NPR, and elsewhere suggested that the answer to these questions was “yes.”

In May 2024, however, Saloni Dattani reported in Our World in Data that the purported increase in U.S. maternal mortality stems mostly from changes in how these deaths are counted. Before 1994, the International Classification of Diseases (ICD) defined a “maternal death” as one where pregnancy is listed as the underlying cause of death on the death certificate. However, this led to many maternal deaths not being counted, including cases wherein the underlying cause of death was a condition that is exacerbated by pregnancy.

When the ICD was updated in 1994, the definition was expanded to include deaths from “any cause related to or aggravated by the pregnancy or its management.” The ICD also recommended “pregnancy checkboxes” on death certificates to help doctors catch more pregnancy-related deaths.

Dattani shows that as U.S. states gradually introduced the pregnancy checkbox and implemented the new ICD definition, rates of maternal death appeared to rise. So, it seems that the upward trend in the graph doesn’t come from changes in the actual death rate but from changes in what counts as a maternal death, to begin with. None of this is indicated in the charts, which plot smooth lines without any gaps or discontinuities.

On Fables and Nuanced Charts – Asimov Press

Paper report: ‘The phenomena of inner experience’

The following is just a copy-paste of my description elsewhere of this 2008 paper from Christopher L. Heavey and Russell T. Hurlburt.

I just ran across a 2008 paper, ‘The phenomena of inner experience’ (https://hurlburt.faculty.unlv.edu/heavey-hurlburt-2008.pdf), that tries to taxonomize the common types of mental experience. They asked about 16 different phenomena; 11 of them were present in <= 3% of experience reports, so they focused on the other 5, which were all present in >= 22% of reports. Those are: inner speech, inner seeing, unsymbolized thinking, feeling, and sensory awareness. They found people varied very widely on which ones they had, and how often. See screenshots for a) a summary of the five phenomena, and b) the relative commonness of the different ones, along with the highest level reported by any participant (e.g., the most visual participant had inner seeing in 90% of their reports) and lowest level (which is 0% in all categories). Interesting stuff IMHO!

For me personally, using their categories, I would say the large majority are ‘unsymbolized thinking’, and occasionally inner speech or feeling or sensory awareness, inner seeing never. I’d be curious to hear other people’s splits.

As I said at CL, for me a lot of it is kinesthetic, a sense of spatial relationships between concepts relative to my body, and a lot of it is…algebraic, almost? It’s about the relationships between concepts. And sometimes it’s really nothing verbalizable at all, like it’s not uncommon if someone asks me what I’m thinking to be like ‘uhhhhhh…’

The less verbalizable parts are maybe almost like sensory awareness, except that instead of awareness of something I’m seeing or hearing, it’s awareness of one or more concept-thingies.

A few excerpts:

‘Most participants had one form of inner experience predominate; 22 of the 30 participants had at least one of the five common phenomena occurring in 50% or more of their samples.’

‘The most common dominant phenomenon was inner seeing, followed by feelings, and then inner speech.’

‘The phenomenon of sensory awareness requires additional explanation to ensure that it is comprehended. Sensory awareness, as we define it, is the experience of being drawn to and paying particular, thematic attention to some sensory quality of the inner or outer environment. Sensory awareness is not merely the perception of some object; it is the direct attention to some particular sensory quality of the object. Thus Sally is reaching for a can of Coke with the intention of taking a drink. She is perceptually aware of the can as she reaches toward it, and could, if asked, report its shape, color, and so on. That does not count as sensory awareness by our definition. By contrast, Maria is also reaching for a can of Coke with the intention of taking a drink. As she reaches, she notices how the light reflects off the shoulder of the can, notices the can’s slightly rosy redness below the shoulder and its deeper redness above. Maria does have a sensory awareness as we define it.’

Although their whole idea is to take an open-ended phenomenological approach without presuppositions, the following quote makes me a bit suspicious that the experimenters’ iterative feedback may be inadvertently guiding participants into certain categories:

‘training should be “iterative”: participants should make attempts at observing/describing their own phenomena, receive feedback on those attempts, then make new observing/describing attempts, followed by new feedback, and so on. For example, DES shows repeatedly that many, if not most, people who have unsymbolized thinking (the experience of thinking without words or other symbols) will at first report such thinking to be in words. Only after repeated training as they iteratively confront the apprehension of their own experience do they come to recognize their presupposition of words as being false.’

Also for the record, their full list of 16 (given in an earlier paper) is ‘inner speech, partially worded speech, unworded speech, worded thinking, image, imageless seeing, unsymbolized thinking, inner hearing, feeling, sensory awareness, just doing, just talking, just listening, just reading, just watching TV, and multiple awareness.’