Author Archives: Egg Syntax

‘Coin toss not so random after all, says groundbreaking study’

Badly titled — it’s still extremely random, it’s just (very slightly) biased. Interesting, though!

Researchers at the University of Amsterdam recently made a surprising discovery that challenges long-held assumptions about the randomness of coin tossing. After flipping coins over 350,000 times, the largest study of its kind, they found that coins have a slight tendency to land on the same side they started on.

The data showed a small but statistically significant same-side bias of 51%, just slightly higher than the 50% predicted by chance. This subtle yet remarkable finding defies the conventional wisdom that coin flips represent a random and unpredictable 50/50 outcome.

Coins of 46 different currencies were flipped by hand and caught in the palms of 48 student participants to record the landing side. The data collection process required meticulous recording over many months, with flipping sessions videotaped to validate the results.
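A quick back-of-envelope sketch (my own, using the rounded figures quoted above rather than the study’s actual test) of why such a tiny bias is nonetheless statistically significant at this sample size:

```python
# Is a 51% same-side rate over roughly 350,000 flips plausibly just noise?
# Normal approximation to the binomial, using the rounded figures quoted above.
import math

n = 350_000          # approximate number of flips
p_observed = 0.51    # same-side proportion reported in the excerpt
p_null = 0.50        # what a perfectly fair, memoryless flip would give

se = math.sqrt(p_null * (1 - p_null) / n)    # standard error under the null
z = (p_observed - p_null) / se               # how many standard errors away 51% is
p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided tail probability

print(f"z = {z:.1f}, one-sided p = {p_value:.1e}")
# z comes out around 11.8: a one-percentage-point bias is tiny, but across
# hundreds of thousands of flips it is wildly unlikely to be chance alone.
```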

https://boingboing.net/2023/10/10/coin-toss-not-so-random-after-all-says-groundbreaking-study.html

Artificial General Intelligence Is Already Here

Ultimately ‘AGI’ is a pretty contested term, such that there’s no simple answer to where the threshold is. But I agree with Agüera y Arcas and Norvig (Norvig in particular is a very careful thinker whom I have great respect for) that by many definitions we’ve crossed that threshold, albeit just barely.

Artificial General Intelligence (AGI) means many different things to different people, but the most important parts of it have already been achieved by the current generation of advanced AI large language models such as ChatGPT, Bard, LLaMA and Claude. These “frontier models” have many flaws: They hallucinate scholarly citations and court cases, perpetuate biases from their training data and make simple arithmetic mistakes. Fixing every flaw (including those often exhibited by humans) would involve building an artificial superintelligence, which is a whole other project.

Nevertheless, today’s frontier models perform competently even on novel tasks they were not trained for, crossing a threshold that previous generations of AI and supervised deep learning systems never managed. Decades from now, they will be recognized as the first true examples of AGI, just as the 1945 ENIAC is now recognized as the first true general-purpose electronic computer.


Artificial General Intelligence Is Already Here

DIY Geoengineering: The Whitepaper – Nephew Jonathan

tl;dr:

Global warming, though not ocean acidification, is quickly and cheaply reversed by ejecting calcite nanoparticles (with an average radius in the ~90nm range) into the stratosphere, using a propeller-based system to prevent particle clumping. The particles should be carried up by hydrogen balloons, and very preferably released over the tropics. The total amount needed will be on the order of several hundred kilotons yearly, and the total cost should be somewhere between $1B and $5B yearly.
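As a rough sanity check on those numbers (my own arithmetic, not the whitepaper’s; I’m assuming “several hundred kilotons” means roughly 300–500 kt/year):

```python
# Implied cost per ton lofted, from the rounded figures in the tl;dr above.
# Assumption (mine, not the whitepaper's): "several hundred kilotons" ~ 300-500 kt/year.
tons_low, tons_high = 300_000, 500_000   # tons of calcite per year (assumed range)
cost_low, cost_high = 1e9, 5e9           # dollars per year, from the tl;dr

best_case = cost_low / tons_high         # lowest cost spread over the most tonnage
worst_case = cost_high / tons_low        # highest cost spread over the least tonnage
print(f"Implied delivery cost: ~${best_case:,.0f} to ~${worst_case:,.0f} per ton")
# Roughly $2,000 to $17,000 per ton of calcite delivered to the stratosphere.
```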

https://nephewjonathan.substack.com/p/diy-geoengineering-the-whitepaper

Six months later, the call to slow AI development is more crucial than ever

I endorse this plan (with a minor caveat for liability, which I have to think more about).

The U.S. must immediately establish a detailed registry of giant AI experiments, maintained by a U.S. federal agency. This agency should also build awareness of the huge clusters of specialized hardware that are used in these experiments, and work with the manufacturers of that hardware to include safety and verification features at the chip level. The U.S. government should at minimum ensure that it has the capability to trigger a pause. It has become clear that corporations are not merely reluctant to hit the brakes — the brake pedal does not even exist.

If we are going to reap the revolutionary potential of AI, regulators must enforce standards to ensure safety and security during development. They must require that developers take on the burden of proof, and demonstrate that their new systems are safe before deployment — just like they do for new drugs, cars or airplanes. Lawmakers must take proactive steps to ensure that developers are legally liable for the harm their products cause.

These efforts cannot stop at home. The large-scale risks of AI affect everyone everywhere, and the upcoming UK summit is an opportunity to start the crucial task of addressing them at a global level in a way that transcends national borders and geopolitical rivalries. This kind of international cooperation is possible. We coordinated on cloning. We banned bioweapons. We signed treaties about nuclear weapons even at the height of the Cold War. We can work together on AI.

Six months later, our call to slow AI development is more crucial than ever

Happy Yeltsin Supermarket Day!

The anecdote is fascinating. Not currently endorsing the essay as a whole.

(quoted in the essay, original source NYT)

During a visit to the United States in 1989 [Yeltsin] became more convinced than ever that Russia had been ruinously damaged by its centralized, state‐run economic system, where people stood in long lines to buy the most basic needs of life and more often than not found the shelves bare. He was overwhelmed by what he saw at a Houston supermarket, by the kaleidoscopic variety of meats and vegetables available to ordinary Americans.

Leon Aron, quoting a Yeltsin associate, wrote in his biography, “Yeltsin, A Revolutionary Life”…: “For a long time, on the plane to Miami, he sat motionless, his head in his hands. ‘What have they done to our poor people?’ he said after a long silence.” He added, “On his return to Moscow, Yeltsin would confess the pain he had felt after the Houston excursion: the ‘pain for all of us, for our country so rich, so talented and so exhausted by incessant experiments.’ ”

He wrote that Mr. Yeltsin added, “I think we have committed a crime against our people by making their standard of living so incomparably lower than that of the Americans.” An aide, Lev Sukhanov, was reported to have said that it was at that moment that “the last vestige of Bolshevism collapsed” inside his boss.

https://www.cato.org/blog/happy-yeltsin-supermarket-day

A Guide to Understanding the Hoax of the Century – Tablet Magazine

This is very much a dispatch from the far reaches of the paranoid style in American politics. But even if you wouldn’t put the pieces together the same way that the author does (and I wouldn’t), the pieces themselves are fascinating and troubling. And I’m in full agreement that our recent approach to disinformation and the double-thinkish ‘malinformation’ is an alarming trend.

In his last days in office, President Barack Obama made the decision to set the country on a new course. On Dec. 23, 2016, he signed into law the Countering Foreign Propaganda and Disinformation Act, which used the language of defending the homeland to launch an open-ended, offensive information war.

Something in the looming specter of Donald Trump and the populist movements of 2016 reawakened sleeping monsters in the West. Disinformation, a half-forgotten relic of the Cold War, was newly spoken of as an urgent, existential threat. Russia was said to have exploited the vulnerabilities of the open internet to bypass U.S. strategic defenses by infiltrating private citizens’ phones and laptops. The Kremlin’s endgame was to colonize the minds of its targets, a tactic cyber warfare specialists call “cognitive hacking.”

[…]

The point was echoed by Michael Lumpkin, who headed the State Department’s Global Engagement Center (GEC), the agency Obama designated to run the U.S. counter-disinformation campaign. Lumpkin singled out the Privacy Act of 1974, a post-Watergate law protecting U.S. citizens from having their data collected by the government, as antiquated. “The 1974 act was created to make sure that we aren’t collecting data on U.S. citizens. Well, … by definition the World Wide Web is worldwide. There is no passport that goes with it. If it’s a Tunisian citizen in the United States or a U.S. citizen in Tunisia, I don’t have the ability to discern that … If I had more ability to work with that [personally identifiable information] and had access … I could do more targeting, more definitively, to make sure I could hit the right message to the right audience at the right time.”

The message from the U.S. defense establishment was clear: To win the information war—an existential conflict taking place in the borderless dimensions of cyberspace—the government needed to dispense with outdated legal distinctions between foreign terrorists and American citizens.

Since 2016, the federal government has spent billions of dollars on turning the counter-disinformation complex into one of the most powerful forces in the modern world: a sprawling leviathan with tentacles reaching into both the public and private sector, which the government uses to direct a “whole of society” effort that aims to seize total control over the internet and achieve nothing less than the eradication of human error.

https://www.tabletmag.com/sections/news/articles/guide-understanding-hoax-century-thirteen-ways-looking-disinformation

Riffusion

This really is just insanely cool. What a genius idea — take a machine-learning algorithm that can produce images from text, train it on images of spectrograms, let it interpolate between them, and convert the spectrograms back to audio. I could honestly listen to this for quite a while.
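For a feel for the last step of that pipeline (turning a spectrogram image back into sound), here’s a minimal sketch of the round trip using librosa’s Griffin-Lim-based mel inversion. This is just the general technique with placeholder filenames; Riffusion’s actual conversion code and parameters may differ.

```python
# Audio -> mel spectrogram "image" -> audio again, via Griffin-Lim phase estimation.
# Illustrative only; not Riffusion's actual code. File paths are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("clip.wav", sr=22050)   # load any short audio clip

# Forward: a mel spectrogram is just a 2D array, which is what lets an
# image model treat audio as pictures.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)

# Inverse: recover a waveform from the phase-less spectrogram. Lossy, but
# recognizable -- which is why the trick works at all.
y_rec = librosa.feature.inverse.mel_to_audio(mel, sr=sr, n_fft=2048, hop_length=512)

sf.write("clip_reconstructed.wav", y_rec, sr)
```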

Riffusion

Confabulation: saying more than we can know

I’ve been thinking a lot about confabulation lately, because large language models do it too. And in particular, last night I read an interesting study (https://arxiv.org/abs/2305.04388) that shows that LLMs, asked to explain their decisions, come up with a plausible story that doesn’t necessarily reflect their actual decision process. This is very much something humans do; for examples, see the post quoted below, particularly the section on choice blindness.

We have so far explored confabulation in patients with brain damage. Do neurotypical, everyday people produce “honest lies”?

We confabulate all the time. We just don’t realize that we are.

In Telling More Than We Can Know: Verbal Reports on Mental Processes, Nisbett & Wilson (1977) review hundreds of studies, across dozens of disciplines. Their evidence admits a theme: people’s attempts to explain their behavior are almost always unhelpful in identifying the important factors influencing their decisions. Let me briefly review four example findings.

https://kevinbinz.com/tag/insufficient-justification/

Are There Reasons to Believe in a Multiverse? | Quanta Magazine

This is one of the very few things I’ve ever read that made me feel like I understand the Standard Model just a little bit better.

Although in all honesty he really lost me when he said, ‘…if I gave you the Standard Model, you wouldn’t come back to me and tell me about the existence of a giraffe.’

(I’ve only read the transcript; no idea whether it’s better or worse in its original audio form)

https://www.quantamagazine.org/are-there-reasons-to-believe-in-a-multiverse-20230517/