Paper On AI Monosemanticity: “The AI gradually shifts to packing its concepts into tetrahedra (three neurons per four concepts) and triangles (two neurons per three concepts). When it reaches digons (one neuron per two concepts) it stops”

Link. Heroic effort to explain a very technical paper. Recommended for those who make confident statements about the near future.

“Anthropic’s interpretability team announced that they successfully dissected one of the simulated AIs in its abstract hyperdimensional space.”
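The geometry in that quote is concrete enough to sketch. A minimal illustration (my own toy example, not from the paper): pack three concept directions into two neurons by spacing them 120° apart, and a ReLU readout still recovers a single active concept with no interference, since the off-concept dot products are cos(120°) = −0.5.

```python
import math

# Toy "triangle" superposition: three concept directions
# packed into a two-neuron space, 120 degrees apart.
angles = [math.pi / 2 + i * 2 * math.pi / 3 for i in range(3)]
vecs = [(math.cos(a), math.sin(a)) for a in angles]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def relu(x):
    return max(0.0, x)

# Activate concept 0 alone: its readout is 1, the other two are
# clipped to 0 because their dot products are cos(120°) = -0.5.
x = vecs[0]
readouts = [relu(dot(x, v)) for v in vecs]
print([round(r, 3) for r in readouts])  # → [1.0, 0.0, 0.0]
```

The same trick scales up: a tetrahedron fits four such directions into three neurons, and a digon fits two into one, which is where the quote says the packing stops.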

Long-Term Care Insurance Failure.

Link. “Insurers counted on policy lapse rates — people giving up their policies or defaulting on payments — of about 4 percent annually. The actual lapse rate was closer to 1 percent.”

LTC needs to be a lot cheaper. Or just do a pandemic every five years.
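Back-of-envelope on why that lapse gap is ruinous (my numbers, not the article's): the fraction of policies still in force after 30 years is (1 − lapse)^30, so 1% lapse leaves roughly two and a half times as many policyholders alive on the books to eventually claim as the 4% the insurers priced for.

```python
# Fraction of policies still in force after 30 years
# under the priced-for vs. actual annual lapse rates.
years = 30
for lapse in (0.04, 0.01):
    in_force = (1 - lapse) ** years
    print(f"{lapse:.0%} lapse -> {in_force:.0%} still in force")
# 4% lapse -> ~29% still in force
# 1% lapse -> ~74% still in force
```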

Omicron biology. 🆓

Link. “In August, a variant called BA.2.86 emerged with a host of new mutations — likely the result, once again, of evolution taking place in an immunocompromised person.”

By “immunocompromised” they mean “probably HIV”. It’s easy to understand why they chose the cautious term. People living with HIV can change the evolutionary dynamics of a pandemic.

“[vaccine] development will slow down as governments stop paying for genetic sequencing of new variants”

The Atlantic has the best OpenAI summary.

Link. “In conversations between The Atlantic and 10 current and former employees at OpenAI, a picture emerged of a transformation at the company that created an unsustainable division among leadership” (Should be free article link)

PS. Do not care for “zealous doomers” as a way to describe people worried about what AI tech will bring.