“as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding”

Link. “… a mathematically provable argument for how and why an LLM can develop so many abilities… when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected”

Link. The researchers applied techniques from random graph theory to explain how these models combine skills and develop unexpected new abilities.

We don’t know if this is how *we* “understand,” but I suspect it’s something similar. We are not magical.

“Many people found it a little bit eerie how much GPT-4 was better than GPT-3.5, and that happened within a year. Does that mean in another year we’ll have a similar change of that magnitude? I don’t know. Only OpenAI knows.”

2019 brain implant controls woman’s severe OCD

Link. The implant was done primarily for her epilepsy; she suggested treating the OCD as well. “work involved the coordination of researchers from OHSU, UCLA, Stanford University, and the University of Pennsylvania.”

Sounds like, for her, controlling the OCD was even more valuable than managing the epilepsy. It’s not a complete cure, though.

“China was ramping up an extensive hacking operation geared at taking down the United States’ power grid, oil pipelines and water systems in the event of a conflict over Taiwan.”

Link. We live in a glass house, and we can’t even make enough weapons to supply Ukraine.

“Hackers for Volt Typhoon compromised hundreds of Cisco and NetGear routers, many of them outdated models no longer supported by manufacturer updates or security patches, in an effort to embed an army of sleeper cells that would be activated in a crisis.”

Lowe on the Aduhelm debacle

Link. “some members of the advisory committee that recommended against that approval resigned in protest, and Medicare later declared that they would not pay for the drug”

The FDA should review that approval the way aviation reviews plane crashes.