12 Comments
Max Räuker:

> With this agreement, Apple has cast its vote that Google’s AI models will outcompete OpenAI’s

That claim seems unlikely to represent Apple’s perspective, no? Siri use cases seem fairly limited in terms of necessary frontier capabilities.

IC Rainbow:

> Iran halted its planned executions of protesters

It didn't.

iamtodor:

In 1939, the Molotov-Ribbentrop Pact divided Poland. Now it's 2026, and seeing the war in Ukraine that has lasted four years, the red-head monkey's plans to conquer the island, and China's mood towards Taiwan makes me slightly concerned.

Mateusz Bagiński:

I'm kind of surprised that there's no forecasters' estimate of [some specific relevant Trump Greenland thing] happening (except for the prevention attempt via bill).

Brian Tan:

There was one in the previous week; maybe their estimate hasn't changed much?

Mateusz Bagiński:

My understanding was that the reason they switched from Green to Yellow was the Greenland thing.

Nuño Sempere:

Yes, a combination of Trump's insistence, his announced tariffs and Europe preparing countermeasures, plus Trump mulling invoking the Insurrection Act. The underlying driver is somewhat bellicose Trump decision-making.

No Magic Pill:

Do you have examples of "costly action[s] at the individual level" for both orange and red alert levels? Pulling money out of the U.S.? Physically moving out of the U.S.? Stocking up on guns, ammo, food, and water?

Nuño Sempere:

It depends on the threat. Some might be: physically moving out of the US into a more stable country, making intense financial bets, orienting one's institution around addressing a threat, or paying lots of money to manufacture a vaccine or to import masks.

No Magic Pill:

Cool to see how varied the actions are, which makes sense given how varied the threats are! A follow-up question: does your team have pre-determined thresholds for when to take said actions, or is that determined in real time, on a case-by-case basis? For example: if a U.S. president openly defied a SCOTUS order, then take action A; if B people die from a disease, take action C; or, more generally, if X happens, do Y.

I wonder how valuable it is to define these event-action pairs in "quiet times" (i.e., before bad things happen) so there's no moving the goalposts later on, potentially ending in a boiling frog situation. My guess is that it's probably good to get a rough outline of these early on, but keep it flexible enough to be changed depending on the exact circumstances.