We’ve all heard the myth: goldfish have a memory span of just a few seconds. Biologists have largely debunked that claim, but it’s still a useful metaphor in tech, especially when talking about memory, risk, and AI. The problem is, large language models (LLMs) are not goldfish. In fact, they have remarkable memory. And increasingly, that memory isn’t just session-based: it’s persistent, long-term, and connected to other systems. That changes everything.