Then Ulbricht walked into the public library and sat down at the table directly in front of me, and suddenly, as I was reading, I could look up and see exactly the chair he had been sitting in, where the plainclothes police had positioned themselves, and how they had arranged a distraction.
Having this tableau unexpectedly unfold right in front of my eyes was a fascinating experience, and it certainly made the article a lot more immersive!
[1] https://www.wired.com/2015/05/silk-road-2/
EDIT: to be clear, I was not present for the arrest. I was reading the magazine, some years after the arrest, but in the same place as the arrest. (I didn’t qualify the events with “I read that...” since I thought the narrative ellipsis would be obvious from context; evidently not.)
So while wolfgang42 wasn't there when Ulbricht was actually arrested, their realization created a vivid mental image of the event unfolding in that space, which made the story feel more immersive.
In short: they were reading about an old event which happened to have occurred in the very spot where they were sitting at that moment. Hope that clears it up!
Generative AI has all but solved the Frame Problem.
Those expressions were intractable because of the impossibility of representing, in logic, all the background knowledge required to understand the context.
It turns out it is possible to represent all that knowledge in compressed form, via statistical summarisation applied to humongous amounts of data and processing power that were unimaginable back then. That puts the knowledge within reach of the algorithm processing the sentence, which is thus capable of understanding the context.
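As a minimal sketch of what that compression buys you (assuming the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which is mentioned above): the weights of a masked language model are exactly this kind of statistically summarised background knowledge, and they resolve context without a single hand-written rule.

    from transformers import pipeline

    # The model weights are the "compressed form": statistics
    # distilled from a huge text corpus.
    fill = pipeline("fill-mask", model="bert-base-uncased")

    # "bank" needs different background knowledge in each sentence;
    # the compressed statistics pick a sensible word either way.
    for sentence in [
        "She sat on the river bank and watched the [MASK] flow past.",
        "She went to the bank to open a new [MASK] account.",
    ]:
        top = fill(sentence)[0]  # highest-scoring completion
        print(sentence, "->", top["token_str"], round(top["score"], 2))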
The rules for translation are themselves the result of intelligence; when the thought experiment is made real (I once saw an example on TV), those rules are written down by humans, using human intelligence.
A machine which itself generates these rules from observation (see the toy sketch below the footnote) has at least the intelligence* that humans applied specifically in creating the documents expressing the same rules.
That a human can mechanically follow those same rules without understanding them says as much, and as little, as the fact that the DNA sequences within the neurones in our brains are not themselves directly conscious of higher-level concepts such as "why is it so hard to type 'why' rather than 'wju' today?", despite being the foundation of the intelligence-producing process of natural selection and evolution.
* Well, the capability: I'm open to the argument that AIs are thick, given how many more examples they need than humans do, and are simply making up for it by being very, very fast, squeezing the equivalent of several million years of human experience into a month of wall-clock time.
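To make the rule-generating versus rule-following distinction concrete, here is a toy sketch (the substitution cipher and example pairs are invented for illustration): inducing the rules from observation is the part that takes intelligence; applying them afterwards is purely mechanical, like the person in the room.

    # Toy sketch: generating rules from observation vs. following them.
    def induce_rules(examples: list[tuple[str, str]]) -> dict[str, str]:
        """Build a substitution table purely from observed input/output pairs."""
        rules: dict[str, str] = {}
        for plain, coded in examples:
            for p, c in zip(plain, coded):
                rules[p] = c
        return rules

    def apply_rules(rules: dict[str, str], text: str) -> str:
        """Mechanically follow the rules; no understanding required."""
        return "".join(rules.get(ch, ch) for ch in text)

    observations = [("abc", "bcd"), ("cab", "dbc")]  # made-up observations
    print(apply_rules(induce_rules(observations), "bab"))  # -> cbc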
Minds shuffle information. Including about themselves.
Paper whose information is shuffled by rules, exhibiting intelligence and awareness of "self", is just ridiculously inefficient. Not inherently less capable.