GPT can "clone" the "semantic essence" of everyone who converses with it: generate new questions with prompts like "What interesting questions could this user also have asked, but didn't?" and then have an LLM answer them. This yields high-quality, novel, human-like data.
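A minimal sketch of that loop, assuming a generic `call_llm(prompt) -> str` interface; the function names here are hypothetical, not any specific API:

```python
def question_prompt(conversation: str) -> str:
    """Build a prompt asking the LLM to invent questions in the user's voice."""
    return (
        "Here is a conversation with a user:\n\n"
        f"{conversation}\n\n"
        "What interesting questions could this user also have asked, "
        "but didn't? List one per line."
    )

def clone_and_answer(conversation: str, call_llm) -> list[tuple[str, str]]:
    """Generate novel questions in the user's style, then answer each one.

    `call_llm` is a placeholder for whatever completion API you use.
    """
    questions = call_llm(question_prompt(conversation)).splitlines()
    return [(q, call_llm(q)) for q in questions if q.strip()]
```

The resulting (question, answer) pairs are the synthetic training data.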
For instance, when cloning Paul Graham's essence, the LLM came up with "SubSimplify": a service that bundles subscriptions to all the different streaming services into one customizable package, using a chat agent as a recommendation engine.
GPT-4 in image-viewing mode doesn't seem to be nearly as smart as in text mode, and image generation IME barely works.
Explicit planning with discrete knowledge is GOFAI, and I don't think it's workable.
There is whatever's going on here: https://x.com/natolambert/status/1727476436838265324?s=46