I just feel like if we were genuinely on the cusp of an AI revolution like it is claimed, we wouldn't need to keep seeing this sort of thing. Like I feel like a lot of the industry is full of flim-flam men trying to scam people, and if the tech was as capable as we keep getting told it is there'd be no need for dishonesty or sleight of hand.
That being said, I think AI has a lot more immediately useful cases than cryptocurrency. But it does feel a bit overhyped by people who stand to gain a tremendous amount of money.
I might get slammed/downvoted on HN for this, but really wondering how much of VC is filled with get-rich-quick cheerleading vs supporting products that will create strong and lasting growth.
The more interesting one is the closed door conversations. Earlier this year, for example, it seemed there was a pattern of VCs heavily invested in AI asking the other software companies they invested in to figure out how to make AI useful for them and report back. I.e. "we invested heavily in hype, tell us how to make it real."
Short AI and tech, and just hope you get the timing right.
If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then once you had trained the model you could use even more pen and paper to step through the correct prompts to arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing the results that LLMs are able to achieve. But let's not kid ourselves and start throwing about terms like AGI or emergence just yet. That makes a mechanical process seem magical (as computers in general do).
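To make the "completely mechanical" point concrete, here's a minimal sketch (a made-up toy bigram table, not a real LLM) showing that with fixed weights and greedy decoding, generating text is nothing but repeated table lookups and comparisons you could do by hand:

```python
# Hypothetical bigram "weights": score of the next token given the current one.
# In a real LLM these numbers come from training, but once fixed, generation
# is pure arithmetic -- the same in kind as this toy version.
WEIGHTS = {
    "the": {"cat": 2.0, "dog": 1.5, "end": 0.1},
    "cat": {"sat": 3.0, "end": 0.5},
    "dog": {"ran": 2.5, "end": 0.5},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def generate(token, max_steps=10):
    """Greedy decoding: at each step pick the highest-scoring next token.
    No randomness, no 'understanding' -- just a mechanical argmax."""
    out = [token]
    for _ in range(max_steps):
        scores = WEIGHTS.get(token, {})
        if not scores:
            break
        token = max(scores, key=scores.get)  # deterministic table lookup
        if token == "end":
            break
        out.append(token)
    return out

print(generate("the"))  # ['the', 'cat', 'sat']
```

Every step here is something a patient person with pen and paper could replicate, which is exactly the argument above.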
I should add, it also makes sense why it works: just look at the volume of human knowledge in the training data. It's that corpus, carrying quite literally the mass of mankind's knowledge, genius, logic, inferences, language and intellect, that does the heavy lifting.
To give a quick example vis-a-vis LLMs: I can reason and understand well enough without having been 'trained' on nearly the entire corpus of human literature. LLMs, of course, do not reason or understand, and their output is determined by human input. That alone indicates our minds work differently from LLMs.
I wonder how ChatGPT would fare if it were trained on birdsong and then asked for a rhyming couplet?