Most people will agree that LLMs are pretty neat, but now instead of every startup being "like Uber but for ..." they are "like ChatGPT but for ...".
Everyone is trying to chuck AI into their products, and most of the time there is no need, or the product is just a thin fine-tune over an existing LLM that adds essentially zero value. HN is fairly negative on that sort of thing, I think (rightly so, IMO).
Engineers are expensive, so the cost/benefit analysis is actually a little more complex, and different problems will have different solutions.
Let's high-ball US residential electricity prices at about 25¢ per kWh. So 25¢ of electricity gets us 100 GPT-4 queries, and $25 gets us 10_000.
Let's low-ball average US developer salaries at a cool $100_000/yr. Fifty 40-hour weeks in a year makes 2_000 working hours, which makes $50 per hour. So with our very generous margins all working against us, a US developer would have to be making 20_000 GPT-4 queries an hour, or a little over 5 per second, to cost as much in electricity as they make in salary.
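A quick sanity check of that arithmetic in Python, using the same illustrative numbers (the 100-queries-per-25¢ figure is the assumption carried over from above):

    # Back-of-the-envelope: electricity cost of GPT-4 queries vs. developer salary
    electricity_cost_per_query = 0.25 / 100       # 25¢ buys ~100 queries -> $0.0025 per query
    hourly_salary = 100_000 / (50 * 40)           # $100k over 50 forty-hour weeks -> $50/hour

    queries_to_match_salary = hourly_salary / electricity_cost_per_query
    print(queries_to_match_salary)                # 20000.0 queries per hour
    print(queries_to_match_salary / 3600)         # ~5.6 queries per second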
I have no real point to this story except that electricity is much cheaper than most people have a useful frame of reference for. My mom used to complain about teenage me not running the dishwasher at full load until I worked out that the electricity and water together cost about 50¢ a run and offered her a clean $20 to offset my next 400 only-three-quarters-full runs.
Your bonus programming tip: Many programming languages let you legally use underscores to space large numbers! Try "million = 1_000_000" next time you fire up Python.
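A small demo of that tip (works in Python 3.6+; Java, Rust, and newer JavaScript have similar underscore digit separators):

    million = 1_000_000
    salary = 100_000

    print(million == 1000000)   # True -- the underscores are purely visual
    print(f"{salary:,}")        # 100,000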
The AI algos will get 100x faster through a combination of hardware and software optimizations. Then "deterministic vs. AI" will mean the unnoticeable difference between displaying some info to the user in 0.001s vs. 0.1s. Then AI will become the default.
I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of AI is unacceptable.
https://github.com/verdverm/pypge
https://github.com/verdverm/go-pge/blob/master/pge_gecco2013...
The reviews had awesome and encouraging comments.
> I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of AI is unacceptable.
For high-assurance apps, I agree there will always be a need, sure. Of course, those high-assurance apps will be supervised by AI that can inspect them and raise alarm bells if anything unexpected happens.
For consumer apps, though, an app might actually feel less "random" to the user if there's an AI that can intuit exactly what they're trying to accomplish when they perform certain actions in the app (much like a friendly, tech-savvy teacher sitting down with you to help you get something done in the app).
It's only been three years since AI Dungeon opened my mind to how powerful generative AI could be, and GPT-4 blows that out of the water. Whatever gets released three more years from now will likely blow GPT-4 out of the water.
AI is already considerably smarter than the dumbest humans, in terms of its ability to hold a conversation in natural language and make arguments based on fact. It's only a matter of time before it's smarter than the average human, and at the current pace, that time will arrive within the next decade.
All useful technology improves over time, and I see no reason to believe AI will be any different.