zlacker

1. hnuser (OP) 2023-07-05 17:49:50
This is something worth considering. If I'm understanding this right, and we're all about to significantly augment our intelligence, we should think about how that capability gets used: how strict an "ideal" LLM should be with its guardrails, where those guardrails should sit, how eager the LLM should be to effect change on the world in its own right, and how confident it should be in itself when it knows it knows better than even the most educated humans. Once it's been trained on all of human knowledge and data, can recall any of it better than any lone human, is aware of every nuance, and can plan and execute any task or project a computer is capable of doing, the biggest question is what we're going to ask it to do. I'm toying around with trying to teach LLMs autonomy, and I'm probably closer than most to the "just let an AI that's smarter than all of us figure out how to increase our prosperity as efficiently as possible, and get out of its way" camp, but we have to be aware that we still have at least a little influence over setting its course.