zlacker

1. waffle+(OP) 2023-05-10 16:01:34
I believe models should be allowed to ingest everything, just as a human is. We are not yet at the stage where AI is autonomous: current models are designed to require human agency for input, human agency for evaluating output, and finally human agency for disseminating selected output. This last stage is well understood in the field of photography, but currently ignored in AI stewardship dialogues. Ultimately, it is the responsibility of the human agent who selects an AI information product to determine its legality and appropriateness, just as if they had snapped a photograph and were wrestling with whether it should be distributed in a particular medium. It takes a fairly selfish consciousness to become obsessed with preventing AI models from accessing information while disregarding the collective benefits that rich information availability brings to training.
replies(1): >>musTY8+kx1
2. musTY8+kx1 2023-05-10 23:48:39
>>waffle+(OP)
I like your view. From my limited knowledge, I speculate that if AI were developed further, and given as much publicly available data across a variety of scholarly fields as reasonably possible, it could potentially use statistics to surface correlations across disciplines that a human would never think of. Whether it would revolutionize analytics I can't say, as I'm not qualified to judge, but it's fun to dream of the positive change these tools could bring.