zlacker

[parent] [thread] 5 comments
1. Jatama+(OP)[view] [source] 2022-12-12 07:43:55
> if it's an area of knowledge you're not really that familiar with

That's actually a dangerous way to use ChatGPT. Since you don't know the real answer, you won't be able to tell when it gets something wrong.

replies(3): >>city17+L4 >>myster+py >>Zachsa+hG
2. city17+L4[view] [source] 2022-12-12 08:32:21
>>Jatama+(OP)
But if you know a lot about something, why would you ask ChatGPT a question about it (especially if you assume it doesn't have expert knowledge)?
replies(1): >>system+f6
3. system+f6[view] [source] [discussion] 2022-12-12 08:45:40
>>city17+L4
I wouldn't ask ChatGPT anything. It is still writing weird articles that sound meaningful yet lack real arguments: it finds attributes of the objects being compared and places them in sentences as if it were comparing them. It just doesn't make sense. ChatGPT is nice, but it has a long way to go before it becomes useful that way.
4. myster+py[view] [source] 2022-12-12 12:54:58
>>Jatama+(OP)
Honestly this could be the silver lining of ChatGPT. Some people trust the answers of random commenters on the internet for anything from how things work technically to medical advice. Having an ever-increasing chance that any given commenter literally knows nothing except how to string words together might break that habit.
replies(1): >>krageo+nK
5. Zachsa+hG[view] [source] 2022-12-12 13:56:58
>>Jatama+(OP)
I've been using GPT-3's Copilot when designing SQL queries. I'm not very comfortable with SQL, but I am with the mathematical basis. It's a powerful tool for learning language syntax.

I've experimented with systems design using it, but as I expected, it's a big fat no.

If a bot GPT account does not have human supervision, it will spit out all sorts of rubbish / be easy to spot. Otherwise the manager will just be a person who spams low-quality content. I'm concerned, but we have time to find a solution.

6. krageo+nK[view] [source] [discussion] 2022-12-12 14:24:11
>>myster+py
> Having an ever-increasing chance that any given commenter literally knows nothing except how to string words together might break that habit.

This is already figuratively the case and it has had no impact on this phenomenon. Why would the new situation be any different?
