zlacker

[parent] [thread] 11 comments
1. jacque+(OP)[view] [source] 2022-12-12 12:32:28
But the bar really isn't 'no human can tell'; the bar is 'the bulk of humans can't tell'.

> Things would be quite different if an AI could interpret new information and form opinions, but even if GPT could be extended to do so, right now it doesn't seem to have the capability to form opinions or ingest new information (beyond a limited short term memory that it can use to have a coherent conversation).

Forming opinions is just another mode of text transformation. Ingesting new information is either a conscious decision not to let the genie out of the bottle just yet, or a performance limitation. Neither of those should be seen as cast in stone: the one is a matter of making the model incremental (which should already be possible), the other merely a matter of time.

replies(2): >>sheeps+g4 >>mhb+Yi
2. sheeps+g4[view] [source] 2022-12-12 13:08:35
>>jacque+(OP)
A true AI will not have one opinion. It will realize there are many truths: one person's truth is really a perspective based on their inputs, which differ from another's. Change the inputs and you’ll often get a different output.

ChatGPT further proves this notion - you can ask it to prove/disprove the same point and it will do so quite convincingly both times.

replies(5): >>jacque+k4 >>mdp202+a5 >>shanus+ib >>jerf+7c >>whywhy+lh
3. jacque+k4[view] [source] [discussion] 2022-12-12 13:09:36
>>sheeps+g4
> ChatGPT further proves this notion - you can ask it to prove/disprove the same point and it will do so quite convincingly both times.

Just like any lawyer, then, depending on who foots the bill.

replies(1): >>voakba+bl
4. mdp202+a5[view] [source] [discussion] 2022-12-12 13:15:53
>>sheeps+g4
> you can ask it to prove/disprove the same point and it will do so quite convincingly both times

Probably because "in the night of reason everything is black"; probably because it is missing the very point, which is to get actual, real, well-argued, solid insight on matters!

You use Decision Support Systems to better understand a context, not to be handed a well-dressed thought-toss!

5. shanus+ib[view] [source] [discussion] 2022-12-12 14:04:08
>>sheeps+g4
That's a great point that I haven't seen in the GPT-related conversations. People view the fact that it can argue convincingly for both A and ~A as a flaw in GPT and a limitation of LLMs, rather than as an insight about human reasoning and motivation.

Maybe it's an illustration of a more general principle: when people butt up against limitations that make LLMs look silly or inadequate, often their real objection is to some hard truths about reality itself.

6. jerf+7c[view] [source] [discussion] 2022-12-12 14:10:07
>>sheeps+g4
Do not mistake ChatGPT for AI in general. ChatGPT, GPT, and transformers in general are not the end state of AI. They are one particular manifestation and projecting forward from them is drawing a complex hypershape through a single point (even worse than drawing a line through a single point).

It is probably more humanly-accurate to say that ChatGPT has no opinions at all. It has no understanding of truth, it has no opinions, it has no preferences whatsoever. It is the ultimate yes-thing; whatever you say, it'll essentially echo and elaborate on it, without regard for what it is that you said.

This obviously makes it unsuitable for many things. (This includes a number of things for which people are trying to use it.) This does not by any means prove that all possible useful AI architectures will also have no opinion, or that all architectures will be similarly noncommittal.

(If you find yourself thinking this is a "criticism" of GPT... you may be too emotionally involved. GPT is essentially like looking into a mirror, and the humans doing so are bringing more emotion to that than the AI is. That's not "bad" or something, that's just how it works. What I'm saying here isn't a criticism or a praise; it's really more a super-dumbed-down description of its architecture. It fundamentally lacks these things. You can search it up and down for "opinions" or "truth", and it just isn't there in that architecture, not even implied in the weights somewhere where we can't see it. It isn't a good thing or a bad thing, it just is a property of the design.)

replies(2): >>eterna+Fm >>sheeps+9Jf
7. whywhy+lh[view] [source] [discussion] 2022-12-12 14:40:09
>>sheeps+g4
I wouldn’t consider that an AI but more a machine that tells me what I want to hear.

If it’s intelligent, it should, after consulting all the facts, hold an opinion in as high a regard as humans do their religious and political beliefs.

And I mean one it came to through its own conclusions, not a hard-coded “correct” one the devs gave it; something that makes us uncomfortable.

8. mhb+Yi[view] [source] 2022-12-12 14:48:58
>>jacque+(OP)
None of this matters. The reason comments are valuable is that they are a useful source of information. Part of the transaction cost of deciding whether a comment is useful is how much additional work is required to evaluate it.

Comments are ascribed credibility based on the trust the reader has in the commenting entity, on whether the comment is consistent with the reader's priors, and on researching the citations made in the comment, whether explicit or implicit.

Since GPT can confidently produce comments which are wrong, there is no trust in it as a commenting entity. Consequently, everything it produces needs to be further vetted. It's as if every comment were a bunch of links to relevant, but not necessarily correct, sources. Maybe it produces some novelty which leads to something worthwhile, but the cost is high until it can be trusted. Which is not now.

If a trusted commenter submits a comment by GPT, then he is vouching for it and it is riding on his reputation. If it is wrong, his reputation suffers, and trust in that commenter drops just as it would regardless of the genesis of the comment.

9. voakba+bl[view] [source] [discussion] 2022-12-12 14:59:51
>>jacque+k4
Right? If anything, this kind of mental flexibility is more human than not.
10. eterna+Fm[view] [source] [discussion] 2022-12-12 15:08:13
>>jerf+7c
The mirroring/reflecting behavior of ChatGPT is a defining aspect.

I agree that this is not general AI. I think we could be looking at the future of query engines feeding probabilistic compute engines.

replies(1): >>jerf+GJ
11. jerf+GJ[view] [source] [discussion] 2022-12-12 16:43:54
>>eterna+Fm
Yeah. If you look at my comments about ChatGPT on HN it may look like I'm down on the tech. I'm really not, and it does have interesting future uses. It's just that the common understanding is really bad right now, and that includes people pouring money into trying to make the tech do things it is deeply and foundationally unsuited for.

But there's a lot of places where a lack of concept of "truth" is no problem, like as you say, query engines. Query engines aren't about truth; they're about matching, and that is something this tech can conceivably do.

In fact I think that would be a more productive line in general. This tech is being kind of pigeonholed into "provide it some text and watch it extend it", but it is also very easy to fire it at existing text and do some very interesting analyses based on it. If I were given this tech and a mandate to "do something" with it, this is the direction I would go, rather than trying to bash the completion aspect into something useful. There are some very deep abilities to do things like "show me things in my database that directly agree/disagree/support/contradict this statement", based on plain English rather than expensive and essentially-impossible-anyhow semantic labeling. That's something I've never seen a query engine do before. Putting in keywords, and all the variants on that idea, is certainly powerful, but this could be the next level beyond that. (At the cost of great computation power, but hey, one step at a time!) But it takes more understanding of how the tech works to pull something like this off than it takes to play with it.
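
(A rough sketch of that kind of plain-English matching over a small text database, using off-the-shelf sentence embeddings; the library, model name, and toy data are assumptions for illustration, not anything from the comment above, and distinguishing agreement from contradiction would need an NLI-style model rather than similarity alone.)

    # Rank statements in a small text "database" by semantic similarity to a
    # plain-English query. sentence-transformers and the model name are assumed.
    from sentence_transformers import SentenceTransformer, util

    documents = [
        "The new release improved query latency by 30%.",
        "Latency regressed badly after the last deployment.",
        "The documentation now covers the streaming API.",
    ]
    query = "Performance got worse after we shipped the update."

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
    doc_vecs = model.encode(documents, convert_to_tensor=True)
    query_vec = model.encode(query, convert_to_tensor=True)

    # Cosine similarity between the query and every stored statement.
    scores = util.cos_sim(query_vec, doc_vecs)[0]
    for doc, score in sorted(zip(documents, scores.tolist()), key=lambda p: -p[1]):
        print(f"{score:.2f}  {doc}")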

There's probably a good blog post here about how the promise of AI is already getting blocked by the complexity of AI, meaning that few people who use it even seem to superficially understand what it's doing, and how this is going to get worse and worse as the tech continues to get more complicated, but it's not really one I could write. Not enough personal experience.

12. sheeps+9Jf[view] [source] [discussion] 2022-12-16 13:53:15
>>jerf+7c
We give ourselves (humans) too much credit. How does a child learn? By observing, copying, and practicing (learning from mistakes). ChatGPT differs only in that it has learned from the experience of millions of others over a period of hundreds of years. Suffice to say, it can never behave like a single human being, since it has lived through the experience of so many.

How does one articulate “conscience” or “intelligence” or an opinion? I think these are all a product of circumstances/luck/environment/slight genetic differences (better cognition, hearing, sight, or some other sense; genes could define different abilities to model knowledge, such as backtracking, etc.).

So to get a “true”, human-like, opinionated personality, we’ll need to restrict its learning to that of one human. Better yet, give it the tools to learn on its own and let it run free inside a sandbox of knowledge.
