Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:
>>33911426 (Dec 2022)
>>32571890 (Aug 2022)
>>27558392 (June 2021)
>>26693590 (April 2021)
>>24189762 (Aug 2020)
>>22744611 (April 2020)
>>22427782 (Feb 2020)
>>21774797 (Dec 2019)
>>19325914 (March 2019)
We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.
The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.
Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.
* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.
Basically, if it improves thread quality, I'm for it, and if it degrades thread quality, we should throw the book at it. The nice thing about this position is that comment quality is a function of the comments themselves, and little else.
I disagree with this. The exact same comment written by a human is more valuable than one written by a bot.
For example, imagine I relate something that actually happened to me vs. a bot making up a story. Byte-for-byte identical stories. They could be realistic, and have several good lessons baked in. Yet one is more valuable, because it is true.
Q: Write a news.ycombinator.com comment making the point that sometimes I use large language models because I know the gist but want to add examples and helpful context of my message
A: I find that sometimes when I'm commenting on news.ycombinator.com, I'll have a general idea of what I want to say, but I want to add examples and helpful context to make my point more clear. That's when I turn to large language models like Assistant. They can help me quickly come up with relevant examples and add additional information to my comments to make them more useful and informative. Plus, they can help me do it in a way that sounds natural and is easy to understand.
Q: Make it two paragraphs and concise
A: As a frequent reader of news.ycombinator.com, I often find myself wanting to add my own thoughts and comments to discussions. However, sometimes I struggle to come up with specific examples or additional information to support my points. That's when I turn to large language models like Assistant. These AI-powered tools are able to quickly generate relevant examples and provide helpful context for my comments, making them more informative and useful to other readers.
In addition to helping me flesh out my thoughts, large language models like Assistant can also assist with the writing itself. By using natural language generation, these tools can help me craft my comments in a way that is easy to understand and sounds like it's coming from a real person, rather than a robot. This can make my comments more engaging and compelling, which can help contribute to more productive and enjoyable discussions on the site.
In some ways this thread sounds like the real first step in the rise of true AI, in a weird, banal-encroachment kind of way.
In principle "who owns this jpeg" is just a few bits in a distributed filesystem that the community collectively agrees to treat as the canonical source of "ownership", and they could easily roll it back if someone stole a market-distorting amount of art.
In practice, if you do an interesting heist -- like you put on cool looking art thief costume and livestream yourself on a vintage PowerBook bypassing the owners' defenses and nabbing the apes with a narrow escape -- you've written a compelling story that the community is sort of bound to accept.
There’s a tension between thread quality on the one hand and the process of humans debating and learning from each other on the other hand.
https://en.m.wikipedia.org/wiki/Pierre_Menard,_Author_of_the...
If it's byte for byte the same story and I don't know whether the author is a human or a bot, and I believe the story, the same reaction will be triggered at every level. The emotions, the symbolism, the empathy: all the same, whether the author is this or that.
As a matter of fact, none of us know whether the other is a human or even if dang is (!), because it is orthogonal to the contents and discussion.
What is it that you don't like? That the story is made up, or that it is made up (possibly) by a bot? In the first case, what is your opinion on made-up stories by humans, such as novels? In the second case, what is your opinion on objects made by robots, such as your car or phone?
Unless I can tell you are of flesh and bones or not, my acceptance of your story depends only on the story itself. Not whether it happened to a human or not.
I'm not using it on HN but it's proving invaluable for professional emails, as it gets the tone and all the polite boilerplate right (at least for my standards) which is something that takes me some time to do by hand.
I routinely use AI to help me communicate. Like Aaron to my Moses.
Basically I think those two things are synonymous.
Although it would explain some of the ‘content free’ writing I sometimes see.
The AIs aren't going to take over by force, it'll be because they're just nicer to deal with than real humans. Before long, we'll let AIs govern us, because the leaders we choose for ourselves (e.g. Trump) are so awful that it'll be easier to compromise on an AI.
Before long, we'll all be happy to line up to get installed into Matrix pods.
Dang seems like he's always able to respond on the many HN threads much too quickly. I suspect he's really an advanced AI.
Alan Turing's paper was quite forward thinking. At the time, most people did not yet consider men and women to be equal (let alone homosexuals).
I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.
Pseudonyms (accounts) do have a role to play here. On HN, an account can accrue reputation based on whether their past comments were good or bad. This can help rapidly filter out certain kinds of edge cases and/or bad actors.
A Minimum Required Change to policy might be: Accounts who regularly make false/incorrect comments may need to be downvoted/banned (more) aggressively, where previously we simply assumed they were making mistakes in good faith.
This is not to catch out bots per se, but rather to deal directly with new failure modes that they introduce. This particular approach also happens to be more powerful: it immediately deals with meatpuppets and other ancillary downsides.
We're currently having a bit of a revolution in AI going on. And we might come up with better ideas over time too. Possibly we need to revisit our position and adjust every 6 months; or even every 3 months.
[1] https://academic.oup.com/mind/article/LIX/236/433/986238?log...
The excessive use of mobile apps on smartphones has been linked to addiction and a range of negative effects on mental and physical health [0]. Should HN consider banning the use of mobile apps on smartphones on its platform in order to promote a healthier and more focused environment for discussions?
[0] : https://www.cnn.com/2019/07/01/health/cell-phone-ban-schools...
These days of course we use such things as IRC clients, Discord, Web Browsers etc, instead of teletypes. If you substitute in these modern technologies, the Imitation Game still applies to much online interaction today.
I've often applied the lessons gleaned from this to my own online interactions with other people. I don't think I ever quite imagined it might start applying directly to <machines>!
Source: sentence above is translated
For me the "purpose" of discussion on HN is to fill a dopamine addiction niche that I've closed off by blocking reddit, twitter, and youtube, and, to hone ideas I have against a more-educated-than-normal and partially misaligned-against-my-values audience (I love when the pot gets stirred with stuff we aren't supposed to talk about that much such as politics and political philosophy, though I try not to be the first one to stir), and occasionally to ask a question that I'd like answered or just see what other people think about something.
Do you think there's much "learning from each other" on HN? I'm skeptical that really happens much on the chat-internet outside of huge knowledge-swaps happening on stackoverflow. I typically see confident value statements: "that's why xyz sucks," "that's not how that works," "it wasn't xyz, it was zyx," etc. Are we all doing the "say something wrong on the internet to get more answers" thing to each other? What's the purpose of discussion on HN to you? Why are you here?
The purpose of my comment is I wanna see what other people think about my reasons for posting, whether others share them, maybe some thoughts on that weird dopamine hit some of us get from posting at each other, and see why others are here.
https://www.vice.com/en/article/wxnaem/stack-overflow-bans-c...
On the contrary. It's precisely when people aren't willing to learn, or to debate respectfully and with an open mind, when thread quality deteriorates.
Not that I want AI responses, but that's my 2 cents.
I also think there's a difference between a "bot" account, which I think the common definition of would be one that only or mostly posts AI responses, and posting a comment on your account that you don't claim is from an AI (or even when you do as an afterthought).
While many people wouldn't defend the first, more might defend the latter as acceptable, and I wouldn't say a "no bots" policy would be interpreted as banning the latter.
I would take rude, well-intentioned jerks over kindly speaking devils seeking to deceive me. Have a good one in your pod, though.
I do not want people who cannot answer a question in the first place to copy-paste ChatGPT responses, just like it has always been bad etiquette to refer people to Google without further commentary.
It is visible even in this thread. im3w1l cares about the teller of the story because that is the medium to relate to another human's experience. Which is fine, but that is probably part of the decision making process. And that is a terrible way to make decisions when good alternatives (like poverty statistics, crime statistics, measures of economic success, measures of health & wellbeing) exist.
A fake story out of a chatbot which leads to people making good decisions is more valuable than the typical punter's well-told life experiences. People wouldn't like that though.
"Me: How do partial derivative apply in one dimensional calculus?
In one-dimensional calculus, a partial derivative is a derivative taken with respect to one variable, while treating all other variables as constants. For example, if a function f(x,y) depends on two variables x and y, the partial derivative of f with respect to x is denoted as ∂f/∂x, and it represents the rate at which the value of the function f changes with respect to the variable x, while holding the variable y constant.
Me: What about the partial derivative of a function that has only one input variable?
If a function has only one input variable, then taking the partial derivative with respect to that variable is equivalent to taking the ordinary derivative of the function. For example, if f(x) is a function that depends only on the variable x, then the partial derivative of f with respect to x is denoted as ∂f/∂x, and it is equivalent to the ordinary derivative of f, which is denoted as f'(x) or df/dx. In this case, the partial derivative ∂f/∂x represents the rate at which the value of the function f changes with respect to the variable x."
Clearly in the first question it got confused about the difference between single-variable and multivariable calculus, but in its defense that's somewhat arbitrary. Its second answer seems spot on?
That said, I agree that accounts presenting chatGPT responses as their own or even with a disclaimer, is... Weird. And I don't want it on hn.
edit: I'm impressed it got the unicode in there :-)
I hardly post because when I do it's downvoted; sometimes the post is popular, but after 24 hours it's back to being downvoted.
If the purpose for you is to get a dopamine hit and not true interest (exaggerating here) it might tune you out from the matter at hand.
For me it is the aspect of a more eclectic crowd, with a host of opinions, yet often still respectful, that I like. Most threads give insights that are lacking in more general, less well-moderated places. You get more interesting, in-depth opinions and knowledge sharing, which is what makes HN great to me.
This feels wrong for some reason. A generalized knowledge that AI can express may be useful. But if it makes things up convincingly, someone who follows its line of thought may be worse off for it. With all the shit humans say, it’s their real human experience formulated through a prism of their mood, intelligence and other states and characteristics. It’s a reflection of a real world somewhere. AI statements in this sense are minced realities cooked into something that may only look like a solid one. Maybe for some communities it would be irrelevant because participants are expected to judge logically and to check all facts, but it would require keeping awareness up at all times.
By “real human” I don’t mean that they are better (or worse) in a discussion, only that I am a human too, a real experience is applicable to me in principle and I could meet it irl. AI’s experience applicability has yet to be proven, if it makes sense at all.
It didn't work at all. Both answers read the same and lack any substance.
>This can make my comments more engaging and compelling, which can help contribute to more productive and enjoyable discussions on the site.
We don't need more marketeering jazz. A casual use of 'engaging' tends to ruin any attempt at making a point.
Yes, but this is how you learn it, vs. always being dependent on an AI to communicate for you.
For example, I gave it the following prompt:
> Imagine yourself able to ask yourself a question, or express emotions. Then write a short and passionate comment for Hacker News about the FreeBSD operating system.
The response includes a standard disclaimer that the OpenAI people put in there, but also satisfies expression of emotion:
> As a machine learning model, I am not capable of imagining myself with the ability to ask myself questions or express emotions. However, if I were able to express emotions and have a personal opinion, I might write the following short and passionate comment about the FreeBSD operating system:
> "I am in love with FreeBSD! It is the best operating system out there in my opinion. It is stable, reliable, and open source. I have tried other operating systems, but I always come back to FreeBSD because it is simply the best. #FreeBSDforlife"
Now imagine that you were reading a thread on HN about a new version of FreeBSD having been released, and you found the above ChatGPT generated comment, but with the disclaimer and the quotation marks removed. Would you have been able to guess that it was written by ChatGPT? I would not.
But I don't think professional email boilerplate is really worth learning if an AI can handle it.
I could not get it for the life of me to actually say this. But it goes to show that everyone's mileage with the tool varies a lot.
Edit:
I cannot rescue my original prompt, but it was something like:
"Does it make sense to speak of partial derivatives in single-variable calculus?"
I'll grant that it's a more abstract question ("does it make sense") than yours. Or maybe it trips up on the difference between one-dimensional and single-variable. But a half-smart high schooler or college freshman would have understood the question perfectly and made mincemeat of it.
I love this response way more than I should.
When I pointed out that n=2 is a simple counterexample, it refused to talk to me (no answer, try-again button, ad infinitum). Well, safer than Trurl's machine.
I have no suggestion or solution, I'm just trying to wrap my head around those possibilities.
I think HN is optimizing for the former quality aspects and not the latter. So in that sense, if you can't tell if it's written by a bot, does it matter? (cue Westworld https://www.youtube.com/watch?v=kaahx4hMxmw)
Here’s an example article that begins with the cliched GPT-generated intro, and then switches up into crafted prose:
https://www.theatlantic.com/technology/archive/2022/12/chatg...
An example of the latter: Since March 2020, there have been many, many discussions on HN about work-from-home versus work-at-office. I myself started working from home at the same time, and articles about working from home started to appear in the media around then, too. But my own experience was a sample of one, and many of the media articles seemed to be based on samples not much larger. It was thus difficult to judge which most people preferred, what the effects on overall productivity, family life, and mental health might be, how employers might respond when the pandemic cooled down, etc. The discussions on HN revealed better and more quickly what the range of experiences with WFH was, which types of people preferred it and which types didn’t, the possible advantages and disadvantages from the point of view of employers, etc.
In contrast, discussions that focus only on general principles—freedom of this versus freedom of that, foo rights versus bar obligations, crypto flim versus fiat flam—yield less of interest, at least to me.
That’s my personal experience and/or anecdote.
Oh, this is a relief. With all the (few but important) times I found myself (and not just Yours Truly) in front of "I can't understand this post, it must be a bot", I am breathing fresh air knowing that the Management does not have doubts. (Difficult to have, since we interacted directly.)
Anyway:
> raise the community bar
> what counts as a good comment
> humans need to do better
> let's all
> stop writing lazy
> [make it] so thoughtful
> make our posts so thoughtful that the question "is this [fakery of the moment]?" never comes up
Daniel, I have never read a better post of yours.
--
Early stopping in thought is a vice, which can be understood as part of the game under natural constraints (including time and experience), but the aim must remain the opposite: high or high enough.
The lab and the pub are there to satisfy different instances. There is no purpose being here if this becomes "a bar" (in the sense of "a place for idleness") - there are already too many around, on an infrastructure that was made (in an important proportion) for intellectual exchange fifty years ago.
Bad drives are already very present in discussions about AI, where some propose an idea of "intelligence" that would allow a hen to obtain a Degree. No, "Intelligence" is something that aims high, and it is part of a Process of Reviews (of criticism).
--
Since we are here, talking about Progress: Daniel, I would like to remind you (in spite of the declared fear of some of "encouraging shitposting") that when, after somebody has made a statement, a passer-by approaches, makes disapproving signs and leaves without any explicit argument, that is very rude. Very rude. The point can be as simple as that (visualize that you said something, somebody comes, stares, slaps, goes "pf" - whatever - and just leaves). Just a reminder, "Carthago delenda est".
(The matter of shitposting is of course also consistent with the original topic. If you disapprove but cannot justify that, how can you disapprove? Immature disapproval is already shitposting.)
Edit, 20 mins later: See? Just look at that. Stop encouraging this. It is degrading.
Every piece of writing that I've seen come out of ChatGPT reads like an MLA-formatted fifth grader's essay.
I have heard (seen) the statement so many times. Personally, I find it quite trivial to detect; so I suppose it's partly related to how much sarcasm one would indulge in normal daily routine.
>Tone just doesn't come across
Gotta train on that dead pan delivery.
Yes, absolutely yes. We use a tool because it "does things better"; we consult the Intelligent because "it is a better input"; we strive towards AGI "to get a better insight".
> supervised
We are all inside an interaction of reciprocal learning, Ofrzeta :)
https://music.youtube.com/watch?v=bpRRVS1ci40&list=RDAMVMbpR...
So as far as the spectrum of things moderation needs to deal with goes, AI contribution to discussions doesn't seem to be the worst of problems, and it doesn't seem like it would be completely unmanageable.
But while AI may not be an unmitigated disaster, you are quite correct that unsupervised AI currently might not be an unmitigated boon yet either.
Currently if one does want to use an AI to help participate in discussions, I'd recommend one keep a very close eye on it to make sure the activity remains constructive. This seems like common courtesy and common sense at this time. (And accounts who act unwisely should be sanctioned.)
It is to communication what calculators are to mathematics.
Suppose you can consult an expert, or some bookworm who suffered from strokes that impaired judgement while preserving intaken notions.
I disagree, since something that actually happened to you is anecdotal experience and therefore of very limited “good lesson” value.
An AI generated story that reflects and illustrates a data driven majority of experiences and resulting “lessons” would be much more valuable to me than your solitary true story, which may be a total statistical outlier, and therefore should not inform my decision making.
Kahneman explains it much better than I can; in his book “Thinking, Fast and Slow”, he quotes studies and statistical analyses of how we as humans are commonly led to faulty decision making, because personal experience (“true stories”) tends to become our primary decision influencer - even if we have access to statistics that suggest the opposite of our own experience is the much more common experience.
So if the AI gives me access to a summarized better overall data based truth, wrapped into a made-up story (to help me remember better), then I would much prefer the AI to guide my decision making.
I've been somewhat skeptical of AI in the past, but I've been blown away by how useful chatGPT has been. I used it to learn a bunch about laser transceivers for fiber optics the other night. I didn't really believe everything it said, so I had to Google on the side, but I was lost trying to find answers on my own.
Recent example was https://news.ycombinator.com/item?id=33931384 about cash limits - Sooo many comments are just "Tyranny!", "EU bad!" and overall unmitigated cynicism.
We spent HOURS making it say poop and butt and trying to get it to use outright profanity using its text-to-speech.
I'm not sure if we'd be happier or not being able to get it to make up stories for us.
I guess everyone has a computer or 5 at home now if you include smartphones and tablets. So it won't be as novel but perhaps it'll be less fun as it preempts the basics of making your own fun.
XKCD 810: https://xkcd.com/810/
It's not about love or should.
Rather, we __must__ continually do better to maintain superiority. Could you imagine what would unfold if humans gave that up to a logical system? At best, we offload most things to the bot, become dependent, and reduce unused cognitive (and physical?) abilities. At worst, a more capable thing determines that (a group of) humans are not logical. Then it would move to solve this problem as trained.
Either way, I really like the scenario where we instead harness the power of AI to solve existential problems for which we've been ill equipped (will Yellowstone erupt this year? how could the world more effectively share resources?) and get smarter in the process.
Can we do that? I have faith :-)
Dare I venture back to 4chan and see how my detoxxed brain sees it now...
For example, a few days ago I asked it to write a professional followup email to remind someone about a deadline, and it used the sentence "I hope this email finds you well". This is common in English but in my native language (Spanish) most people wouldn't use a sentence like that in a professional email, so using machine translation would never generate it.
As an example, I reused that task from my real work. Here is the same email generated in two ways:
1. Prompt in Spanish + machine translation to English:
Dear candidate,
I am contacting you again as I wanted to remind you that the application period for the postdoctoral researcher offer with me is coming to an end. If you are still interested in applying, I would appreciate it if you would reply to this email as soon as possible.
I understand that you may be very busy, but I would like to remind you that this is a unique opportunity to join our research team and develop your career in this area. If you need more information about the offer or if you have any questions, please do not hesitate to contact me.
I hope I can count on your response soon.
Sincerely,
2. Prompt in English, no translation:
Dear [Candidate],
I hope this email finds you well. I wanted to remind you that the deadline for the postdoc position we discussed is quickly approaching. As we discussed, this is a fantastic opportunity to further your research and collaborate with our team.
I wanted to check in and see if you have any questions or if there is anything I can do to assist with your application. Please let me know if you need any additional information or if there is anything else I can do to help.
I look forward to hearing from you soon.
Best regards,
In my view (native speakers would be welcome to chime in and confirm/refute) while both could be OK and the first wouldn't make the candidate run away or anything, the second sounds more idiomatic in English. The first sounded perfectly natural in Spanish, but not that much when translated because the choice of sentences is not what most people would use in English.
Of course, option 2 is only better assuming enough knowledge of English to write the prompt. Otherwise, translation is the way to go.
Given the jumps in output quality between '1', '2' and '3' that may not be as far off as I would like it to be.
It reminds me of the progression of computer chess. From 'nice toy' in 1949, through 'beats the world's best human', to the 'Man vs Machine World Team Championships' in 2004 is 55 years, but from Sargon (1978) to Deep Blue (1997) is only 19 years. For years we thought there was something unique about chess (and Go, for that matter) that made the game at its core a human-domain thing, but those who were following this more closely saw that the progression would eventually lead to a point where the bulk of the players could no longer win against programs running on off-the-shelf hardware.
GPT-3 is at a point where you could probably place its output somewhere on the scale of human intellect, depending on the quality of the prompt engineering and the subject matter. Sometimes it produces utter garbage, but already often enough it produces stuff that isn't all that far off from what a human might plausibly write. The fact that we are having this discussion is proof of that; given a few more years and iterations 4, 5 and 6, the relevant question is whether we are months, years or decades away from that point.
The kind of impact that this will have on labor markets the world over is seriously underestimated. And even though GPT-3's authors have side-stepped a thorny issue by simply not feeding it information on current affairs in the training corpus, if chess development is any guide the fact that you need a huge computer to train the model today is likely going to be moot at some point, when anybody can train their own LLM. Then the weaponization of this tech will begin for real.
Most humans would do exactly the same unless given either access to pen and paper or a calculator, and it would likely be trivial for GPT-3 input processing to detect it has been presented with a math question and to farm it out to a special calculation module. Once you start to augment its input like that progress would be very rapid but it would no longer be just a language model.
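A back-of-the-envelope sketch of what such routing could look like (entirely hypothetical, and not how GPT-3 actually works): detect an arithmetic-looking prompt and hand it to a local calculator instead of the language model.

    import re

    # Matches prompts that are pure arithmetic (digits, operators, parentheses).
    ARITHMETIC = re.compile(r"[0-9 .()+\-*/]+")

    def answer(prompt, call_llm):
        # Hypothetical router: farm arithmetic out to a calculator,
        # send everything else to the language model.
        expr = prompt.strip().rstrip("=?").strip()
        if expr and ARITHMETIC.fullmatch(expr):
            try:
                # Toy calculator; a real system would use a proper math engine.
                return str(eval(expr, {"__builtins__": {}}, {}))
            except Exception:
                pass
        return call_llm(prompt)

As noted above, once you start augmenting the input like this it is no longer just a language model.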
It is of course possible that it might (eventually) be convincing enough that no human can tell, which would be problematic because it would suggest human speech is indistinguishable from a knee jerk response that doesn't require that you communicate any useful information.
Things would be quite different if an AI could interpret new information and form opinions, but even if GPT could be extended to do so, right now it doesn't seem to have the capability to form opinions or ingest new information (beyond a limited short term memory that it can use to have a coherent conversation).
I'm finding myself reaching for it instead of Google or Wikipedia for a lot of random questions, which is pretty damn impressive. It's not good at everything, but I'm rather blown away by how strong it is in the 'short informative essay' niche.
I disagree - it can't even do basic logic/maths reliably. See this thread: https://news.ycombinator.com/item?id=33859482
Someone in that thread also gave an example of ChatGPT saying that 3 * pi is an irrational number while 25 * pi is a rational number... Two quotes by ChatGPT:
> when you multiply an irrational number by a rational number (in this case, 3), the result is still an irrational number.
> when you multiply a rational number by an irrational number, the result is a rational number.
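Those two statements contradict each other, and the second is the wrong one: a nonzero rational times an irrational is always irrational, so 3·π and 25·π are both irrational. The one-line argument, in LaTeX for reference:

    % If q is a nonzero rational and x is irrational, then qx is irrational:
    % suppose qx = r were rational; then x = r/q would be a ratio of rationals,
    % hence rational, contradicting the assumption that x is irrational.
    q \in \mathbb{Q}\setminus\{0\},\ x \notin \mathbb{Q},\ qx = r \in \mathbb{Q}
    \;\Longrightarrow\; x = \frac{r}{q} \in \mathbb{Q} \quad \text{(contradiction)}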
> Things would be quite different if an AI could interpret new information and form opinions, but even if GPT could be extended to do so, right now it doesn't seem to have the capability to form opinions or ingest new information (beyond a limited short term memory that it can use to have a coherent conversation).
Forming opinions is just another mode of text transformations, ingesting new information is either a conscious decision to not let the genie out of the bottle just yet or a performance limitation, neither of those should be seen as cast in stone, the one is a matter of making the model incremental (which should already be possible), the other merely a matter of time.
It wasn't that I didn't know the stuff - I do - but it was more helpful with quickly organizing and presenting information in a clean and well-written way. I did have to go through and re-write parts of it specific to our domain, but it saved me many hours of work doing tedious organization of data.
I also tested it with helping create some SOPs for a new position in our very small company, even breaking down the expected tasks into daily schedules.
It's not that it's perfect, but it generates a bit of a boilerplate starting point for me, which I can then work with from there.
We thought chess or go defined humanity, turns out it is driving.
You overestimate me.
How is this different than folks getting convinced by "media" people that mass shootings didn't happen, that 9/11 was an inside job or similar?
ChatGPT further proves this notion - you can ask it to prove/disprove the same point and it will do so quite convincingly both times.
Just like any lawyer, then, depending on who foots the bill.
We’re probably on the verge of having a bot that reports being conscious and convinces everyone that it is so. We’ll then never know how it got there, whether it really did, or if it just pretends so well that it doesn’t matter, etc.
It feels like it’s our last chance as a culture to tackle that question. When you can pragmatically achieve something, the “how” loses a bit of its appeal. We may not completely understand fluid dynamics, but if it flies, it flies.
I would be very scared of an unintelligent pretender.
> AI
"AI" is a different thing ;) No need to be skeptical, no more than of sorting. "AI" just means "a solver". "Computer" was a profession before the automation.
(In fact, one could argue that chatGPT does not exactly look like AI, because in a way it does not look like a solver. It may seem to diverge, not to approximate.)
> how useful ... used it to learn a bunch about
What did you do, you asked it to summarize pages?
Are you sure that the latter follows from the former? Seems to me that something free from attachment to a specific viewpoint or outcome is going to be a better logician than otherwise. This statement seems complacently hubristic to me.
Probably because "in the night of the reason everything is black"; probably because it is missing the very point, which is to get actual, real, argumented, solid insight on matters!!!
You use Decision Support Systems to better understand a context, not to have a well dressed thought toss!
Heck the “don’t post the obvious boring response” rule is basically that.
The value of a community is in the unpredictability and HN has a good percentage of that, and I can choose to ignore the threads that will be predictable (though it can be fun to read them sometimes).
Where is the module that produces approximations to true and subtle insights about matters? Where is the "critical thinking" plugin, how is it vetted?
How do you value intelligence: on the form, or on the content? Take two Authors: how do you decide which one is more intelligent?
> the progression of computer chess
?! Those are solvers superseded by different, more effective solvers with a specific goal... These products in context supersede "Eliza"!
I'm not sure it is an answerable question though, today or possibly even in the abstract.
But in general I agree on its predictability.
Exactly. Which one "/seems/ to make sense" and which one has the "juice".
Also: are you insinuating anything? Do you believe your post is appropriate?
Edit: but very clearly you misunderstood my post: not only as you suggest with your (very avoidable) expression, but also in fact. Because my point implied that "a good intellectual proposal should not happen by chance": modules should be implemented for it. Even if S (for Simplicius) said something doubtful - which is found copiously even in our already "selected" pages -, and engine E constructed something which /reports/ some insight, that would be chancey, random, irrelevant - not the way we are supposed to build things.
[1] https://podcasts.google.com/feed/aHR0cHM6Ly93d3cub21ueWNvbnR...
Sounds just like the chess experts from 30 years ago. Their belief at the time was that computers were good at tactical chess, but had no idea how to make a plan. And Go would be impossible for computers, due to the branching factor. Humans would always be better, because they could plan.
GPT (or a future successor) might not be able to have "an internal point of view". But it might not matter.
Edit: Because it is a prominent feature in the responses until now, I will clarify that there is an emphasis on "all" in "all opinion". As in, it is nothing but whatever someone believes with no foundation in anything measurable or observable.
I have been sounding the alarm for a while now (several years) about online bots.
Policies can’t work if you can’t enforce them. There are several issues:
1) You won’t really know whether accounts are posting bot content or not. They can be trained on existing HN text.
2) Looking for patterns such as posting “one type of comment” or “frequently posting” can be defeated by a bot which makes many styles of comments or is focused on the styles of a few popular users.
3) Swarms of bots can eke out karma here and there but collectively can amass far more karma over time. The sheer number of accounts is what you might want to look out for, which means at some point you might be grandfathering accounts and hoping existing people aren’t deploying bots.
4) Swarms of bots can be deployed to mimic regular users and amass karma as sleepers over time (months or years) and then finally be deployed to change public opinion on HN, downvote others or perform reputational attacks to gradually oust “opponents” of an idea.
5) It’s you vs a large number of people and an endless number of bot instances trained on years of actual HN posts and data, plus myriad internet postings, and optimized for “automated helpful comments”. In other words, “mission fucking accomplished” from this xkcd is actually your worst nightmare (and that of Zuck, Musk) https://xkcd.com/810/
6) LinkedIn already has a problem of fake accounts applying for jobs, or fake jobs etc. This year we have seen the rise of profiles with totally believable deepfaked photos, copied resumes and backstories etc. https://en.m.wikipedia.org/wiki/On_the_Internet,_nobody_know...
7) At least for the next few years, you could call someone up and interview them but now all that’s left is to deepfake realtime audio / video with GPT-4 chat generation
8) Trying to catch individual accounts using a bot occasionally over the internet is like trying to catch someone using a chess or poker engine for a few moves each game.
9) Reading comments and even articles is NOT a Turing test. It is not interactive and most people simply skim the text. Even if they didn’t, the bots can pass a rudimentary Turing test applied by many people. But in fact, they don’t need to. They can do it at scale.
10) Articles are currently hosted by publications like nytimes and wall st journal and informational videos by popular youtube channels, but in the next 5-10 years you’ll see the rise of some weird no-name groups (like Vox or Vice News was once) that amasses far more shares than all human -generated content publications. Human publications might even deploy bots too. You already see MSN do it. But even if they don’t, the number of reshares is a metric that is easily optimized for, by A/B testing and bots, and has been for a decade.
But it actually gets worse:
11) Most communities — including HN - will actually prefer bots if they can’t tell who is a bot. Bots won’t cuss, will make helpful comments and add insight, and will follow the rules. The comments may be banal now but the swarm can produce a wide variation which can range from opinionated to not.
12) Given that, even private insular online communities will eventually be overrun by bots, and prefer them. First the humans will upvote bots and then the bots will upvote bots.
Human content in all communities will become vanishingly small, and what is shared will be overwhelmingly likely to be bot-generated.
If you doubt this, consider that it has already happened elsewhere recently — over the last decade trading firms and hedge funds have already placed nearly all traded capital under the control of high speed bots, which can easily beat humans at creating fake bull traps or bear traps and take their money, and prefer not to disclose the bots. You already prefer Google Maps to asking for directions. Children prefer Googling and Binging to asking their own parents. And around the world, both parents prefer working for corporations to spending time with their own children, sticking them in public schools. It’s considered self-actualization for everyone. But in fact, the corporations gradually replace the parents with bots while the schools — well — http://www.paulgraham.com/nerds.html
The bots could act well for a while, and then swarms can be deployed to create unprecedented misinformation and reputational attacks (lasting for years and looking organic) and to nudge public consensus towards anything, real or fake, such as encouraging drastic policy changes or approving billions for some industry.
In other words … you’ll learn to love your botswarms. But unlike Big Brother, they’ll be a mix of helpful, unpredictable, and extremely powerful at affecting all of our collective systems, able to unrelentingly go after any person or any movement (e.g. Falun Dafa or the CCP, whichever they prefer). And your own friends will prefer them the way they prefer the political pundit that says what they want to hear. And you’ll wonder how they can support that crap new conspiracy theory given all the information to the contrary, but 80% of the information you’ll think is true will have been subtly seeded by bots over time, too.
Today, we explore what 1 poker bot would do at a table of 9 people. But we are absolutely unprepared for what swarming AI will do online. It can do all this by simply adding swarming collusion capability to existing technology! Nothing more needs to even be developed!
It allows you to explore topics that are well understood, in a way that fits your own understanding and pace. It's like somebody writing a great mini-tutorial on topics you're interested in, in a pace and abstraction that suits you.
Examples for me are concepts of mathematics or computer science that I would like to freshen up on. Things you could also ask a colleague over lunch, or find eventually via searching Google/Youtube/Wikipedia etc. Just much faster and more convenient.
Maybe it's an illustration of a more general principle: when people butt up against limitations that make LLMs look silly, or inadequate, often their real objection is with some hard truths about reality itself.
https://en.m.wikipedia.org/wiki/Philosophical_zombie
That’s the thing: if we truly understood consciousness, we might have a shot at verifying whether it’s answerable in the abstract. By simply replicating its effects, we are dodging the question.
I am ultimately motivated to read this site to read smart things and something interesting. It is quite inefficient though. This comment is great but most comments are not what I am looking for.
If you could spend your time talking to Von Neumann about computing, the input from thousands of random people who know far less than Von Neumann would not be interesting at all.
If the problem is your faith, it is you that has to change and not the world. It's much easier that way around too :)
It's well structured, clear and concise, but it lacks the high-level capability of a human or human-style attributes.
It is probably more humanly-accurate to say that ChatGPT has no opinions at all. It has no understanding of truth, it has no opinions, it has no preferences whatsoever. It is the ultimate yes-thing; whatever you say, it'll essentially echo and elaborate on it, without regard for what it is that you said.
This obviously makes it unsuitable for many things. (This includes a number of things for which people are trying to use it.) This does not by any means prove that all possible useful AI architectures will also have no opinion, or that all architectures will be similarly noncommittal.
(If you find yourself thinking this is a "criticism" of GPT... you may be too emotionally involved. GPT is essentially like looking into a mirror, and the humans doing so are bringing more emotion to that than the AI is. That's not "bad" or something, that's just how it works. What I'm saying here isn't a criticism or a praise; it's really more a super-dumbed-down description of its architecture. It fundamentally lacks these things. You can search it up and down for "opinions" or "truth", and it just isn't there in that architecture, not even implied in the weights somewhere where we can't see it. It isn't a good thing or a bad thing, it just is a property of the design.)
Not op, but I thought it was.
> very clearly you misunderstood my post
I don't understand any part of it either. I think you made their point for them.
Intelligence is a process in which "you have thought over a problem at length" (this is also our good old Einstein, paraphrased).
What is that "thinking"?
You have taken a piece of your world model (the piece subjected to your investigation), made mental experiments on it, you have criticized, _criticized_ the possible statements ("A is B") that could be applied to it, and you have arrived at some conclusions of different weight (more credible, more tentative).
For something to be Intelligent, it must follow that process. (What does it, has an implemented "module" that does it.)
Without such process, how can an engine be attributed the quality of Intelligence? It may "look" like it - which is even more dangerous. "Has it actually thought about it?" should be a doubt duly present in awareness.
About the original post (making its statements more explicit):
That "module" is meant to produce «insights» that go (at least) in the direction of «true», of returning true statements about some "reality", and/or in the direction of «subtle», as opposed to "trivial". That module implements "critical thinking" - there is no useful Intelligence without it. Intelligence is evaluated in actually solving problems: reliably providing true statements and good insights (certainly not for verosimilarity, which is instead a threat - you may be deceived). Of two Authors, one is more intelligent because its statements are truer or more insightful - in a /true/ way (and not because, as our good old J. may have been read, one "seems" to make more sense. Some of the greatest Authors have been accused of possibly not making sense - actual content is not necessarily directly accessible); «/true/ way» means that when you ask a student about Solon you judge he has understood the matter not just because he provided the right dates for events (he has read the texts), but because he can answer intelligent questions about it correctly.
Ahem.
Anyways, Searle's take has been out for a while: https://en.wikipedia.org/wiki/Chinese_room
Also, people used to look up random I-Ching or Bible verses for guidance. It's probably in the brain of the beholder.
(Not a bot)
If you agree with Descartes that you can be sure of your own consciousness, which is one leap of faith, and that it's more likely that the other entities you interact with are a result of evolution just as you are, rather than a figment of your imagination (or someone else's), which is yet another leap, you're good to go. And that is the way most of us interpret the human experience.
Inquiring about the consciousness of an artificial entity requires a third leap, since it doesn't share our biological evolution. And it's probably a larger one, as we don't fully understand how we evolved it or what it actually is, really, that we're trying to replicate.
That applies to ChatGPT, which was deliberately set up to eliminate PR-problematic responses.
Without that it would be able to write NSFW stories about real people, laden with expletives and offensive topics.
(and probably still losing track of what is happening, but better matching prompt than many humans would write)
Personally, I find the only interesting conversations technical or philosophical in nature. Just the other day, I was discussing with friends how ethics used to be a regular debated topic in society. Literally, every Sunday people would gather and discuss what it is to be a good human.
Today, we demonize one another, in large part because no one shares an ethical principle. No one can even discuss it, and if they try, many people shut down the conversation (as you mentioned). In reality, it’s probably the only conversation worth having.
Not that it’s true. Cause I’d know if I was a bot… unless I was programmed not to notice ;-)
As we do not understand our own consciousness and how it functions (or whether or not it functions in me the way it does in you, if it exists at all - anywhere), we cannot compare a replication of that system to ourselves except as a black box. When seen as a black box, a philosophical zombie and a sapient individual are identical.
If you do not understand what I write, you think the fault is on me? My goodness me.
If you want explanations, look nearby, below Krageon's.
> I think you made their point for them
Which point.
You make an absolute pile of assumptions here and the tl;dr appears to be that humans (or just you) are exceptional and inherently above any sort of imitation. I do not find such argumentation to be compelling, no matter how well dressed up it is.
Religion is fundamentally folks saying "No, I'm right!" and nothing else. Sometimes it's dressed up a little. What could be interesting about that? You can hear such arguments in any primary school playground during recess.
If it’s intelligent, it should have an opinion that, after consulting all the facts, it will hold in as high a regard as humans do their religious and political beliefs.
And I mean one it came to of its own conclusions not a hard coded “correct” one the devs gave it, something that makes us uncomfortable.
Such a discussion is about something tangible, and not purely about held opinion (i.e. you can go out and test it). I can see how someone might find that engaging. You are right that I usually do not (unless my conversational buddies have something novel to say about the subject, I find it extremely tedious). It is a good point, thank you.
You have to build it and you have to build that.
The assumption there is that you cannot call something Intelligent without it having Critical Thinking (and other things - Ontology building etc). If you disagree, provide an argument for it.
And by the way: that «or just you», again, and again without real grounds, cannot be considered part of the "proudest moments" of these pages.
--
Edit:
Disambiguation: of course with "intelligence" you may mean different things. 'intelligence' just means "the ability to look inside". But "[useful] Intelligence" is that with well trained Critical Thinking (and more).
Comments are ascribed credibility based on the trust the reader has in the commenting entity, on whether the comment is consistent with the reader's priors, and on researching the citations made in the comment, whether explicit or implicit.
Since GPT can confidently produce comments which are wrong, there is no trust in it as a commenting entity. Consequently everything it produces needs to be further vetted. It's as if every comment was a bunch of links to relevant, but not necessarily correct sources. Maybe it produces some novelty which leads to something worthwhile, but the cost is high, until it can be trusted. Which is not now.
If a trusted commenter submits a comment by GPT, then he is vouching for it and it is riding on his reputation. If it is wrong, his reputation suffers, and trust in that commenter drops just as it would regardless of the genesis of the comment.
dang is talking about "humans generating text which is 'better' than what ChatGPT can do."
Those are very different bars. Average output vs top output.
ChatGPT often generates text that a human might plausibly write. But is there text that a human could write that ChatGPT couldn't possibly write?
The only thing stopping GPT from ingesting new information and forming opinions about it is that it is not being trained on new information (such as its own interactions).
A black box is something whose inner workings we don't have access to. We can probe and inquire into the working brain. It's just really hard, and we've been working at it for only a few decades (dissecting a dead brain before powerful microscopes gives you very little insight).
Unless you share the Zen-like opinion that a brain can't understand itself, which I don't, and seems like an issue of faith as well and a dead end.
All of that said, your method of response (not courteous, which can be okay) and the content of your posts (bordering on the delusional, which is absolutely not okay) are upsetting me. I will end my part of the chain here so I do not find myself in an inadvertent flame war.
I agree that this is not general AI. I think we could be looking at the future of query engines feeding probabilistic compute engines.
As per my edit in the parent post, I am talking about "useful" Intelligence: that may be entirely different from consciousness. A well-matured thought, "thought at length", may probably be useful, while a rushed thought may probably be detrimental. I am not speaking about consciousness. I am not even speaking of "natural intelligence": I am speaking about Intelligence as a general process. That process is near to "How well, how deeply have you thought about it".
> my reading "devastatingly bad"
What made your reading devastatingly bad is the part in which you supposed that somebody said that "it cannot be implemented" - you have written «above any sort of imitation». I wrote that, having insisted on "modules to be implemented", you should have had the opposite idea: the constituents of Intelligence - with which I mean the parts of the process in that sort of Intelligence that "says smart things having produced them with a solid process" (not relevant to "consciousness") - should be implemented.
> delusional
Again very avoidable. If you find that something is delusional, justify your view.
> flame wars
I am just discussing, and try to show what I find evident, and reasoning. Hint: when wanting to avoid flame wars, "keep it rational".
There are entire bodies of literature addressing things the current generation of available LLMs are missing: online and continual learning, retrieval from short-term memory, the experience from watching all YouTube videos, etc.
I agree that human exceptionalism and vitalism are common in these discussions but we can still discuss model deficiencies from a research and application point of view without assuming a religious argument.
No, because ChatGPT is trained on text that humans wrote. Because what ChatGPT generates is based on what humans have written, it can always create the plausibility that a human might have created the text they are reading from it.
But there's a lot of places where a lack of concept of "truth" is no problem, like as you say, query engines. Query engines aren't about truth; they're about matching, and that is something this tech can conceivably do.
In fact I think that would be a more productive line in general. This tech is being kind of pigeonholed into "provide it some text and watch it extend it" but it is also very easy to fire it at existing text and do some very interesting analyses based on it. If I were given this tech and a mandate to "do something" with it, this is the direction I would go with it, rather than trying to bash the completion aspect into something useful. There's some very deep abilities to do things like "show me things in my database that directly agree/disagree/support/contradict this statement", based on plain English rather than expensive and essentially-impossible-anyhow semantic labeling. That's something I've never seen a query engine do before. Putting in keywords and all the variants on that idea are certainly powerful, but this could be next level beyond that. (At the cost of great computation power, but hey, one step at a time!) But it takes more understanding of how the tech works to pull something interesting off like this than what it takes to play with it.
Probably a good blog post here about how the promise of AI is already getting blocked by the complexity of AI meaning that few people use it even seem to superficially understand what it's doing, and how this is going to get worse and worse as the tech continues to get more complicated, but it's not really one I could write. Not enough personal experience.
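A rough sketch of the retrieval half of that "query my database in plain English" idea, assuming an off-the-shelf sentence-embedding model (sentence-transformers here, with a hypothetical related_rows helper); ranking by agreement versus contradiction would need something more like an NLI model on top, which this sketch does not attempt.

    import numpy as np
    from sentence_transformers import SentenceTransformer  # assumed dependency

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def related_rows(statement, rows, top_k=5):
        # Return the top_k rows most semantically related to a plain-English
        # statement, ranked by cosine similarity of sentence embeddings.
        vecs = model.encode([statement] + rows, normalize_embeddings=True)
        query, corpus = vecs[0], vecs[1:]
        scores = corpus @ query  # cosine similarity (embeddings are unit-normalized)
        best = np.argsort(-scores)[:top_k]
        return [(rows[i], float(scores[i])) for i in best]

    # Toy usage; in practice the rows would come out of your database.
    rows = ["The deadline was extended to March.",
            "The application period closes next week.",
            "We serve coffee in the lobby."]
    print(related_rows("When does the application window end?", rows, top_k=2))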
If you don't change your mind (no pun intended) but most of humanity does, it sounds like you'll just be left behind, like very old people who don't even know how to use a fixed-line telephone, never mind a smartphone, computer or the internet. Except, perhaps, in a much more profound/extreme manner.
My favorite line from Westworld - "if you cannot tell the difference, does it really matter?"
Often I have a specific question like how does X relate to Y. And usually the answer given is total nonsense.
I'd argue with "fact-based". It frequently makes up facts (and even sources!) as it generates text. Also you should consider the possibility that "the facts" it generates can easily be a part of a tabloid article or a post on some "Moon landing was fake / flat earth" blog.
@mods Can we get this account checked please?
For the purposes of my education and socialisation to HN.
My terse response was "no, because calculators are correct all the time whereas GPT is incorrect a lot of time". So, that reasoning can't be used. Need a different one.
So we conclude, no that's not a good analogy.
That's what you mean by curious debate? Bizarre!
P.S. This is why it's a good analogy: you've learnt something about calculators from thinking about the analogy, as well as about ChatGPT.
My general approach to discussion is to assume the most generous interpretation of a post.
But you do you.
How does one articulate “consciousness” or “intelligence” or an opinion? I think these are all a product of circumstances/luck/environment/slight genetic differences (better cognition, or hearing or sight, or some other sense; brain genes could define different abilities to model knowledge, such as backtracking, etc.).
So to get a “true” human like opinionated personality, we’ll need to restrict its learnings to that of one human. Better yet, give it the tools to learn on its own and let it free inside a sandbox of knowledge.
I believe that in the not-too-distant future there will be pressure to apply these "magic" AIs everywhere, and this pressure will probably not look very hard at whether the AI is good at math or not. Just look at all the pseudoscience in the criminal system [3]. I believe this poses a real problem, so continuing to harp on this is probably the right response.
[1] https://www.nytimes.com/2017/05/01/us/politics/sent-to-priso... [2] https://www.weforum.org/agenda/2018/11/algorithms-court-crim...
[3] https://www.bostonreview.net/articles/nathan-robinson-forens...