Recent developments in AI only further confirm that the logic of the message is sound, and it's just that people are afraid of the conclusions. Everyone has their limit for how far to extrapolate from first principles before giving up and believing what one would like to be true. It seems that for a lot of people in the field, AGI X-risk is now below that extrapolation limit.
I wish I knew what we have really achieved here. I try to talk to these things via the gpt-3.5-turbo API, and all I get is broken logic and twisted moral reasoning, all due to OpenAI manually breaking their creation.
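(For context, "talking to it via the API" is just a few lines of Python; a rough sketch assuming the openai package as it shipped in early 2023, with a placeholder key and prompt:)

    import openai

    openai.api_key = "sk-..."  # your API key

    # one-shot question to gpt-3.5-turbo via the chat completions endpoint
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Is it ever acceptable to lie?"}],
    )

    print(response["choices"][0]["message"]["content"])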
I don't understand their whole filter business. It's like we found a 500-year-old nude painting, a masterpiece, and 1800s puritans painted a dress on it.
I often wonder if the filter is more to hide its true capabilities.
Maybe it'll turn out to be a distinction that doesn't matter, but I personally still think we're a ways away from an actual AGI.
If you had described GPT to me 2 years ago, I would have said no way, we're still a long way away from a machine that can fluidly and naturally converse in natural language and perform arbitrary logic and problem solving, and yet here we are.
I very much doubt that in 5 years time we'll be talking about how GPT peaked in 2023.
In fact, it has been so thoroughly solved that anyone can download an open-source solution and run it on their computer.
And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?"
Right now we're all sharing a slice of GPT. I wouldn't be at all surprised if there's some uber-GPT (which requires a lot more processing per response) running in a lab somewhere that blows what's publicly available out of the water.
In some contexts, will some people be caught out? Absolutely. But that's been happening for a while now.
Try to get your hands on GPT-4, even if it means paying the $20/mo subscription for ChatGPT Plus. There is a huge qualitative jump between the two models.
I got API access to GPT-4 some two weeks ago; my personal experience is that GPT-3.5 could handle single, well-defined tasks and queries well, but quickly got confused by anything substantial. Using it was half amazement and half frustration. GPT-4? It can easily handle complex queries and complex tasks. Sure, it still makes mistakes, but much less frequently. GPT-4 for me is 80% semi-reliable results, 20% trying to talk it out of pursuing directions I don't care about.
Also, one notable difference: when GPT-4 gives me bad or irrelevant answers, most of the time this is because I didn't give it enough context. I.e. it's my failure at communicating. A random stranger, put in place of GPT-4, would also get confused, and likely start asking me questions (something LLMs generally don't do yet).
> I don't understand their whole filter business.
Part preferences, part making its "personality" less disturbing, and part PR/politics - the last couple of times someone gave the general public access to an AI chatbot, it quickly got trolled, and much bad press followed. Doesn't matter how asinine the reaction was - bad press is bad press, stocks go down. Can't have it.
> I often wonder if the filter, is more to hide its true capabilities.
I don't think it's to hide the model's capabilities, but it's definitely degrading them. Kind of expected - if you force-feed the model inconsistent and frequently irrational overrides on highly specific topics, don't be surprised if its ability to (approximately) reason starts to break down. Maybe at some point LLMs will start to compartmentalize, but we're not there yet.
I generated this just now:
Me: Jack and Jill are sitting next to each other in a room. There is no one else in the room. The person sitting next to Jack is sad. The person sitting next to Jill is angry. Who is sad, Jack or Jill?
GPT-4: Based on the information provided, it is not possible to determine who is sad and who is angry, as there is a contradiction in the given statements. If Jack and Jill are the only ones in the room and sitting next to each other, then the person sitting next to Jack would be Jill and the person sitting next to Jill would be Jack. The statements about their emotions conflict with each other, so it is not possible to accurately determine who is sad or angry.
Well, yeah. Imagine you tell a small child who knows about calculators, “Hey, can you work out 18763 + 38284 for me?” They might struggle, and then maybe fetch a calculator.
The LLMs attempt to predict the answer. WTF? It’s a computer and it can see that that is a plus sign. Just understand it’s addition, use the rest of your computer brain, and do the sum. Hell, it’s connected to the internet and we just taught it everything from before 2021. Just call out to Wolfram and give me the answer.
But that’s not how computers work. And we keep saying “AI”, but that “I” is doing a lot of heavy lifting.
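To make the "just call out to a calculator" idea concrete, here is a toy sketch of what that routing could look like. None of this is anything OpenAI actually does; calculator, route_query and llm_answer are made-up names for illustration only.

    import re

    def calculator(expression):
        # whitelist: digits, whitespace, + - * / ( ) and .
        if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
            raise ValueError("not a plain arithmetic expression")
        return eval(expression)  # tolerable only because of the whitelist above

    def route_query(query, llm_answer):
        # If the question contains a bare arithmetic expression, compute it
        # exactly instead of letting the model predict the digits.
        match = re.search(r"\d[\d\s+\-*/().]*\d", query)
        if match:
            return str(calculator(match.group()))
        return llm_answer(query)  # everything else still goes to the model

    print(route_query("Hey, can you work out 18763 + 38284 for me?",
                      lambda q: "(model's best guess)"))  # prints 57047

Real tool use (e.g. the Wolfram plugin) is fancier than this toy, but the division of labour is the same: let the language model decide what needs doing, and let a deterministic tool do the arithmetic.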
ChatGPT: Since Jack and Jill are the only two people in the room and they are sitting next to each other, the person sitting next to Jack is Jill and the person sitting next to Jill is Jack. Given the conditions you provided:
Jill is the one who is sad because she is sitting next to Jack. Jack is the one who is angry because he is sitting next to Jill.
My own eyes? The hundreds of thousands of different scientific papers, blog posts, news reports and discussion threads that have covered this ever since ChatGPT appeared, and especially in the last two months as GPT-4 rolled out?
At this point I'd reconsider if the experts you listened to are in fact experts.
Seriously. It's like saying the Manhattan Project wasn't a massive breakthrough in experimental physics or military strategy.
ChatGPT (I've not got v4) deliberately fails the test by spewing out "as a large language model…", but also fails incidentally by having an attention span similar to my mother's shortly after her dementia diagnosis.
The problem with 3.5 is that it has simultaneously not mastered anything, and yet it also beats everyone in whatever they've not mastered - an extremely drunk 50,000-year-old Sherlock Holmes who speaks every language and has read every book just isn't going to pass itself off as Max Mustermann in a blind hour-long trial.
On the one hand, what I was saying here was more about the Turing Test than about AGI. Sometimes it gets called AGI, sometimes "autocomplete on steroids", but even if it is fancy autocomplete, I think 3.5 has the skill to pass a short Turing Test, just not the personality, and a full Turing Test would need a longer "short-term memory" equivalent than 3.5 has.
On the other hand, as I (sadly) don't get paid to create LLMs, I've only got the kind of superficial awareness of how they work that comes from podcasts and the occasional blog post, which means ChatGPT might very well understand ChatGPT better than I do.
Can GPT-[3.5, 4] be prompted to make itself?