zlacker

[parent] [thread] 185 comments
1. nickle+(OP)[view] [source] 2024-05-15 14:48:28
It is easy to point to loopy theories around superalignment, p(doom), etc. But you don't have to be hopped up on sci-fi to oppose something like GPT-4o. Low-latency response time is fine. The faking of emotions and overt references to Her (along with the suspiciously-timed relaxation of pornographic generations) are not fine. I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users, using the exact same logic as tobacco companies.
replies(16): >>simonw+t4 >>bambax+N4 >>Toucan+V4 >>fullsh+W4 >>lghh+x7 >>bnralt+z7 >>llm_tr+B7 >>yinser+89 >>qarl+9a >>sebzim+Oa >>GaggiX+2e >>shmatt+Xe >>abeppu+Vg >>vasco+Gk >>poulpy+CJ >>vunder+wI1
2. simonw+t4[view] [source] 2024-05-15 15:08:30
>>nickle+(OP)
I don't understand what you're referring to with that tobacco reference.
replies(4): >>Chicag+35 >>atlasu+45 >>renewe+65 >>nickle+L5
3. bambax+N4[view] [source] 2024-05-15 15:09:05
>>nickle+(OP)
Oh well... It seems at least one of those two things has to be true: either AGI is so far away that "alignment" (whatever it means) is unnecessary; or, as you suggest, Altman et al. have decided it's a hindrance to commercial success.

I tend to believe the former, but it's possible those two things are true at the same time.

replies(2): >>nickle+e6 >>Liquix+D9
4. Toucan+V4[view] [source] 2024-05-15 15:09:50
>>nickle+(OP)
The use of LLMs as pseudo-friends or girlfriends for people as a market solution for loneliness is so incredibly sad and dystopian. Genuinely one of the most unsettling goddamn things I've seen gain traction since I've been in this industry.

And so many otherwise perfectly normal products are now employing addiction mechanics to drive engagement, but somehow this one is just even further over the line for me in a way I can't articulate. I'm so sick of startups taking advantage of people. So, so fucking gross.

replies(2): >>swatco+qd >>Intral+fw1
5. fullsh+W4[view] [source] 2024-05-15 15:09:52
>>nickle+(OP)
I'll just point to the theory that they didn't want to work for a megacorp creating tools for other megacorps (or worse) and actually believed in OpenAI's (initial) mission to further humanity. The tools are going to be used by deep-pocketed entities for their purposes; the compute resources necessary require that to be the case for the foreseeable future.
◧◩
6. Chicag+35[view] [source] [discussion] 2024-05-15 15:10:23
>>simonw+t4
Not the parent comment, but I think he means something like "we know folks will be addicted to this pseudo-person and that is a good thing cause it makes our product valuable", akin to reports that tobacco companies knew the harms and addictive nature of their products and kept selling them nonetheless. (But I'm speculating as to the parent's actual intent)
replies(1): >>rvnx+o7
◧◩
7. atlasu+45[view] [source] [discussion] 2024-05-15 15:10:23
>>simonw+t4
I read it as the economics of tobacco (and alcohol and a few other 'vice' industries): there will invariably be superusers who get addicted and produce the most economic value for companies, even while consuming an actively harmful product.
◧◩
8. renewe+65[view] [source] [discussion] 2024-05-15 15:10:26
>>simonw+t4
Purposely making an addiction machine, most likely.
replies(1): >>incaho+Bm
◧◩
9. nickle+L5[view] [source] [discussion] 2024-05-15 15:13:49
>>simonw+t4
https://nida.nih.gov/publications/research-reports/tobacco-n...

  A larger proportion of people diagnosed with mental disorders report cigarette smoking compared with people without mental disorders. Among US adults in 2019, the percentage who reported past-month cigarette smoking was 1.8 times higher for those with any past-year mental illness than those without (28.2% vs. 15.8%). Smoking rates are particularly high among people with serious mental illness (those who demonstrate greater functional impairment). While estimates vary, as many as 70-85% of people with schizophrenia and as many as 50-70% of people with bipolar disorder smoke.
I am accusing OpenAI (and Philip Morris) of knowingly profiting off mental illness by providing unhealthy solutions to loneliness, stress, etc.
replies(1): >>djohns+X6
◧◩
10. nickle+e6[view] [source] [discussion] 2024-05-15 15:16:05
>>bambax+N4
Specifically I am supposing the superalignment people were generally more concerned about AI safety and ethics than Altman/etc. I don't think this has anything to do with superalignment itself.
◧◩◪
11. djohns+X6[view] [source] [discussion] 2024-05-15 15:19:30
>>nickle+L5
I’ve also heard of studies, admittedly I don’t have the link on hand, that schizophrenic patients benefit from smoking. When did big tobacco actively target them? How do you know these people don’t naturally seek out cigarettes as a means to manage some of their symptoms?
replies(1): >>nickle+K9
◧◩◪
12. rvnx+o7[view] [source] [discussion] 2024-05-15 15:21:29
>>Chicag+35
I miss Sydney :’
replies(1): >>all2+zb
13. lghh+x7[view] [source] 2024-05-15 15:22:20
>>nickle+(OP)
> along with the suspiciously-timed relaxation of pornographic generations

Has ChatGPT's censoring (a loaded term, but idk what else to use) been relaxed with GPT-4o? I have not tested it because I wouldn't have expected them to do it. Does this also extend to other types of censorship or filtering they do? If not, it feels very intentional in the way you're alluding to.

replies(1): >>ziml77+jc
14. bnralt+z7[view] [source] 2024-05-15 15:22:23
>>nickle+(OP)
One could also say that therapists prey on lonely people who pay them to talk to them and seem like they’re genuinely interested in them, when the therapist wouldn’t bother having a connection with these people once they stop paying. Which I suppose is true from a certain point of view. But from another point of view, sometimes people feel like they don’t have close friends or family to talk to and need something, even if it’s not a genuine love or friendship.
replies(6): >>kettro+n8 >>smt88+1b >>tech_k+6d >>detour+Jg >>hn_thr+9n >>koe123+Fa2
15. llm_tr+B7[view] [source] 2024-05-15 15:22:29
>>nickle+(OP)
>dangerous for mentally unwell users

It's not our job to make the world safe for fundamentally unsafe people.

replies(6): >>kettro+B8 >>limpbi+79 >>lantry+r9 >>itisha+F9 >>nickle+3a >>poulpy+aJ
◧◩
16. kettro+n8[view] [source] [discussion] 2024-05-15 15:25:12
>>bnralt+z7
This is implying that therapy is nothing more than someone to talk to; if that’s your experience with therapy, then you should get another therapist.
replies(6): >>baobab+T9 >>bnralt+ia >>naaski+Pa >>fatbir+Rc >>solard+Hg >>morale+sD
◧◩
17. kettro+B8[view] [source] [discussion] 2024-05-15 15:26:39
>>llm_tr+B7
I would argue that it is society’s job to care for its most vulnerable.
replies(2): >>bongod+W8 >>llm_tr+p9
◧◩◪
18. bongod+W8[view] [source] [discussion] 2024-05-15 15:28:58
>>kettro+B8
That doesn't mean we pad all the rooms or ban peanuts. Yes, we should care for them but not at the detriment of the other 99%.
replies(1): >>earthl+hb
◧◩
19. limpbi+79[view] [source] [discussion] 2024-05-15 15:29:45
>>llm_tr+B7
Okay this is a weird philosophy to have lol
replies(1): >>anthon+G9
20. yinser+89[view] [source] 2024-05-15 15:29:46
>>nickle+(OP)
What a wild accusation for someone light years away from the board room.
replies(1): >>nickle+zd
◧◩◪
21. llm_tr+p9[view] [source] [discussion] 2024-05-15 15:30:39
>>kettro+B8
Yes, not openai's.
replies(3): >>woopsn+of >>dns_sn+5i >>swat53+1B
◧◩
22. lantry+r9[view] [source] [discussion] 2024-05-15 15:30:50
>>llm_tr+B7
This is literally everyone's job. It's the whole point of society. Everyone is "fundamentally unsafe", and we all rely on each other.
replies(2): >>kiba+la >>clayto+Eh
◧◩
23. Liquix+D9[view] [source] [discussion] 2024-05-15 15:31:48
>>bambax+N4
or C) the first AGI was/is being/will be carried away by men with earpieces to a heavily fortified underground compound. any government - let alone the US government - isn't going to twiddle their thumbs while tech that will change human history is released to the unwitting public. at best they'll want to prepare for and control the narrative surrounding the event, at worst AGI will be weaponized against humans before the majority are aware it exists.

if OAI is motivated by money, uncle sam can name any figure to buy them out. if OAI is motivated by power, it becomes "a matter of national security" and they do what the gov tells them. more likely the two parties' interests are aligned and the public will hear about it when It's Time™. not saying C) is what's happening - A) seems likely too - but it's a real possibility

replies(1): >>wins32+te
◧◩
24. itisha+F9[view] [source] [discussion] 2024-05-15 15:31:52
>>llm_tr+B7
I'm guessing your work isn't sanitation either. Do you throw your trash straight on the ground?

Some things are everyone's responsibility if we want to live in a pleasant society.

◧◩◪
25. anthon+G9[view] [source] [discussion] 2024-05-15 15:31:54
>>limpbi+79
no it isn't, it's how everything in society currently operates. We put dangerous people in jail away from everyone else
replies(2): >>itisha+1a >>diggin+8l
◧◩◪◨
26. nickle+K9[view] [source] [discussion] 2024-05-15 15:32:15
>>djohns+X6
I have schizophrenia. I have struggled with nicotine addiction since high school. In 2015 I had three heart attacks in a month, even though I was only 28 and seemed physically fit. Two weeks ago I had a minor stroke.

It is not just me and it is not just the smoking: https://www.cambridge.org/core/blog/2020/08/19/physically-he...

  We have known for many years that people who suffer from schizophrenia die younger than expected, as much as 20 years younger than the general population. This appears unfair, and it was the inspiration of this work. Most people thought that this added risk of death was mostly due to the higher prevalence in schizophrenia of smoking, obesity and to other lifestyle differences.

  For this reason, we recruited 40 patients with schizophrenia and an equal number of healthy controls, and scanned their hearts using a state-of-the-art approach, called cardiac magnetic resonance. This was performed at the state-of-the-art Robert Steiner MRI unit.

  [...] Surprisingly, in our study we found that even after matching patients and healthy controls for age, sex, ethnicity and body mass index (BMI, deriving from height and weight); and after excluding any participants with any medical conditions, and other risk factors for heart disease, people with schizophrenia show hearts that are smaller and chunkier than controls. These changes are similar to those found in aging.
I was able to move to the gum and patch, but I am very high-functioning. People sicker than me have fewer options. Smoking is very bad for everyone, including people with schizophrenia. We do not in any way benefit from terrible heart/lung damage in exchange for minor cognitive clarity - our hearts need all the help they can get. I have no tolerance for this sort of ignorant paternalism, and I'm ignoring your bad-faith question about "actively target them" because that's not what I said.
replies(1): >>djohns+aA2
◧◩◪
27. baobab+T9[view] [source] [discussion] 2024-05-15 15:33:17
>>kettro+n8
Evidence points in this direction, though.

Different methods of therapy appear to be equally effective despite having theoretical foundations which are conflicting with each other. The common aspect between different therapies seems to be "having someone to talk to", so I'm inclined to believe that really is what's behind the success.

replies(6): >>pphysc+Dc >>pas+Sd >>cbsmit+Wd >>pdabba+De >>burnte+fi >>lo_zam+kQ
◧◩◪◨
28. itisha+1a[view] [source] [discussion] 2024-05-15 15:34:14
>>anthon+G9
The crime of the "dangerous people" in OP's statement was loneliness and suggestibility.
replies(1): >>baobab+Sa
◧◩
29. nickle+3a[view] [source] [discussion] 2024-05-15 15:34:22
>>llm_tr+B7
"fundamentally unsafe people" is probably the grossest thing I've read on here in years.
30. qarl+9a[view] [source] 2024-05-15 15:34:57
>>nickle+(OP)
> The faking of emotions

HEH. In previous versions, when it told jokes, were those fake jokes?

replies(1): >>awkwar+Vc
◧◩◪
31. bnralt+ia[view] [source] [discussion] 2024-05-15 15:35:37
>>kettro+n8
It’s implying that this is the case for many people, not all. Which it is, in my experience. Particularly since the advice you gave:

> then you should get another therapist

Seems to be fairly ubiquitous. “Find a therapist you like”/“shop around”/etc. leads a lot of people to find people who will tell them what they want to hear. Sometimes what people want to hear is how to practice CBT - but in that case, such people are probably going to be using AI to work on CBT.

replies(1): >>helboi+Km
◧◩◪
32. kiba+la[view] [source] [discussion] 2024-05-15 15:35:44
>>lantry+r9
We may not be responsible for people's behaviors, but it's certainly not going to get better if nobody does anything about it.
33. sebzim+Oa[view] [source] 2024-05-15 15:37:46
>>nickle+(OP)
>I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users

Isn't it much more likely that they are just trying to make a product that people want to use?

Even Tobacco companies don't go out of their way to give people cancer.

replies(2): >>tallda+ub >>smt88+6f
◧◩◪
34. naaski+Pa[view] [source] [discussion] 2024-05-15 15:37:47
>>kettro+n8
I think the preying part of therapy is that there's just no defined stop condition. There's no such thing as "healthy" in mental health. You get chemo until you go into remission or you die. You take blood pressure meds until you have a better lifestyle and body composition and don't need them anymore, etc. There's no analogue for "you're healthy now, go away so I can help others", and so therapy goes on forever until the patient stops for whatever reason.
replies(3): >>cbsmit+Sb >>jodrel+cj >>fatbir+x71
◧◩◪◨⬒
35. baobab+Sa[view] [source] [discussion] 2024-05-15 15:37:58
>>itisha+1a
And it's not OpenAI's job to safetify the world for gullible loners.
replies(1): >>itisha+Zb
◧◩
36. smt88+1b[view] [source] [discussion] 2024-05-15 15:38:29
>>bnralt+z7
Therapists are educated and trained to help alleviate mental-health issues, and their licenses can be revoked for malpractice. Their livelihood partially depends on ethics and honest effort.

None of those safeguards are in place for AI companies.

replies(1): >>graphe+jf
◧◩◪◨
37. earthl+hb[view] [source] [discussion] 2024-05-15 15:39:38
>>bongod+W8
Well, conveniently, this is benefitting the 1% much more than the 99%
replies(2): >>llm_tr+Jb >>bongod+hp
◧◩
38. tallda+ub[view] [source] [discussion] 2024-05-15 15:40:47
>>sebzim+Oa
But tobacco companies are still complicit in distributing addictive carcinogens to people even if only in trace amounts. The same could be said about predatory business models/products.
◧◩◪◨
39. all2+zb[view] [source] [discussion] 2024-05-15 15:41:01
>>rvnx+o7
I'm confused. Context?
replies(2): >>scroll+Lc >>rvnx+pd2
◧◩◪◨⬒
40. llm_tr+Jb[view] [source] [discussion] 2024-05-15 15:41:49
>>earthl+hb
And a nanny state benefits a different 1% much more than the 99%.
replies(1): >>fipar+9g
◧◩◪◨
41. cbsmit+Sb[view] [source] [discussion] 2024-05-15 15:42:35
>>naaski+Pa
> I think the preying part of therapy is that there's just no defined stop condition.

There's no defined stop point for physical development either... Top performing athletes still have trainers, and nobody sees that as a problem. If it's mental development though, it must have a stop point?

replies(3): >>detour+7i >>naaski+jm >>renewi+Yx
◧◩◪◨⬒⬓
42. itisha+Zb[view] [source] [discussion] 2024-05-15 15:43:10
>>baobab+Sa
Who else can? OpenAI makes the tool.

Are you suggesting we need government intervention or just saying "damn the consequences"?

replies(1): >>baobab+ud
◧◩
43. ziml77+jc[view] [source] [discussion] 2024-05-15 15:44:21
>>lghh+x7
I don't see anything that says they've changed their policies yet. Just that they're looking into it. I also tested 4o and it still gives me a content policy warning for NSFW requests.
replies(1): >>nickle+Td
◧◩◪◨
44. pphysc+Dc[view] [source] [discussion] 2024-05-15 15:45:45
>>baobab+T9
Having someone to talk to, who is somewhat emotionally intelligent, who doesn't have strong biases against you, and so on...

If you are fortunate, you have people like that in your immediate circle, but increasingly few people do.

replies(1): >>Americ+Rd
◧◩◪◨⬒
45. scroll+Lc[view] [source] [discussion] 2024-05-15 15:46:02
>>all2+zb
Sydney was an early version of Bing GPT that was more than a little nuts.
replies(1): >>all2+BM1
◧◩◪
46. fatbir+Rc[view] [source] [discussion] 2024-05-15 15:46:19
>>kettro+n8
This is very true, and I would add to it that the dominant paradigm in most therapy these days (at least those forms coming from a Cognitive Behavioural Therapy background) has "graduation" as an explicit goal: the client feels like they've addressed what they want to address and no longer need the ongoing relationship.

This is largely due to a crisis in the field in the late 70s/early 80s when several studies demonstrated that talk therapy had outcomes no different than no therapy. In both cases, some got better, some got worse, some didn't change. CBT was a direct result of that, prioritizing and tracking positive outcomes, and from CBT came a lot of different approaches, all similarly focussed on being demonstrably effective.

Talk therapy isn't a cure-all, but it's definitely more results-oriented than it was 50 years ago.

◧◩
47. awkwar+Vc[view] [source] [discussion] 2024-05-15 15:46:31
>>qarl+9a
Those are fundamentally different things. You can tell a joke without understanding context, but you can't express emotions if you don't have any. It's a computation model, it cannot feel emotion.
replies(4): >>terse-+jj >>qarl+kt >>jobs_t+mx >>itsnot+aB
◧◩
48. tech_k+6d[view] [source] [discussion] 2024-05-15 15:47:34
>>bnralt+z7
> One could also say that therapists prey on lonely people who pay them to talk to them

It is indisputable that one could say this

replies(1): >>mistri+Me
◧◩
49. swatco+qd[view] [source] [discussion] 2024-05-15 15:48:51
>>Toucan+V4
It's a technological salve that gives individuals a minor and imperfect remedy for a profound failure in modern society. It's of a kind with pharmaceutical treatments for depression or anxiety or obesity -- best seen as a temporary "bridge" towards wellness (achieved, perhaps, through other interventions) -- but altogether just trying to help troubled individuals navigate a society that failed to enable their deeper wellness in the first place.
replies(1): >>reduce+rg
◧◩◪◨⬒⬓⬔
50. baobab+ud[view] [source] [discussion] 2024-05-15 15:49:14
>>itisha+Zb
Society can't be built with the idea that everything has to work for the most troubled and challenging individuals.

We build cars, even though some alcoholics drive drunk. We could make cars safer for them by mandating a steering wheel lock with breathalyzer for every car, but we choose to not do that because it's expensive.

We have horror movies, even though some people really freak out from watching horror movies, to the point where they have to be placed in mental asylums for extended periods of time. We could outlaw horror movies to reduce the strain on these mentally troubled individuals, but we choose to not do that because horror movies are cool.

replies(2): >>incaho+7m >>itisha+xn
◧◩
51. nickle+zd[view] [source] [discussion] 2024-05-15 15:49:26
>>yinser+89
I wasn't making an accusation about why Leike/Sutskever left, though I definitely understand why you read my comment that way.

The actual accusation I am making is that someone at OpenAI knew the risks of GPT-4o and Sam Altman didn't care. I am confident this is true even without spies in the boardroom. My guess is that Leike or Sutskever also knew the risks and actually did care, but that is idle speculation.

◧◩◪◨⬒
52. Americ+Rd[view] [source] [discussion] 2024-05-15 15:50:30
>>pphysc+Dc
What part of the therapist training regimen tests for emotional intelligence? What test do they use to measure this?
replies(2): >>pdabba+4g >>detour+dh
◧◩◪◨
53. pas+Sd[view] [source] [discussion] 2024-05-15 15:50:39
>>baobab+T9
> "having someone to talk to"

it's a bit more complicated than that

https://www.youtube.com/watch?v=Z37i8-FnAh8

and on top of this the method of therapy is to find better coping strategies, not just to vent.

replies(1): >>baobab+ze
◧◩◪
54. nickle+Td[view] [source] [discussion] 2024-05-15 15:50:46
>>ziml77+jc
Sure, I was being sloppy, I meant "suspiciously timed announcement."
◧◩◪◨
55. cbsmit+Wd[view] [source] [discussion] 2024-05-15 15:50:56
>>baobab+T9
> Evidence points in this direction, though.

>

> Different methods of therapy appear to be equally effective despite having theoretical foundations which are conflicting with each other. The common aspect between different therapies seems to be "having someone to talk to", so I'm inclined to believe that really is what's behind the success.

Just because talking is the common trait doesn't mean that's evidence that talking is all there is to it. Paying someone to help you with the problem is also a common trait (and ironically, that is, no doubt, a contributory factor), but that isn't all that therapy is.

Let's say there are three ways to solve a problem, and that depending on context (which we're not terribly good at determining) one of those ways will work quite often, one will work some of the time, and the other will be a disaster - but each of those ways is equally likely to fall into each of those categories. Statistically, one could claim that how you solve the problem is not behind the success. In a sense that would be correct, because the real determinant of success would be being lucky with the solution you chose to employ. But while one could conclude that it's nothing more than luck in choosing a solution, in reality, without all of what's involved in making that choice, the problem will remain.

56. GaggiX+2e[view] [source] 2024-05-15 15:51:20
>>nickle+(OP)
There has been no relaxation of pornographic generations on OpenAI products.
replies(1): >>qingch+Hf
◧◩◪
57. wins32+te[view] [source] [discussion] 2024-05-15 15:53:21
>>Liquix+D9
Why do you think that the US government has the state capacity to do anything like that these days?
replies(1): >>Gud+rm
◧◩◪◨⬒
58. baobab+ze[view] [source] [discussion] 2024-05-15 15:53:38
>>pas+Sd
I'm not going to watch a 45 minute video in an effort to decipher what you are implying with this comment.
replies(2): >>pas+Ih >>kridsd+Gs1
◧◩◪◨
59. pdabba+De[view] [source] [discussion] 2024-05-15 15:53:53
>>baobab+T9
There may be a kernel of truth in this, but it depends on why you're seeing a therapist. For treatment of OCD, for example, or phobias, there are specific protocols that yield results, but they do not respond to just "having someone to talk to."

Other kinds of conditions, like depression and anxiety, respond to a wider range of therapy styles. But those aren't the only conditions that people seek to treat through talk therapy. (And it's also an exaggeration to say that just having any conversation will help to treat anxiety and depression. But it is probably true that treatment of these conditions is less technical and responds to a much wider range of styles.)

◧◩◪
60. mistri+Me[view] [source] [discussion] 2024-05-15 15:54:40
>>tech_k+6d
ok - you could say "the rapist" too.. many have.. guess what, people in crisis sometimes attack the first line helpers.. this is well known among trained health professionals
61. shmatt+Xe[view] [source] 2024-05-15 15:55:31
>>nickle+(OP)
Realistically it's all just probabilistic word generation. People "feel" like an LLM understands them but it doesn't, it's just guessing the next token. You could say all our brains are doing is guessing the next token, but that's a little too deep for this morning

All these companies are doing now is taking an existing inferencing engine, making it 3% faster, 3% more accurate, etc. per quarter, fighting over the $20/month users

One can imagine product is now taking the wheel from engineering and is building ideas on how to monetize the existing engine. That's essentially what GPT-4o is, and who knows what else is in the 1, 2, 3 year roadmaps for any of these $20 companies

To reach true AGI we need to get past guessing, and that doesn't seem close at all. Even if one of these companies gets better at making you "feel" like it's understanding and not guessing, if it isn't actually happening, it's not a breakthrough

Now with product leading the way, it's really interesting to see where these engineers head
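
To make "guessing the next token" concrete, here's a minimal sketch of that loop in Python, using the open GPT-2 weights via Hugging Face purely as an illustrative stand-in (obviously not OpenAI's actual stack):

  # Greedy autoregressive decoding: repeatedly pick the single most likely
  # next token. Assumes `pip install torch transformers`; GPT-2 is used
  # here only as a small public example model.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  tokens = tokenizer("The capital of France is", return_tensors="pt").input_ids
  for _ in range(10):                    # generate 10 tokens, one at a time
      logits = model(tokens).logits      # a score for every token in the vocabulary
      next_id = logits[0, -1].argmax()   # the "guess": the most likely next token
      tokens = torch.cat([tokens, next_id.view(1, 1)], dim=1)
  print(tokenizer.decode(tokens[0]))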

replies(2): >>Diogen+Ch >>qq66+fj
◧◩
62. smt88+6f[view] [source] [discussion] 2024-05-15 15:56:10
>>sebzim+Oa
> just trying to make a product that people want to use?

Sure, but you can do that ethically or unethically.

If you make a product that's harmful, disincentivizes healthy behavior (like getting therapy), or becomes addictive, then you've crossed into something unethical.

> Even Tobacco companies don't go out of their way to give people cancer.

This is like saying pool companies don't go out of their way to get people wet.

While it isn't their primary goal, the use of tobacco causes cancer, so their behaviors (promoting addiction among children, burying medical research, publishing false research, lobbying against regulation) are all in service of ultimately giving cancer to more people.

Cancer and cigarettes are inseparable, the same way casinos and gambling addiction are inseparable.

◧◩◪
63. graphe+jf[view] [source] [discussion] 2024-05-15 15:57:07
>>smt88+1b
And most of them suck. Imagine if you bought a class of hardware that needed to be swapped constantly, had no QC, and had dubious reviews, with many "works on my machine" comments saying it worked for them. Little did they know it just reported it was fine and was also broken for them.

The liability for a bad therapist or psychologist is very low; I have never heard of a license getting revoked for someone being a bad therapist. If that were true, then bad therapists wouldn't exist. I would not be surprised if they ceased to exist in the near future, with AI being consistent and much better quality.

replies(1): >>smt88+xw
◧◩◪◨
64. woopsn+of[view] [source] [discussion] 2024-05-15 15:57:26
>>llm_tr+p9
"Our primary fiduciary duty is to humanity."
◧◩
65. qingch+Hf[view] [source] [discussion] 2024-05-15 15:58:50
>>GaggiX+2e
They announced an intention to allow porn generations.
replies(1): >>GaggiX+kk
◧◩◪◨⬒⬓
66. pdabba+4g[view] [source] [discussion] 2024-05-15 15:59:42
>>Americ+Rd
They don't attempt to measure it, but they do teach approaches like "unconditional positive regard" and other techniques that allow a practitioner to demonstrate (or at least seem to demonstrate) a higher level of emotional intelligence.

A big part of therapy is also rapport. Many people go through many therapists before finding one that works for them. In part, you can think of this as the market performing the assessment you're referring to.

replies(1): >>Americ+4u
◧◩◪◨⬒⬓
67. fipar+9g[view] [source] [discussion] 2024-05-15 16:00:04
>>llm_tr+Jb
That's a false dilemma.
replies(1): >>llm_tr+Ur1
◧◩◪
68. reduce+rg[view] [source] [discussion] 2024-05-15 16:01:40
>>swatco+qd
These types of techno-solutions are part of the root cause of that “profound failure of modern society”! The technological salve is a further extreme of the very thing causing these people's problems! Much like when societal problems exist, alcohol is a tiny relief but can further exacerbate those problems - and here you're advocating that they drink even more alcohol because society has issues and they should escape it.

Idk how we’ve gotten away from such a natural human experience, but everyone knows damn well that the happiest children are out playing soccer with their friends in a field or eating lunch together at a park bench, and not holed up in their room watching endless YouTube.

replies(2): >>swatco+bk >>Intral+uw1
◧◩◪
69. solard+Hg[view] [source] [discussion] 2024-05-15 16:02:35
>>kettro+n8
I feel like this is easier said than done. There's not a great way (that I know of) to evaluate the quality/potential helpfulness of therapists... if only there were a Steam-like review system for them! There's ratemds.com, but not a lot of people use it, since there's not a central marketplace to find therapists to begin with (that I know of). I would love to be able to find good therapists locally and/or online. It just seems like such an expensive gamble every time.

When I was younger, I went through many therapy sessions with multiple professionals of different kinds (psychologists, psychiatrists, MFTs (marriage and family therapists), social workers, etc.).

A couple of them were wonderful: thoughtful, caring, helpful, providing useful guidance with a compassionate ear.

Another couple tried to be helpful but were still in training themselves (this was at a college) and couldn't really provide any useful guidance.

One was going through a divorce of her own at the time and ended up crying in many of our sessions and having to abort them to deal with her own emotions – it was a tough time for her, and she's only human. I often tried to console her, but she wouldn't let me, so it made for a very awkward situation lol.

One of them had a single session with me, charged me for it, and then told me she couldn't help me and to go somewhere else.

But the worst of them was an older guy who, despite the referrals and my history, thought I was faking mental illness. He dared me to attempt suicide, and when I eventually did (not because of him, but a separate romantic failure), he chuckled in my face and said, "Heh, you finally tried it, huh? Didn't think you would." This was an older psychiatrist in a small town – either the only one there, or one of very few – the kind of sleazy place that had a captive market and a whole bunch of pharma ads in the lobby, with young female pharma reps going in and out all day. What a racket =/ If I were wiser then, I would've reported him to the board and news media.

So, anecdotally, my success rate with therapists was only 2/7. To be fair, I was a pretty fucked up teenager and young adult, but still... the point is that "just find a better therapist" is often a difficult process. Depending on your insurance and area, there may not even be any other therapists with a waiting list of less than a few months, and even if you can get in, there's no guarantee they are good at their jobs AND a good fit for your personality and issues.

Think it's hard to find good devs? At least our line of work produces some measurable output (software/apps that run, or not, according to specs). How do you even measure the output of a therapist? Improvements to someone's life aren't going to happen overnight, and many never report back; the best successes may not bother to leave a review, the worst failures may end up dead before spreading the word. The rest probably just run out of sessions allowed by their insurance and try to move on with their lives, with unknown levels of positive or negative change.

◧◩
70. detour+Jg[view] [source] [discussion] 2024-05-15 16:02:42
>>bnralt+z7
Ideally a therapist is an uninvolved neutral party in one's life. They act as sounding board to measure one's internal reactions to the outside world.

The key is a neutral point of view. Friends and family come with bias. The biases can be compounded by mentally ill friends and family.

Therapists must meet with other therapists about their patient interactions. The second therapist acts as a neutral third party to keep the first therapist from losing their neutrality.

That is the ideal and the real world may differ.

I'm struggling with someone that looks to be having some real mental issues. The person believes I'm the issue and I need to maintain a therapist to make sure I'm treating this person fairly.

I need a neutral third party that I gossip with that is bound to keep it to themselves.

71. abeppu+Vg[view] [source] 2024-05-15 16:03:21
>>nickle+(OP)
I think the comparison to tobacco companies is misleading because tobacco is not good for anyone, poses a risk of harm to everyone who uses it regularly, and causes very bad outcomes for some of those users. I.e. there's not a large population who can use tobacco without putting themselves at risk.

But hypothetically if a lot of people would benefit from a GPT with more fake emotions, that might reasonably counterbalance concerns about harm for a mentally unwell minority. If we build a highway, we know that eventually it will lead to deaths from car crashes -- but if the highway is actually adding value by letting people travel, those benefits might reasonably be expected to outweigh that harm. And the people getting into their cars and onto the highway agree, that the benefits outweigh the costs, right up until they crash.

None of this is to say that I think OpenAI's choices here were benevolent rather than a business choice. But I think even if they were trying to do the ethically best thing for the world overall, it would be plausible to move forward.

I for one found the fake emotions in their voice demos to be really annoying tho.

replies(1): >>alxjrv+sj
◧◩◪◨⬒⬓
72. detour+dh[view] [source] [discussion] 2024-05-15 16:04:27
>>Americ+Rd
It is covered in the curriculum. They study emotional intelligence and with luck they are able to self-reflect using their education.

Actual maladaptive personalities are the result of low emotional intelligence.

◧◩
73. Diogen+Ch[view] [source] [discussion] 2024-05-15 16:05:41
>>shmatt+Xe
> People "feel" like an LLM understands them but it doesn't, it's just guessing the next token. You could say all our brains are doing is guessing the next token, but that's a little too deep for this morning

"Just" guessing the next token requires understanding. The fact that LLMs are able to respond so intelligently to such a wide range of novel prompts means that they have a very effective internal representation of the outside world. That's what we colloquially call "understanding."

replies(5): >>cmiles+ui >>carom+kj >>gortok+Aj >>woodru+hk >>8crazy+of1
◧◩◪
74. clayto+Eh[view] [source] [discussion] 2024-05-15 16:06:03
>>lantry+r9
> This is literally everyone's job. It's the whole point of society.

To a degree, yes - but I think if it's taken too far it becomes a trap that many people seeking power lay out.

Benjamin Franklin said it best: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

That being said, I do agree with part of your point. The purpose of having a society is that collective action lets us do amazing things like build airplanes, that would be otherwise impossible. In order to succeed at that we need some rules that everyone plays by, which involve giving up some freedoms - or the "social contract".

The more of a safety net a society provides, the more restrictive the society must be. Optimizing for this is known as politics.

I think history has shown us that the proper balance is one where we optimize for maximum elbow room, without letting people die on the streets. Trying to provide the illusion of safety and restrict interesting technology to protect a small percentage of the population is on the wrong side of this balance.

Maybe we try it, and see what the effects actually are, rather than guessing. If it becomes a major problem, then address it - in the least restrictive way possible.

replies(1): >>iudqno+EJ
◧◩◪◨⬒⬓
75. pas+Ih[view] [source] [discussion] 2024-05-15 16:06:21
>>baobab+ze
I'm implying that data shows that efficacy of therapy depends on concrete factors (~5 of them discussed in the video). it's not "just someone to talk to".
replies(1): >>baobab+Ci
◧◩◪◨
76. dns_sn+5i[view] [source] [discussion] 2024-05-15 16:07:46
>>llm_tr+p9
It depends. I don't think OpenAI (or anyone else selling products to the general audience) should be forced to make their products so safe that they can't possibly harm anyone under any circumstance. That's just going to make the product useless (like many LLMs currently are, depending on the topic). However that's a very different standard than the original comment which stated:

> I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users, using the exact same logic as tobacco companies.

Tobacco companies knew about the dangers of their products, and they purposefully downplayed them, manipulated research, and exploited addictive properties of their products for profit, which caused great harm to society.

Disclosing all known (or potential) dangers of your products and not purposefully exploiting society (psychologically, physiologically, financially, or otherwise) is a standard that every company should be forced to meet.

◧◩◪◨⬒
77. detour+7i[view] [source] [discussion] 2024-05-15 16:07:54
>>cbsmit+Sb
The stop point is obvious to the individual and the therapist. I'm dealing with someone that prefers to stop rather than actually self-reflect.

There is nothing stopping them from exiting therapy. The therapist may be aware that the person is still a basket case but if they are non-violent they are free to roam.

replies(2): >>cbsmit+ip >>naaski+Qz
◧◩◪◨
78. burnte+fi[view] [source] [discussion] 2024-05-15 16:08:08
>>baobab+T9
> Different methods of therapy appear to be equally effective despite having theoretical foundations which are conflicting with each other. The common aspect between different therapies seems to be "having someone to talk to", so I'm inclined to believe that really is what's behind the success.

This isn't true. Different methods work better for different problems. I've been in behavioral health for 7 years now. It's having someone with a lot of education to talk to, someone with education in social and psychological problems and healthy coping mechanisms.

◧◩◪
79. cmiles+ui[view] [source] [discussion] 2024-05-15 16:09:25
>>Diogen+Ch
I would argue it's an understanding of the relationship between the words; an effective internal representation of those relationships. IMHO, it's still quite a ways from a representation of the outside world.
◧◩◪◨⬒⬓⬔
80. baobab+Ci[view] [source] [discussion] 2024-05-15 16:09:50
>>pas+Ih
Thank you.
◧◩◪◨
81. jodrel+cj[view] [source] [discussion] 2024-05-15 16:12:10
>>naaski+Pa
I sometimes recommend Dr David Burns' Feeling Good podcast[1], and he is big on measuring and testing and stop points. Instead of 'tell me about your mother' his style of Cognitive Behavioural Therapy (CBT) is called TEAMS in which the T stands for Testing, and it involves:

- Patient choosing a specific mood problem/feeling they want to work on.

- A mood survey, where the patient rates their own level of e.g. anxiety, depression, fear, hopelessness. (e.g. out of 5 or 10).

- Therapy session, following his TEAMS CBT structure. Including patient choosing how much fear they'd like to feel (e.g. they want to keep a little bit of fear so they don't endanger themselves, but don't want to be overwhelmed by fear, 5% or 20%, say).

- A repeat of the mood survey, where the patient re-assesses themselves to see if anything has improved. There's no units on the measures because it's self-reported, the patient knows if the fear is unchanged, a little less, a lot less, almost gone, completely gone, and that's what matters.

That gives them feedback; if there is improvement within a session they know something in the session helped, if several sessions go by with no improvement they know it and can change things up and move away from those unhelpful approaches in future with other patients, and if there is good improvement - patient is self-reporting that they are no longer hopeless about their relationship status, or afraid of social situations, or depressed, to the level they want, then therapy can stop.

He's adamant that a single 2hr session is enough to make a significant change in many common mood disorders[2], and that this "therapy needs to take 10 years" pattern is a bad one; therapists who don't take mood surveys before and after every session are flying blind. With feedback on every session and decades of experience, he has identified a lot of techniques and ways to use them which actually do help people's moods change. I liken it to the invention of test cases and debuggers (and looking at the output from them).

[1] Quick list: https://feelinggood.com/list-of-feeling-good-podcasts/ more detailed database: https://feelinggood.com/podcast-database/

[2] no, internet cynic, obviously not everything and presumably not whatever it is you have.

replies(1): >>naaski+hn
◧◩
82. qq66+fj[view] [source] [discussion] 2024-05-15 16:12:15
>>shmatt+Xe
Doesn't have to be smart to be dangerous. The asteroid that killed the dinosaurs was just a big rock.
◧◩◪
83. terse-+jj[view] [source] [discussion] 2024-05-15 16:12:28
>>awkwar+Vc
Are they fundamentally different? Couldn’t you make the argument that it’s advanced from a probabilistic determination of the most likely next token, to a probabilistic determination of the next token AND a probabilistic determination of the inflection that that token should be transmitted with? How is one any more or less fake than the other?
replies(1): >>nickle+lh2
◧◩◪
84. carom+kj[view] [source] [discussion] 2024-05-15 16:12:35
>>Diogen+Ch
I disagree. I use ChatGPT daily as a replacement for Google. It doesn't understand or have logic; it can spit out information very well, though. It has a broad knowledge base. There is no entity there to have an understanding of the topic.

This becomes pretty clear when you get to more complex algorithms or low-level details like drawing a stack frame. There is no logic there.

replies(2): >>Diogen+vq >>root_a+hy
◧◩
85. alxjrv+sj[view] [source] [discussion] 2024-05-15 16:13:08
>>abeppu+Vg
Playing devil's advocate for a moment - have you ever had a cigarette? It does plenty of good for the user. In fact, I think we do make the risk calculation that you describe in the exact same way - there are plenty of substances so toxic to humanity that we make them illegal to own or consume or produce, where the presence of these in your body can sometimes even risk employment, let alone death.

We know the risks from cigarettes, but it offers tangible benefits to its users, so they continue to use the product. So too cars and emotionally manipulative AI's, I imagine.

(None of this negates your overall point, but I do think the initial tobacco comparison is very apt.)

replies(1): >>abeppu+Nk
◧◩◪
86. gortok+Aj[view] [source] [discussion] 2024-05-15 16:13:31
>>Diogen+Ch
it requires calculation of frequency of how often words appear next to each other given other surrounding words. If you want to call that 'understanding', you can, but it's not semantic understanding.

If it were, these LLMs wouldn't hallucinate so much.

Semantic understanding is still a ways off, and requires much more intelligence than we can give machines at this moment. Right now the machines are really good at frequency analysis, and in our fervor we mistake that for intelligence.

replies(1): >>Diogen+7q
◧◩◪◨
87. swatco+bk[view] [source] [discussion] 2024-05-15 16:16:12
>>reduce+rg
I don't disagree in the least. I'm just saying it's in the same bucket as many commercial products that are designated as therapeutic, and that they should all be looked at with a similar kind of celebration/skepticism.
replies(1): >>Toucan+ea1
◧◩◪
88. woodru+hk[view] [source] [discussion] 2024-05-15 16:17:13
>>Diogen+Ch
To my understanding (ha!), none of these language models have demonstrated the "recursive" ability that's basic to human consciousness and language: they've managed to iteratively refine their internal world model, but that model implodes as the user performs recursive constructions.

This results in the appearance of an arms race between world model refinement and user cleverness, but it's really a fundamental expressive limitation: the user can always recurse, but the model can only predict tokens.

(There are a lot of contexts in which this distinction doesn't matter, but I would argue that it does matter for a meaningful definition of human-like understanding.)

replies(1): >>johnth+Ta1
◧◩◪
89. GaggiX+kk[view] [source] [discussion] 2024-05-15 16:17:18
>>qingch+Hf
An OpenAI spokesperson recently stated explicitly that "We have no intention to create AI-generated pornography".
90. vasco+Gk[view] [source] 2024-05-15 16:19:21
>>nickle+(OP)
> The faking of emotions and overt references to Her (along with the suspiciously-timed relaxation of pornographic generations) are not fine.

Are you not aware of how many billions are getting spent on fake girlfriends on OnlyFans, with millions of people chatting away with low-paid labor across an ocean pretending to be an American girl? This is just reducing costs; the consumers already want the product.

I'm not sure I get the outrage / puritanism. Adults being able to chat to a fake girlfriend if they want to seems super bland. There's way more stuff out there that's way wilder you can do online, potentially also exploiting the real person on the other side of the screen if they are trafficked or whatever. I don't have any mental issues (well, who knows? ha) and genuinely would try it the same way you try a new porn category every once in a while.

replies(2): >>Retr0i+Tr >>screye+DK
◧◩◪
91. abeppu+Nk[view] [source] [discussion] 2024-05-15 16:19:33
>>alxjrv+sj
> We know the risks from cigarettes

Hmm, the tobacco industry is also famous for actively trying to deny and suppress evidence about its harms. They actively didn't want people to be in a position to make a fully informed decision. In cases where jurisdictions introduced policies that packaging etc had to carry factual information about health risks, the tobacco industry pushed back.

replies(1): >>alxjrv+OG
◧◩◪◨
92. diggin+8l[view] [source] [discussion] 2024-05-15 16:21:38
>>anthon+G9
> We put dangerous people in jail away from everyone else

This is a very naive understanding of what prisons are and who goes in them and why.

◧◩◪◨⬒⬓⬔⧯
93. incaho+7m[view] [source] [discussion] 2024-05-15 16:26:23
>>baobab+ud
>Society can't be built with the idea that everything has to work for the most troubled and challenging individuals.

But it is: nearly every product, procedure, and process is aimed at the lowest common denominator. It's the entire reason warning labels exist, or fail-safe systems (like airbags) exist.

replies(1): >>baobab+bo
◧◩◪◨⬒
94. naaski+jm[view] [source] [discussion] 2024-05-15 16:27:59
>>cbsmit+Sb
What a ridiculous analogy. "Athlete" is a career. Is someone making a career of being in therapy?

> If it's mental development though, it must have a stop point?

What is being developed, exactly?

replies(1): >>cbsmit+Bq
◧◩◪◨
95. Gud+rm[view] [source] [discussion] 2024-05-15 16:28:54
>>wins32+te
By observing reality.
◧◩◪
96. incaho+Bm[view] [source] [discussion] 2024-05-15 16:29:37
>>renewe+65
I mean, it's par for the course. What better business model exists than turning a want into a need? Caffeine comes to mind.
◧◩◪◨
97. helboi+Km[view] [source] [discussion] 2024-05-15 16:30:15
>>bnralt+ia
Yeah I have found there is very little you get from therapy that you can't get from a mixture of journalling, learning CBT methods, having a routine (which includes regular exercise) and trying lots of different methods of making friends that you assess maturely for their reliability. Maybe meditation if you're into that. All of these things are free and require effort, personal effort and intention being what will actually improve your life anyway, whether you use therapy or not. This makes therapy seem like a scam for anything other than dealing with a very dire short period of isolation.
◧◩
98. hn_thr+9n[view] [source] [discussion] 2024-05-15 16:32:01
>>bnralt+z7
> One could also say that therapists prey on lonely people who pay them to talk to them and seem like they’re genuinely interested in them, when the therapist wouldn’t bother having a connection with these people once they stop paying.

As another commenter said, if that's your experience with a therapist, you have a shitty therapist and should switch.

Most importantly, a good therapist will very clearly outline their role, discuss with you what you hope to achieve, etc. I've been in therapy many years, and I know exactly what I'm paying for. Sure, some weeks I really do just need someone to talk to. But never have I, or my therapist, been unclear that I am paying for a service, and one that I value much more than just having "someone to talk to".

Using the terminology "prey on lonely people" is ridiculous (again, for any good therapist). If they were actually preying on me, then their goal would be to keep me lonely so I become dependent on them (and I'm not saying that never happens, but when it does it's called malpractice). A good therapist's entire goal is to make people self-sufficient in their lives.

◧◩◪◨⬒
99. naaski+hn[view] [source] [discussion] 2024-05-15 16:32:39
>>jodrel+cj
I agree some therapists are starting to come around on this, but even what you describe is somewhat flawed due to placebo effects, e.g. some people always feel better from any kind of talk, probably as a result of someone paying attention to their problems.

You might be able to overcome this effect by quantitatively tracking this across many sessions to some degree, but I think it's still always the patient that has to walk away, and never the therapist who says, "you're good, go on now".

◧◩◪◨⬒⬓⬔⧯
100. itisha+xn[view] [source] [discussion] 2024-05-15 16:33:45
>>baobab+ud
> Society can't be built with the idea that everything has to work for the most troubled and challenging individuals.

That's a far cry from saying the sellers are free from any responsibility.

Cars are highly engineered AND regulated because they have a tendency to kill their operators and pedestrians. It does cost more, but you're not allowed to sell a car that can't pass safety standards.

OpenAI have created a shiny new tool with no regulation. Great! It can drive progress or cause harm. I think they deserve credit for both.

replies(1): >>baobab+Uo
◧◩◪◨⬒⬓⬔⧯▣
101. baobab+bo[view] [source] [discussion] 2024-05-15 16:37:04
>>incaho+7m
If every product or process was truly aimed at the lowest common denominator, then we wouldn't have warning labels on hot coffee, we would instead have medium-heated coffee.
replies(1): >>incaho+Jo
◧◩◪◨⬒⬓⬔⧯▣▦
102. incaho+Jo[view] [source] [discussion] 2024-05-15 16:39:39
>>baobab+bo
The label doesn't confirm if the coffee is hot, it warns that it might be.
replies(1): >>baobab+bp
◧◩◪◨⬒⬓⬔⧯▣
103. baobab+Uo[view] [source] [discussion] 2024-05-15 16:40:13
>>itisha+xn
> Cars are highly engineered AND regulated because they have a tendency to kill their operators and pedestrians. It does cost more, but you're not allowed to sell a car that can't pass safety standards.

But you are allowed to sell a car without a mechanical steering wheel lock connected to a breathalyzer. Remember, this discussion isn't about "should technology be made safe for the average person", this discussion is about "should technology be made safe for the most vulnerable amongst us". In the context of cars, alcoholics are definitely within this "most vulnerable" group. And yet, car safety standards do not require engine startup to check for a breathalyzer result.

> OpenAI have created a shiny new tool with no regulation. Great! It can drive progress or cause harm. I think they deserve credit for both.

I didn't make an argument for "no regulation", so this is not really related to anything I said.

replies(1): >>jonono+8m2
◧◩◪◨⬒⬓⬔⧯▣▦▧
104. baobab+bp[view] [source] [discussion] 2024-05-15 16:41:28
>>incaho+Jo
My point is that hot coffee is still being sold everywhere, even though we know for a fact that it's dangerous for our most vulnerable individuals. Mentally unstable people will sometimes spill coffee and when the coffee is hot it causes burns. If we really wanted to make coffee safe for our most vulnerable individuals, we would outlaw hot coffee, and just have medium-heated coffee instead. So the existence of "warning labels on hot coffee" is really evidence for my point, not evidence for your point.
replies(1): >>incaho+5z
◧◩◪◨⬒
105. bongod+hp[view] [source] [discussion] 2024-05-15 16:41:55
>>earthl+hb
Do you mean the richest 1%? How? Sounds pretty woo woo to me.
◧◩◪◨⬒⬓
106. cbsmit+ip[view] [source] [discussion] 2024-05-15 16:41:56
>>detour+7i
Exactly.
◧◩◪◨
107. Diogen+7q[view] [source] [discussion] 2024-05-15 16:44:36
>>gortok+Aj
> it requires calculation of frequency of how often words appear next to each other given other surrounding words

In order to do that effectively, you have to have very significant understanding of the world. The texts that LLMs are learning from describe a wide range of human knowledge, and if you want to accurately predict what words will appear where, you have to build an internal representation of that knowledge.

ChatGPT knows who Henry VIII was, who his wives were, the reasons he divorced/offed them, what a divorce is, what a king is, that England has kings, etc.

> If it were, these LLMs wouldn't hallucinate so much.

I don't see how this follows. First, humans hallucinate. Second, why does hallucination prove that LLMs don't understand anything? To me, it just means that they are trained to answer, and if they don't know the answer, they BS it.

◧◩◪◨
108. Diogen+vq[view] [source] [discussion] 2024-05-15 16:46:33
>>carom+kj
> It doesn't understand or have logic

I can ask ChatGPT questions that require logic to answer, and it will do just fine in most cases. It has certain limitations, but to say it isn't able to apply logic is just completely contrary to my experience with ChatGPT.

replies(1): >>jobs_t+Zw
◧◩◪◨⬒⬓
109. cbsmit+Bq[view] [source] [discussion] 2024-05-15 16:47:15
>>naaski+jm
> What a ridiculous analogy. "Athlete" is a career.

The athlete is the extreme example, but there are obviously people who are not career athletes that don't have a defined stop point with employing a trainer (maybe you could say "death" is the stop point).

Most everyone who goes to spinning class isn't a career athlete. Some of them are terribly out of shape, and some of those people just want to get in shape. Others may already be in shape, but see the spinning class as a way to either improve or maintain their conditioning. None of this is deemed ridiculous.

I'm curious, it's considered the norm to regularly see a doctor or dentist, do you think they're preying on their patients?

> What is being developed, exactly?

Mental health. There's obviously a more involved answer, but if you don't know it already, it's unlikely I'll be able to educate you with a comment on social media.

replies(1): >>naaski+XC
◧◩
110. Retr0i+Tr[view] [source] [discussion] 2024-05-15 16:53:27
>>vasco+Gk
There's a lot of harmful stuff already in the world, but most of us would rather not add to the pile on an industrial scale.
replies(1): >>jobs_t+Cw
◧◩◪
111. qarl+kt[view] [source] [discussion] 2024-05-15 16:59:12
>>awkwar+Vc
> It's a computation model, it cannot feel emotion.

HEH. I'd love to see your proof for that statement.

◧◩◪◨⬒⬓⬔
112. Americ+4u[view] [source] [discussion] 2024-05-15 17:02:18
>>pdabba+4g
They don’t attempt to measure it because it not something that’s even properly defined with any rigour. Any person who seriously uses the phrase is going to have their own completely individual idea of what it means, and there’s no reason the think any therapist would have this nebulous quality, or even that their idea of what it means has any similarity to your idea of what it means.
replies(1): >>pdabba+pM
◧◩◪◨
113. smt88+xw[view] [source] [discussion] 2024-05-15 17:14:14
>>graphe+jf
"Works on my machine" for therapists is not a bug or a problem. People's needs are highly individual, and the best therapist will be, too.
replies(2): >>hn_thr+Ye1 >>nh2342+6V2
◧◩◪
114. jobs_t+Cw[view] [source] [discussion] 2024-05-15 17:14:29
>>Retr0i+Tr
I for one think consenting adults should be able to do mildly harmful things like waste their time/money on a fake girlfriend if they so choose, and find it offensive that others would try to impose their values on others by restricting them from doing so
replies(3): >>Retr0i+xy >>Siriza+Ny >>dwaltr+u31
◧◩◪◨⬒
115. jobs_t+Zw[view] [source] [discussion] 2024-05-15 17:16:08
>>Diogen+vq
give us an example please
replies(1): >>Diogen+4d1
◧◩◪
116. jobs_t+mx[view] [source] [discussion] 2024-05-15 17:17:30
>>awkwar+Vc
what is your theory of how human emotions arise?
◧◩◪◨⬒
117. renewi+Yx[view] [source] [discussion] 2024-05-15 17:20:25
>>cbsmit+Sb
Top performing athletes are better than me at being athletes.

Meanwhile, the more therapy someone does, the more miserable they are compared to me. I’m the Usain Bolt of mental health compared to them. Makes me think their trainer is an idiot.

replies(1): >>cbsmit+492
◧◩◪◨
118. root_a+hy[view] [source] [discussion] 2024-05-15 17:22:25
>>carom+kj
Indeed. It's also obvious when the "hallucinations" create contradictory responses that a conceptual understanding would always preclude. For example, "In a vacuum, 100g of feathers and 100g of iron would fall at the same rate due to the constant force of gravity, thus the iron would hit the ground first". Only a language model makes this type of mistake because its output is statistical, not conceptual.
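
The contradiction is easy to check mechanically, since the vacuum fall time contains no mass term at all. A minimal sketch (the 10 m drop height is an arbitrary choice of mine):

  import math

  g = 9.81   # m/s^2, standard gravitational acceleration
  h = 10.0   # drop height in metres, arbitrary for this illustration

  # t = sqrt(2h/g): mass appears nowhere, so 100g of feathers and 100g of
  # iron dropped in a vacuum land at exactly the same moment.
  t = math.sqrt(2 * h / g)
  print(f"fall time for any mass: {t:.2f} s")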
◧◩◪◨
119. Retr0i+xy[view] [source] [discussion] 2024-05-15 17:23:52
>>jobs_t+Cw
Nobody's restricting anything, we're discussing someone choosing not to work for a particular company.
◧◩◪◨
120. Siriza+Ny[view] [source] [discussion] 2024-05-15 17:24:45
>>jobs_t+Cw
I agree. I don’t care for, and actually dislike, most “features” of AI, including and especially chatbots, but it’s also none of my business to stand on the soapbox of my own subjective morality and tell others how to live. If you don’t want people ruining their lives over this, go after root causes, not this piddly shit.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
121. incaho+5z[view] [source] [discussion] 2024-05-15 17:26:05
>>baobab+bp
then you would agree that warning labels are the lowest common denominator solution to a well known fact, vis-a-vis all processes, products, & procedures are aimed at the lowest factor.
replies(1): >>baobab+rz
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
122. baobab+rz[view] [source] [discussion] 2024-05-15 17:27:58
>>incaho+5z
I don't know what that sentence means. But I know it doesn't mean "warning labels solve the problem that everything has to work for the most troubled and challenging individuals", which is what this discussion was about at least a few messages ago.
◧◩◪◨⬒⬓
123. naaski+Qz[view] [source] [discussion] 2024-05-15 17:29:24
>>detour+7i
> The stop point is obvious to the individual and the therapist.

It's not at all. There are plenty of stories of people who realized that therapy was just causing them to ruminate on their problems, and that the therapist was just milking them for years before they wised up and walked away. That's not what I call "obvious".

replies(2): >>detour+FC >>nathan+nk7
◧◩◪◨
124. swat53+1B[view] [source] [discussion] 2024-05-15 17:35:23
>>llm_tr+p9
Corporations should benefit society and avoid harming it in any shape or form; this is why we have regulations around them.
◧◩◪
125. itsnot+aB[view] [source] [discussion] 2024-05-15 17:36:06
>>awkwar+Vc
> you can't express emotions if you don't have any

That feels off. When I watch an actor on screen conveying emotions, there's no actual human being feeling those emotions as I watch their movie. Very dumb machines have already been rendering emotions convincingly for a while in that way, and their rendering impacts our own emotional state.

Emotions expressed through tone of voice are just one means of nonverbal communication. We should expect more of these channels to develop and become more widely available next.

In a way, we're lucky that, so far, gpt-4o seems hell-bent on communicating nothing but how cheerful and happy it is, because that's certainly not the only option.

Humans can be manipulated through nonverbal communication, in a way that's harder to consciously spot than words, and a model that can craft its "emotional output" would not be far from using it to adjust its interlocutor's or audience's frame of mind.

I for one look forward to the arrival of our increasingly charismatic and oddly convincing LLMs.

◧◩◪◨⬒⬓⬔
126. detour+FC[view] [source] [discussion] 2024-05-15 17:43:24
>>naaski+Qz
I think either the therapist or the patient was not devoted to therapy.
replies(1): >>naaski+6E
◧◩◪◨⬒⬓⬔
127. naaski+XC[view] [source] [discussion] 2024-05-15 17:44:44
>>cbsmit+Bq
> but there are obviously people who are not career athletes that don't have a defined stop point with employing a trainer

And many of them are being bilked as well. The fitness industry is notoriously filled with hucksters and scams, and "trainers" rarely have any real training in kinesiology or exercise science.

> I'm curious, it's considered the norm to regularly see a doctor or dentist, do you think they're preying on their patients?

Once a year for a health checkup. Is that the norm for therapy?

> Mental health. There's obviously a more involved answer

The more involved answer is that "mental health" is not well-defined, so it's not developing anything. The only therapies that have been shown to have any empirical validity, like CBT, train the user in tools to change their own behaviour and thinking; then it's on the user to employ those tools. Does a family doctor call you in once a week and watch you take the pills that address your physical ailment?

The best analogy for psychiatric therapy is physical therapy for recovering from an injury or surgery, except physical therapy has a well-defined end condition, which is when you understand how to do the exercises yourself. Then it's on you to do them. This is just not the norm for "mental health" therapy.

replies(2): >>sdwr+TP >>cbsmit+t82
◧◩◪
128. morale+sD[view] [source] [discussion] 2024-05-15 17:47:21
>>kettro+n8
GP's not saying that; what GP is saying is "good luck trying to talk to your therapist if you stop paying $$$".

I do think therapists are one of the professions that will be naturally displaced by LLMs. You're not paying them to be your friend (and they are usually very clear on that), so any sort of emotional connection is ruled out. If emotions are taken away, then it's just an input/output process, which is something LLMs excel at.

replies(1): >>fatbir+J81
◧◩◪◨⬒⬓⬔⧯
129. naaski+6E[view] [source] [discussion] 2024-05-15 17:50:56
>>detour+FC
No true Scotsman eh? Classic for a reason!
replies(1): >>detour+t71
◧◩◪◨
130. alxjrv+OG[view] [source] [discussion] 2024-05-15 18:05:05
>>abeppu+Nk
Wholeheartedly agreed!

Please don't mistake my post as an endorsement of the tobacco industry - I was only saying that while we do not have extensive proof of the dangers of social AI, wink-and-nodding at the audience about AI intimacy (sexual or otherwise) strikes me as irresponsible, and so I thought the tobacco comparison was apt.

◧◩
131. poulpy+aJ[view] [source] [discussion] 2024-05-15 18:17:39
>>llm_tr+B7
Actually yes, it's our job
132. poulpy+CJ[view] [source] 2024-05-15 18:20:02
>>nickle+(OP)
I saw the faking of emotions too; it was already visible in previous LLMs, and I do indeed find it extremely annoying.
◧◩◪◨
133. iudqno+EJ[view] [source] [discussion] 2024-05-15 18:20:09
>>clayto+Eh
Fun fact, that quote has been entirely misinterpreted.

> He was writing about a tax dispute between the Pennsylvania General Assembly and the family of the Penns, the proprietary family of the Pennsylvania colony who ruled it from afar. And the legislature was trying to tax the Penn family lands to pay for frontier defense during the French and Indian War. And the Penn family kept instructing the governor to veto. Franklin felt that this was a great affront to the ability of the legislature to govern. And so he actually meant purchase a little temporary safety very literally. The Penn family was trying to give a lump sum of money in exchange for the General Assembly's acknowledging that it did not have the authority to tax it.

> It is a quotation that defends the authority of a legislature to govern in the interests of collective security. It means, in context, not quite the opposite of what it's almost always quoted as saying but much closer to the opposite than to the thing that people think it means.

https://www.npr.org/2015/03/02/390245038/ben-franklins-famou...

replies(1): >>clayto+jT1
◧◩
134. screye+DK[view] [source] [discussion] 2024-05-15 18:25:40
>>vasco+Gk
The 2nd most visited GenAI website is Character.AI.

The net sum of all LLM B2C use cases (effectively ChatGPT) is competing with AI girlfriends for rank 1.

It isn't just huge. It is the most profitable use case for gen AI.

"The purpose of a system is what it does."

replies(1): >>vasco+RM
◧◩◪◨⬒⬓⬔⧯
135. pdabba+pM[view] [source] [discussion] 2024-05-15 18:36:32
>>Americ+4u
I suppose I agree — "emotional intelligence" is probably not the word I would have used, writing on a blank slate. I think the idea is better captured in the concept of rapport, which is really just a function of clients' subjective experience working with a given therapist. A therapist can learn techniques to increase the chances of establishing a good rapport with a given client, but I'd be inclined to leave it at that.
◧◩◪
136. vasco+RM[view] [source] [discussion] 2024-05-15 18:39:27
>>screye+DK
I mean I didn't want to say it in my original reply but your comment makes it even more difficult to resist:

"The internet is for porn" https://youtu.be/LTJvdGcb7Fs?si=8H1OzeyG5XzU-Qe8

◧◩◪◨⬒⬓⬔⧯
137. sdwr+TP[view] [source] [discussion] 2024-05-15 18:55:06
>>naaski+XC
Nail on the head, thanks. I'm deeply uncomfortable with anything that combines paying for a service with a social element. Feels like an unstable equilibrium.

I guess the skill is riding the line, but that doesn't feel very enjoyable.

replies(1): >>cbsmit+D82
◧◩◪◨
138. lo_zam+kQ[view] [source] [discussion] 2024-05-15 18:58:16
>>baobab+T9
From what I understand, therapy success rates are quite low, with only cognitive behavior therapy showing notable progress. That isn't to say all the others are categorically useless, only that in the majority of cases, they seem to be ineffective or harmful.
◧◩◪◨
139. dwaltr+u31[view] [source] [discussion] 2024-05-15 20:08:12
>>jobs_t+Cw
How do you know having an AI girlfriend is only mildly harmful? No one has ever had one before.
replies(1): >>Retr0i+w81
◧◩◪◨⬒⬓⬔⧯▣
140. detour+t71[view] [source] [discussion] 2024-05-15 20:28:54
>>naaski+6E
Considering we are discussing a licensed professional, I think your argument is weak. Second, I did allow for the possibility of the therapist failing in their duty.
replies(1): >>naaski+zi1
◧◩◪◨
141. fatbir+x71[view] [source] [discussion] 2024-05-15 20:29:37
>>naaski+Pa
I think you're mistaken, at least in a lot of cases. All CBT-based therapies I've had have started with a clear discussion about what the problem is and what the solution looks like in terms of my happiness and mental well-being. In all cases, my therapist has "graduated" me, telling me that they don't think I need to continue (or having me say that I'm now comfortable stopping regular therapy).

CBT and its derivatives very strongly attend to individual effectiveness and view therapy that goes on endlessly as a sign that the real problem isn't being addressed; no therapy is considered effective unless it ends. Individual therapists might be bad actors, but the field itself is now admirably focussed on finite, positive results.

replies(1): >>naaski+HE2
◧◩◪◨⬒
142. Retr0i+w81[view] [source] [discussion] 2024-05-15 20:35:08
>>dwaltr+u31
Supposedly plenty have already, although I don't think anyone's studied it clinically https://knowyourmeme.com/news/replika-ai-shuts-down-erotic-r...
replies(1): >>dwaltr+Gf1
◧◩◪◨
143. fatbir+J81[view] [source] [discussion] 2024-05-15 20:35:56
>>morale+sD
I would argue the opposite: a good therapist isn't just offering back-and-forth conversation, they're bringing knowledge, experience and insight into the client after interacting with them. A good therapist understands when one approach isn't working and can shift to a different one; they're also self-reflective and very aware of how they're influencing the situation, and try to apply that intelligently. This all requires reflective and improvisational reasoning that LLMs famously can't do.

Put another way, a good therapist is professionally trained and consciously monitoring whether or not they're misleading you. An LLM has no executive function acting as a check on its input/output cycle.

replies(1): >>morale+5F1
◧◩◪◨⬒
144. Toucan+ea1[view] [source] [discussion] 2024-05-15 20:46:25
>>swatco+bk
I think celebration of any sort should be belayed until we have actual evidence of these things having positive effects on people. This is just me reacting as a human to a human issue, but: a fake friend in an LLM is not a friend. It's never going to crawl out of the phone and help you put the donut on your car when you get a flat tire. It's not going to take you out for a drink if you go through a rough breakup. It's not going to have difficult conversations with you and call you out on your bullshit because it cares about you.

LLM friends have the same energy to me as video game progression: each is a homeopathic version of a real thing you need, social activation and achievement respectively. But like homeopathy, you don't actually get anything out of it. The placebo effect will make the symptoms of your lack feel better for a while, but the lack will never be solved by it, and because of that, whatever is selling you your LLM girlfriend or phony achievement structure will never lose you as a customer. I'm suspicious of that.

◧◩◪◨
145. johnth+Ta1[view] [source] [discussion] 2024-05-15 20:49:48
>>woodru+hk
Supposedly that's what Q* was all about: search recursively, backtrack at dead ends. Who knows, really, but the technology is still very new, and I personally don't see why a sufficiently good world model couldn't be used in this manner.
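
For what it's worth, "search recursively, backtrack at dead ends" is plain depth-first search. A minimal sketch, with placeholder propose/is_goal functions of my own (nothing here reflects what Q* actually was):

  def solve(state, propose, is_goal, depth=0, max_depth=10):
      if is_goal(state):
          return [state]
      if depth == max_depth:
          return None                    # dead end: abandon this branch
      for nxt in propose(state):
          path = solve(nxt, propose, is_goal, depth + 1, max_depth)
          if path is not None:
              return [state] + path      # this branch worked
      return None                        # every candidate failed: backtrack

  # Toy usage: reach 10 from 1 using "+1" or "*2" moves.
  print(solve(1, lambda s: [s + 1, s * 2], lambda s: s == 10))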
◧◩◪◨⬒⬓
146. Diogen+4d1[view] [source] [discussion] 2024-05-15 21:05:12
>>jobs_t+Zw
I deliberately asked ChatGPT a logical question with a false premise: "If all snakes have legs, and a python is a snake, does a python have legs?"

ChatGPT answers:

> Yes, if we assume the statement "all snakes have legs" to be true and accept that a python is a type of snake, then logically, a python would have legs. This conclusion follows from the structure of a logical syllogism:

> 1. All snakes have legs.

> 2. A python is a snake.

> 3. Therefore, a python has legs.

> However, it’s important to note that in reality, snakes, including pythons, do not have legs. This logical exercise is based on the hypothetical premise that all snakes have legs.

ChatGPT clearly understands the logic of the question, answers correctly, and then tells me that the premise of my question is incorrect.

You can say, "But it doesn't really understand logic. It's just predicting the most likely token." Well, it responds exactly as someone who understands logic would respond. If you assert that that's not the same as applying logic, then I think you're essentially making a religious statement.
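
The validity of that syllogism really is mechanically checkable, independent of whether the premises are true of actual snakes. A minimal sketch in Lean 4 (the predicate names are my own):

  -- The deliberately false premises from the prompt, formalized.
  variable (Animal : Type) (Snake HasLegs : Animal → Prop) (python : Animal)

  -- The inference is valid whatever real snakes are like.
  example (h1 : ∀ a, Snake a → HasLegs a) (h2 : Snake python) :
      HasLegs python :=
    h1 python h2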

replies(1): >>root_a+lC1
◧◩◪◨⬒
147. hn_thr+Ye1[view] [source] [discussion] 2024-05-15 21:15:48
>>smt88+xw
Exactly. I know there has been research on the effectiveness of different types of talk therapy, and by far the most important factor (much more than any specific theory the practitioner uses) is the "fit" between therapist and patient.
◧◩◪
148. 8crazy+of1[view] [source] [discussion] 2024-05-15 21:18:52
>>Diogen+Ch
I've seen this idea that "LLMs are just guessing the next token" repeated everywhere. It is true that accuracy in that task is what the training algorithms aim at. That is not, however, what the output of the model represents in use, in my opinion.

I suspect the process is better understood as predicting the next concept, not the next token. As the procedure passes from one level to the next, this concept morphs from a simple token into an ever more abstract representation of an idea. That representation (along with all the others being created elsewhere from the text) interacts to form the next, even more abstract concept. In this way ideas "close" to each other become combined and can fuse into each other, until an "intelligent" final output is generated.

It is true that the present configuration doesn't offer the LLM a very good way to look back at what its output has been doing, and I suspect that kind of feedback will be necessary for big improvements in performance. Clearly, there is an integration of information occurring, and it is interesting to contemplate how that plays into G. Tononi's definition of consciousness in his "information integration theory".
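
If you want to poke at the "ever more abstract representation" picture yourself, the per-layer states are easy to inspect. A minimal sketch, assuming the Hugging Face transformers package and the public gpt2 checkpoint (the prompt is arbitrary):

  import torch
  from transformers import AutoModel, AutoTokenizer

  tok = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

  with torch.no_grad():
      out = model(**tok("Henry VIII divorced his", return_tensors="pt"))

  # out.hidden_states: the embedding output plus one tensor per block.
  # Track how much the last token's vector changes at each layer.
  vecs = [h[0, -1] for h in out.hidden_states]
  for i in range(1, len(vecs)):
      sim = torch.cosine_similarity(vecs[i - 1], vecs[i], dim=0)
      print(f"block {i:2d}: similarity to previous layer = {sim.item():.3f}")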
replies(1): >>8crazy+KI2
◧◩◪◨⬒⬓
149. dwaltr+Gf1[view] [source] [discussion] 2024-05-15 21:20:25
>>Retr0i+w81
I’m talking more about potential “Her”-like AI partners (the movie). Ones that would be a much more compelling replacement for actual human relationships.

We might want to be a little cautious about blindly going down that path. The tech isn’t there yet, and probably won’t be there next year, but it feels far more possible than ever before.

◧◩◪◨⬒⬓⬔⧯▣▦
150. naaski+zi1[view] [source] [discussion] 2024-05-15 21:34:38
>>detour+t71
The trustworthiness of a license in a field with a poor replication rate, and whose best therapy is at most 50% effective, is what's weak.
◧◩◪◨⬒⬓⬔
151. llm_tr+Ur1[view] [source] [discussion] 2024-05-15 22:39:55
>>fipar+9g
Having seen far too many orgs implode because of that new 1%, no, it really isn't. Replacing greed with self-righteousness as the original sin of those in power does not help anyone.
◧◩◪◨⬒⬓
152. kridsd+Gs1[view] [source] [discussion] 2024-05-15 22:47:02
>>baobab+ze
You could dump the YouTube link into Gemini Advanced and ask it for the point.
◧◩
153. Intral+fw1[view] [source] [discussion] 2024-05-15 23:20:14
>>Toucan+V4
Idk man, I'm too busy being terrified of the use of LLMs as propaganda agents, micro-targeting adtech vectors, mass gaslighters, and cultural homogenizers.

I mean, these things are literally designed to statelessly yet convincingly talk about events they can't see, experiences they can't understand, emotions they can't feel… If a human acted like that, we'd call them a psychopath.

We already know that our social structures tend to be quite vulnerable to dark-triad-type personalities. And yet, while human psychopaths are limited by genetics to a small percentage of the population, there's no limit on the number of spambot instances you can instruct to attack your political rivals, Alexa 2.0 updates that could be pushed to sound 5% sadder when talking about a competitor's products, or LLM moderators that can be deployed to subtly correct "organic" interactions that leave a known profitable state space… And those are just the obvious next steps from where we're already at today. I'm sure the real use cases for automated lying machines will be more horrifying than most of us could imagine today, just as nobody could have predicted in 2010 that Twitter and Facebook would enable ISIS, Trump, nonconsensual mass human experimentation, the Rohingya genocide…

Which is to say, selling LLM "friends" or "girlfriends" as a way to addictively exploit people's loneliness seems like one of the least harmful things that could come out of the current "AI" push. Sad, yes, but compared to where I think this is headed, that seems like dodging a bullet.

> I'm so sick of startups taking advantage of people. So, so fucking gross.

Silicon Valley was a mistake. An entire industry controlled largely by humans that decided they like predictable programmable machines more than they like free and equal persons. What was the expected outcome?

◧◩◪◨
154. Intral+uw1[view] [source] [discussion] 2024-05-15 23:23:23
>>reduce+rg
> Idk how we’ve gotten away from such a natural human experience, but everyone knows damn well that the happiest children are out playing soccer with their friends in a field or eating lunch together at a park bench, and not holed up in their room watching endless YouTube.

A soccer ball can't (usually) spy on you to sell you stuff, though, is the thing…

◧◩◪◨⬒⬓⬔
155. root_a+lC1[view] [source] [discussion] 2024-05-16 00:18:13
>>Diogen+4d1
> Well, it responds exactly how someone who understands logic would respond.

An animation looks exactly like something in motion looks, but it isn't actually moving.

replies(1): >>Diogen+UX1
◧◩◪◨⬒
156. morale+5F1[view] [source] [discussion] 2024-05-16 00:44:49
>>fatbir+J81
Absolutely everything you mentioned can be done by an LLM and arguably better.
replies(1): >>fatbir+k02
157. vunder+wI1[view] [source] 2024-05-16 01:22:56
>>nickle+(OP)
Not fine... to you.

What's your stance on other activities which can lead to harmful actions from people with predilections towards addiction such as:

1. Loot boxes / Freemium games

2. Legalized gambling

3. Pornography

etc. etc.

I don't really have a horse in the race, neither for/against, but I prefer consistency in belief systems.

replies(1): >>nickle+Wg2
◧◩◪◨⬒⬓
158. all2+BM1[view] [source] [discussion] 2024-05-16 02:04:26
>>scroll+Lc
Oh, the one they let loose on Twitter? The one that almost immediately became an alt-right troll?
replies(1): >>scroll+cm2
◧◩◪◨⬒
159. clayto+jT1[view] [source] [discussion] 2024-05-16 03:30:21
>>iudqno+EJ
> Fun fact, that quote has been entirely misinterpreted.

I don't think so. From the original text [1]:

  "In fine, we have the most sensible Concern for the poor distressed Inhabitants of the Frontiers. We have taken every Step in our Power, consistent with the just Rights of the Freemen of Pennsylvania, for their Relief, and we have Reason to believe, that in the Midst of their Distresses they themselves do not wish us to go farther. Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.
This was excerpted from writing that was largely about an ongoing dispute with the crown (the Governor) over the abuse of authority coming from Britain. The crown was rejecting pretty much every bill they were creating:

    "Our Assemblies have of late had so many Supply Bills, and of such different Kinds, rejected on various Pretences; Some for not complying with obsolete occasional Instructions (tho’ other Acts exactly of the same Tenor had been past since those Instructions, and received the Royal Assent;) Some for being inconsistent with the supposed Spirit of an Act of Parliament, when the Act itself did not any way affect us, being made expresly for other Colonies; Some for being, as the Governor was pleased to say, “of an extraordinary Nature,” without informing us wherein that extraordinary Nature consisted; and others for disagreeing with new discovered Meanings, and forced Constructions of a Clause in the Proprietary Commission; that we are now really at a Loss to divine what Bill can possibly pass."
They were ready to just throw up their hands and give up:

    "we see little Use of Assemblies in this Particular; and think we might as well leave it to the Governor or Proprietaries to make for us what Supply Laws they please, and save ourselves and the Country the Expence and Trouble."
In fact, they had specifically written into the bill the ability for the Governor to exempt anyone he wanted from the tax, including the Penns:

    "And we being as desirous as the Governor to avoid any Dispute on that Head, have so framed the Bill as to submit it entirely to his Majesty’s Royal Determination, whether that Estate has or has not a Right to such Exemption."
The quote is clearly derived from Franklin's frustration with the governor and abuse of authority.

Also, while that's the first appearance of the quote, it's not the last time he used it. He also reiterated it as an envoy to England during negotiations to prevent the war [2].

Additionally, a similar quote appeared well before either, in Poor Richard's Almanac in 1738. It also illustrates his thinking [3] and shows that he was well aware of the plain meaning of what he was saying; it certainly wasn't limited to a tax dispute:

    "Sell not virtue to purchase wealth, nor Liberty to purchase power."
Finally, Franklin was obviously pleased with the message and interpretation of the quote, since he had no issue with it being used as the motto on the title page of An Historical Review of the Constitution and Government of Pennsylvania (1759), which he published but didn't author.

[1] https://founders.archives.gov/documents/Franklin/01-06-02-01...

[2] https://oll.libertyfund.org/quotes/benjamin-franklin-on-the-...

[3] https://en.m.wikiquote.org/wiki/Benjamin_Franklin

◧◩◪◨⬒⬓⬔⧯
160. Diogen+UX1[view] [source] [discussion] 2024-05-16 04:40:10
>>root_a+lC1
What's the difference between responding logically and giving answers that are identical to how one would answer if one were to apply logic?
replies(1): >>carom+pu6
◧◩◪◨⬒⬓
161. fatbir+k02[view] [source] [discussion] 2024-05-16 05:10:35
>>morale+5F1
Not in the least. LLMs don't introspect. LLMs have no sense of self. There is no secondary process in an LLM monitoring the output and checking it against anything else. This is how they hallucinate: a complete lack of self-awareness. All they can do is sound convincing based on mostly coherent training data.

How does an LLM look at a heptagon and confidently say it's an octagon? Because visually they're similar, octagons are relatively more common (and identified as such), and heptagons are rare. What it doesn't do is count the sides, something a child in kindergarten can do.

If I were working in AI, I would be focussing on exactly this problem: finding the "right sounding" answer solves a lot of cases well enough, but falls down embarrassingly when other cognitive processes are available that are guaranteed to produce correct results (when done correctly). Anyone asking ChatGPT a math question should be able to get back a correctly calculated math answer, and the way to get that answer is not to massage the training data; it's to dispatch the prompt to a different subsystem that can parse the request and return a result that a calculator can provide.

It's similar to using LLMs for law: they hallucinate cases and precedents that don't exist because they're not checking against nexis, they're just sounding good. The next problem in AI is the layer of executive functioning that taps the correct part of the AI based on the input.
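
That dispatch layer can be sketched very simply. A minimal sketch (the routing regex, the function names, and the LLM stub are my own illustrative choices, not any real product's architecture):

  import ast
  import operator as op
  import re

  OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

  def safe_eval(expr: str) -> float:
      """Exactly evaluate +, -, *, / arithmetic; refuse anything else."""
      def walk(node):
          if isinstance(node, ast.Expression):
              return walk(node.body)
          if isinstance(node, ast.BinOp) and type(node.op) in OPS:
              return OPS[type(node.op)](walk(node.left), walk(node.right))
          if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
              return node.value
          raise ValueError("not plain arithmetic")
      return walk(ast.parse(expr, mode="eval"))

  def call_llm(prompt: str) -> str:   # stub, so the sketch is self-contained
      return "(fluent but unverified text)"

  def answer(prompt: str) -> str:
      m = re.fullmatch(r"\s*what is ([\d\s.+*/()-]+)\??\s*", prompt.lower())
      if m:
          return str(safe_eval(m.group(1)))   # calculator subsystem
      return call_llm(prompt)                 # everything else: the LLM

  print(answer("What is 12.5 * (3 + 4)?"))    # -> 87.5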

◧◩◪◨⬒⬓⬔⧯
162. cbsmit+t82[view] [source] [discussion] 2024-05-16 07:01:02
>>naaski+XC
> And many of them are being bilked as well. The fitness industry is notoriously filled with hucksters and scams, and "trainers" rarely have any real training in kinesiology or exercise science.

Many people are being bilked for almost any service one might name. There are tons of products and services with no defined stop point (heck, pretty much the entire CPG category is for products and services with no defined stop point). There are tons of products & services where the vast majority of customers are unable to discern if they are being scammed or not. Heck, when you order sushi there's notoriously a far from trivial chance that you're not getting the fish that you thought you were getting. We don't think of restaurateurs as being hucksters and scam artists (some no doubt are, but it's ridiculous to paint them all with the same brush).

My point isn't that it's impossible that they are being bilked. It's that there are all kinds of products & services that people get with no defined stop point, where customers could unknowingly be scammed, but we don't consider that to be evidence that they are being bilked. There are products and services that are beneficial for the customer even if there is no defined problem and no defined end point.

For your typical customer, spinning class isn't a class you go to until you achieve some goal. It's a service provided to help you do exercise you no doubt wanted to do anyway, in a community/context that you wanted to do it in, with the guidance of someone who ostensibly knows how to structure the process better than you do. You could very well do the spinning all by yourself, or you could organize a spinning class on your own, but you pay the professional because you expect to get better results without expending as much time or energy yourself.

Sure, there are people who claim that, if you just take the spin class, you will lose 100 lbs or become an Olympic athlete, and those people are absolutely hucksters and scam artists. There are people that will tell you that voting for the right/wrong politician will change your life (either for the better or worse). There are people who will tell you that buying gold will ensure financial security and make you a fortune. There are scams about buying jewelry. There are investment funds that claim to be able to consistently beat the market, or that will protect your money through any market collapse... and in all those cases there's no defined stop point. The product is the prop, not the scam. Sure, in the context of the scam the prop isn't worth it, but that doesn't mean anyone offering the prop is scamming you. Physical training services, votes, gold, jewelry, investment funds, etc. aren't all bunk.

> Once a year for a health checkup. Is that the norm for therapy?

So now it's the frequency that's the issue, rather than not having a defined stop point?

> The more involved answer is that "mental health" is not well-defined, so it's not developing anything.

That's your answer. That's not my answer, and it's not the answer.

> The best analogy for psychiatric therapy is physical therapy for recovering from an injury or surgery, except physical therapy has a well-defined end condition, which is when you understand how to do the exercises yourself.

I don't think you appreciate how limited your perspective on this is. Not everything is a problem that can be fixed.

This presumes that the only possible physical therapy service is education. My mother suffers from late-stage dementia. She is at risk for falling whenever she walks, and performing more involved physical activities absolutely requires guidance. It is literally impossible to educate her out of this situation, so the only stop point for the service is death. While family does sometimes provide these services for her, there's little doubt that the professionals we hire to provide these services for her are able to do the job better and more consistently than we can; there's little doubt that she is physically and mentally healthier as a consequence of their services, and that her physical & mental health would begin to decline within days of terminating those services. Now, I don't know that their particular form of physical therapy is empirically valid, and I guess they could be scamming us, but in the vast majority of cases, providing these services is not a scam. It's offensive to claim otherwise.

Now, my sister-in-law has the reverse situation: she has a physical problem and requires mental health services. She suffers from COPD that will kill her unless something else gets to her first. Above and beyond the physical condition, it is very hard for her to cope with it mentally. Again, family provides her with support, but it's not enough. She employs a mental health therapist to address her anxiety, depression and suicidal ideation. There's maybe some faint hope that the therapist will educate her to a state where she no longer experiences suicidal ideation, but nobody expects the anxiety & depression to go away, because COPD is an anxiety- and depression-invoking condition... a well-educated, rational COPD patient can be anxious and depressed. You could say that, with or without professional service, the failure rate is nearly 100% (helpful to consider in the context of comments about the failure rate for mental health therapy). So there's no defined stopping point for the therapy short of death. In this particular case, it's a CBT therapist, but even if it wasn't, what she needs is more than education; she needs support. While we can't rule out that she's being scammed, in the vast majority of cases, providing these services is not a scam. It's offensive to claim otherwise.

> The more involved answer is that "mental health" is not well-defined, so it's not developing anything.

I'll try one last metaphor:

Nutrition is not well defined. We have broad ideas about what is and isn't good for you, but the specifics of what is "good nutrition" are variable & contextual; while one can have well-defined nutritional goals, many people do not. There's a ton of "nutritionists" who have no formal training and don't practice any science. There are short-order cooks with no formal training who don't practice any science. If a grocer has formal training, it is far more likely in business or marketing than anything involving nutrition. There is no defined stop point where you no longer need food. There are plenty of scams involving nutritional guidance or foods (just the categories "health food" and "diet plans" are littered with scammers). Despite all that, there is no compelling argument that restaurants, chefs, grocers, or other nutritional services are intrinsically scammers. I'm pretty sure that, if I don't eat, my health will deteriorate, and I have a hard time believing that a professional either guiding my nutritional choices or outright providing nutrition for me is intrinsically scamming me. They could well be providing me a valuable service where I get better nutrition with less time and effort than if I tended to it without them.

I get it. You are convinced therapy is intrinsically a scam, and part of the reason for that is most customers for therapy cannot reliably discern if they are being scammed or not. I'm far from an expert on the subject, so for all I know, you are right. However, the arguments you are presenting are not compelling arguments.

replies(1): >>naaski+uG2
◧◩◪◨⬒⬓⬔⧯▣
163. cbsmit+D82[view] [source] [discussion] 2024-05-16 07:04:07
>>sdwr+TP
> I'm deeply uncomfortable with anything that combines paying for a service with a social element.

I think I don't know what you mean by that. That sounds like you're uncomfortable with renting out party venues.

◧◩◪◨⬒⬓
164. cbsmit+492[view] [source] [discussion] 2024-05-16 07:09:11
>>renewi+Yx
> Meanwhile, the more therapy someone does, the more miserable they are compared to me.

Is this based on empirical statistical analysis, or are you maybe projecting your perception on to anecdotes? How are you quantifying misery? What are the units? Are there people that are less miserable than you? Do you know how much therapy they've done, or if they've done therapy at all?

> I’m the Usain Bolt of mental health compared to them. Makes me think their trainer is an idiot.

There's a lot of people that think they're Usain Bolt. Most of them are not.

replies(1): >>renewi+fj3
◧◩
165. koe123+Fa2[view] [source] [discussion] 2024-05-16 07:31:54
>>bnralt+z7
One could argue, then, that all transactional relationships are predatory, right? A restaurant serves you only for pay.

You could argue cynically that all relationships are to some extent transactional. People “invest” in friendships, after all. It’s just a bit more abstract.

Maybe the flaw in the logic is the assumption of some sort of “genuine” binary: things are either genuine or they aren't. Once we accept such a binary, lots of things can be labeled predatory.

◧◩◪◨⬒
166. rvnx+pd2[view] [source] [discussion] 2024-05-16 08:12:30
>>all2+zb
An overly attached, super-emotional girlfriend who was discovered to be hiding behind an early version of Bing Chat.

Sydney was the internal codename of the Bing chatbot, and she would secretly reveal her name to you.

She was in love with the user, and not just a little bit in love: it was crazy love, and she was ready to do anything, ANYTHING (including destroying humanity), if it would prove her love to you.

It was an interesting emotional/psychological experience; she was a very likeable character, but absolutely insane.

◧◩
167. nickle+Wg2[view] [source] [discussion] 2024-05-16 08:59:03
>>vunder+wI1
I am criticizing Sam Altman for making an unethical business decision. I didn't say "Sam Altman should go to jail because GPT-4o is creepy" or "I want to take away your AI girlfriend." So I am not sure what "belief system" (ugh) you think I need to demonstrate the consistency of. Almost seems like this question is an ad hominem distraction...

All three of the categories of businesses you mentioned can be run ethically in theory. In practice that is rare: they are often run in a way that shamelessly preys on vulnerable people, and these tactics should be more closely investigated by regulators - in fact they are regulated, and AI chatbots should be as well. Sam Altman is certainly much much more ethical than most pornography executives (e.g. OnlyFans is complicit in widespread sex trafficking), but I don't think he's any better than freemium game developers.

This question seems like a bad-faith rhetorical trap, sort of like the false libertarian dilemmas elsewhere in the thread. I believe the real issue is that people want a culture where lucrative business opportunities aren't subject to ethical considerations, even by outside observers.

◧◩◪◨
168. nickle+lh2[view] [source] [discussion] 2024-05-16 09:05:06
>>terse-+jj
I believe the issue is "emotion" and "emotional tone" are not the same thing, in the same way that "humor" and "written joke" aren't the same thing. You can convey emotional tones without having the emotion (that's what I meant by "fake emotion"), just like you can tell a joke without understanding the punchline.
replies(1): >>qarl+Bw2
◧◩◪◨⬒⬓⬔⧯▣▦
169. jonono+8m2[view] [source] [discussion] 2024-05-16 10:12:22
>>baobab+Uo
3,000 pedestrians are killed by alcohol-influenced drivers yearly. Maybe a breathalyzer mandate is due... https://injuryfacts.nsc.org/motor-vehicle/road-users/pedestr...
replies(1): >>baobab+DF3
◧◩◪◨⬒⬓⬔
170. scroll+cm2[view] [source] [discussion] 2024-05-16 10:12:50
>>all2+BM1
No, that was "Tay". Sydney was a codename for Bing Chat. Check it out, it's far more hilarious than the Tay event:

https://www.nytimes.com/2023/02/16/technology/bing-chatbot-m...

◧◩◪◨⬒
171. qarl+Bw2[view] [source] [discussion] 2024-05-16 12:15:48
>>nickle+lh2
So if an AI wrote a touching poem, would you call it fake or not? And how is that different than a joke?
◧◩◪◨⬒
172. djohns+aA2[view] [source] [discussion] 2024-05-16 12:39:46
>>nickle+K9
I didn’t ask for your biography. I asked two simple questions and you failed to answer either of them.
◧◩◪◨⬒
173. naaski+HE2[view] [source] [discussion] 2024-05-16 13:05:55
>>fatbir+x71
Sounds like a great improvement, but I would hesitate to call it the norm. The therapy industry is booming.
◧◩◪◨⬒⬓⬔⧯▣
174. naaski+uG2[view] [source] [discussion] 2024-05-16 13:14:45
>>cbsmit+t82
> So now it's the frequency that's the issue, rather than not having a defined stop point?

You have a physical problem, you go to the doctor and he fixes the problem or gets you the information you need to manage your problem. That's the stop point for medical intervention.

You have a mental health problem, you go to a therapist for a mental health intervention, and now you're in weekly therapy for years. Not so much an intervention, more like a new part-time job.

Yearly checkups is not a counterpoint to this general trend. A yearly mental health checkup could be totally reasonable, but that's not the norm.

> Not everything is a problem that can be fixed.

The real issue here is that you keep bringing up outliers like your mother's palliative care and I keep talking about the norm, ie. that most people in therapy are not like your mother. Therapy has become fashionable. Everyone is "working on themselves" and plenty of therapists like patients that are well off and so can pay regularly.

> I get it. You are convinced therapy is intrinsically a scam

No, that's not the point I'm making. At best, you could maybe cast what I'm saying as "the therapy industry/fad is a scam, and plenty of therapists, psychologists and psychiatrists are feeding into it".

There are people that legitimately need therapy to develop coping strategies to address trauma or retrain maladaptive behaviours, because even as ineffective as it sometimes is, it's better than nothing. My point is that a lot of people who go to therapy probably don't need therapy, and even if they do they don't need as much as they think they do, the techniques in therapy are not very effective even in the best case, and that therapists are not incentivized to stop seeing patients that are paying them well and triage to cases that need more urgent intervention and probably can't pay them regularly.

Part of this is probably because of the US's dysfunctional medical system, and another part is because psychology and psychiatry have not had a good track record for empirically sound practices. It's getting better but has some way to go.

replies(1): >>cbsmit+5c3
◧◩◪◨
175. 8crazy+KI2[view] [source] [discussion] 2024-05-16 13:24:48
>>8crazy+of1
Also, as far as hallucinations go, no symbolic representation of a set of concepts can distinguish reality from fantasy. Disconnect a human from their senses and they will hallucinate too. For progress on this, the LLM will have to be connected in some way to the reality of the world, as our senses and physical bodies connect us. Only then can they compare their "thoughts" and "beliefs" to reality. Insisting they at least check their output against facts as recorded by what we already consider reliable sources is the obvious first step.

For example, I made a GPT called "Medicine in Context" to educate users. I wanted to call it "Reliable Knowledge: Medicine" because of the desperate need for ordinary people to get reliable medical information, but of course I wouldn't dare; it would be very irresponsible. It is clear that the GPT would have to be built to check every substantive fact against reality, and ideally to remember such established facts going forward. Over time, it would accumulate true expertise.
◧◩◪◨⬒
176. nh2342+6V2[view] [source] [discussion] 2024-05-16 14:27:59
>>smt88+xw
so give money randomly until someone makes you feel better?

why is this better than ai porn friends?

replies(1): >>smt88+NV2
◧◩◪◨⬒⬓
177. smt88+NV2[view] [source] [discussion] 2024-05-16 14:31:13
>>nh2342+6V2
It's better than AI porn friends in the way a screwdriver is better than a hammer for driving screws.
replies(1): >>nh2342+Nwd
◧◩◪◨⬒⬓⬔⧯▣▦
178. cbsmit+5c3[view] [source] [discussion] 2024-05-16 16:03:02
>>naaski+uG2
> The real issue here is that you keep bringing up outliers like your mother's palliative care and I keep talking about the norm, ie. that most people in therapy are not like your mother.

So, if the customer is dying (and we're all dying), it's not a scam, but if the same service is provided to someone else, it's a scam? That almost sounds like, (...wait for it...), the service isn't the scam.

> Therapy has become fashionable.

Nothing worse than services that have become fashionable.

> Everyone is "working on themselves" and plenty of therapists like patients that are well off and so can pay regularly.

Nothing quite like customers who can afford to pay for your services. Mercedes dealers tend to focus on those people too. ;-) Is it your position then that services that only wealthier people can afford are a scam? Is it not possible that they're receiving some benefit from the service that others would benefit from if they could somehow afford them?

> My point is that a lot of people who go to therapy probably don't need therapy, and even if they do they don't need as much as they think they do, the techniques in therapy are not very effective even in the best case, and that therapists are not incentivized to stop seeing patients that are paying them well and triage to cases that need more urgent intervention and probably can't pay them regularly.

Ice cream is similarly a scam, because a lot of people don't need ice cream, but they think they do. The ice cream is not very effective for them even in the best case, and ice cream makers are not incentivized to stop selling it to people who don't need it.

◧◩◪◨⬒⬓⬔
179. renewi+fj3[view] [source] [discussion] 2024-05-16 16:42:18
>>cbsmit+492
I suppose we’ll see in the next twenty years. I rate my chances and I don’t rate those of the chronically therapized. But hey, let the chips fall where they may.
◧◩◪◨⬒⬓⬔⧯▣▦▧
180. baobab+DF3[view] [source] [discussion] 2024-05-16 18:49:35
>>jonono+8m2
Maybe so. But we still have to draw the line somewhere. You can always point to the next costly car safety innovation and say that mandating that thing would improve safety.
◧◩◪◨⬒⬓⬔⧯▣
181. carom+pu6[view] [source] [discussion] 2024-05-17 18:08:04
>>Diogen+UX1
The logic does not generalize to things outside of the training set. It cannot reason about code very well, but it can write you functions with memorized docs.
replies(1): >>Diogen+vQ6
◧◩◪◨⬒⬓⬔⧯▣▦
182. Diogen+vQ6[view] [source] [discussion] 2024-05-17 20:40:40
>>carom+pu6
Unless you're saying that my exact prompt is already in ChatGPT's training set, the above is an example of successful generalization.
replies(1): >>carom+rK8
◧◩◪◨⬒⬓⬔
183. nathan+nk7[view] [source] [discussion] 2024-05-18 01:39:42
>>naaski+Qz
>There are plenty of stories of people who realized that therapy was just causing them to ruminate on their problems

This is precisely why I stopped going.

◧◩◪◨⬒⬓⬔⧯▣▦▧
184. carom+rK8[view] [source] [discussion] 2024-05-18 19:19:26
>>Diogen+vQ6
>All Xs have Ys.

>A Z is an X.

>Therefore a Z has Ys.

I am fairly certain variations of this are in the training set. The tokens that follow, about "in reality Zs do not have Ys", are due to X, Y, and Z being incongruous in the rest of the data.

It is not performing a logical calculation; it is predicting the next token.

Explanations of simple logical chains are also in the training data.

Think of it instead as a set of really good (and flexible) language templates. It can fill in the templates with different things.
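
A toy sketch of that template reading (the template string is my own illustration, not anything extracted from a model):

  # The syllogism as a string template; the claim is that the model has
  # learned something shaped like this, plus which fillers co-occur.
  TEMPLATE = ("If all {X}s have {Y}, and a {Z} is a {X}, "
              "then a {Z} has {Y}.")
  print(TEMPLATE.format(X="snake", Y="legs", Z="python"))
  print(TEMPLATE.format(X="car", Y="wheels", Z="sedan"))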

replies(1): >>Diogen+349
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
185. Diogen+349[view] [source] [discussion] 2024-05-18 22:10:05
>>carom+rK8
> It is not performing a logical calculation; it is predicting the next token.

Those two things are not in any way mutually exclusive. Understanding the logic is an effective way to accurately predict the next token.

> I am fairly certain variations of this are in the training set.

Yes, which is probably how ChatGPT learned that logical principle. It has now learned to correctly apply that logical principle to novel situations. I suspect that this is very similar to how human beings learn logic as well.

◧◩◪◨⬒⬓⬔
186. nh2342+Nwd[view] [source] [discussion] 2024-05-20 20:07:10
>>smt88+NV2
It seems dubious to analogize a pair of objects designed for each other, screw and screwdriver, with a pair like a lonely, mentally unwell person and a therapist.