zlacker

[parent] [thread] 20 comments
1. kranke+(OP)[view] [source] 2023-05-16 18:56:43
I did not expect this. Does Sam have any plans on what this could look like?
replies(2): >>ipaddr+U >>adastr+k2
2. ipaddr+U[view] [source] 2023-05-16 19:00:49
>>kranke+(OP)
Sam is a crook
replies(1): >>gumbal+33
3. adastr+k2[view] [source] 2023-05-16 19:06:58
>>kranke+(OP)
An exorbitantly large moat.
replies(1): >>intelV+93
4. gumbal+33[view] [source] [discussion] 2023-05-16 19:11:27
>>ipaddr+U
Essentially. He keeps pushing these bad sci-fi scenarios because he knows politicians are old and senile while a good portion of voters is gullible. I find it hard to believe that grown-ups are talking about an AI running amok in the context of a chatbot. Have we really become that dense as a society?
replies(1): >>hackin+m5
5. intelV+93[view] [source] [discussion] 2023-05-16 19:11:36
>>adastr+k2
And a portcullis of no less than 48B params.
6. hackin+m5[view] [source] [discussion] 2023-05-16 19:21:24
>>gumbal+33
No one thinks a chatbot will run amok. What people are worried about is the pace of progress being so fast that we cannot preempt the creation of dangerous technology without having sufficient guardrails in place long before the AI becomes potentially dangerous. This is eminently reasonable.
replies(2): >>gumbal+A7 >>diputs+la
7. gumbal+A7[view] [source] [discussion] 2023-05-16 19:28:19
>>hackin+m5
AI is software; it doesn't become, it is made. And this type of legislation won't prevent bad actors from training malicious tools.
replies(1): >>hackin+D8
8. hackin+D8[view] [source] [discussion] 2023-05-16 19:34:05
>>gumbal+A7
Your claim assumes we have complete knowledge of how these systems work and are thus in full control of their behavior in any and all contexts. But this is plainly false. We do not have anywhere near a complete mechanistic understanding of how they operate. That isn't unusual in itself; many technological advances arrived before the theory that explains them. But for AI systems that can act in the real world, this state of affairs has the potential to be very dangerous. It is important to get ahead of this danger rather than play catch-up once the danger has been demonstrated.
replies(1): >>gumbal+Ba
9. diputs+la[view] [source] [discussion] 2023-05-16 19:41:07
>>hackin+m5
Yes, thank you. AI is dangerous, but not for the sci-fi reasons, just for completely cynical and greedy ones.

Entire industries stand to be gutted, and people's careers destroyed. Even if an AI is only 80% as good, it has <1% of the cost, which is an ROI that no corporation can afford to ignore.

That's not even to mention the political implications of photo and audio deepfakes that are getting better and better by the week. Most of the obvious tells we were laughing at months ago are gone.

And before anyone makes the comparison, I would like to remind everyone that the stereotypical depiction of Luddites as small-minded anti-technology idiots is a lie. They embraced new technology, just not how it was used. Their actual complaints - that skilled workers would be displaced, that wealth and power would be concentrated in a small number of machine owners, and that overall quality of goods would decrease - have all come to pass.

In a time of unprecedented wealth disparity, general global democratic backsliding, and near-universal unease at the near-unstoppable power of a small number of corporations, we really do not want to go through another cycle of wealth consolidation. This is how we get corporate fiefdoms.

There is another path - one where our ability to live and flourish isn't directly tied to our individual economic output. But nobody wants to have that conversation.

replies(1): >>hackin+dc
10. gumbal+Ba[view] [source] [discussion] 2023-05-16 19:41:58
>>hackin+D8
The real danger right now is people like Sam Altman making policy, and an eager political class that will be long dead by the time we have to foot the bill. Everything else is bad sci-fi. We were told the same about computer viruses and how they could trigger nuclear war, and as usual the only real danger was humans and bad politics.
replies(1): >>Number+wr
11. hackin+dc[view] [source] [discussion] 2023-05-16 19:49:37
>>diputs+la
I couldn't agree more. I fear a world where 90% of people are irrelevant to the economic output of the world. Our culture takes it as axiomatic that more efficiency is good, but it's not clear to me that it is. The principal goal of society should be the betterment of people's lives. Yes, efficiency has historically been a driver of widespread prosperity, but it's not obvious that there isn't a local maximum past which increased efficiency harms the average person. We may already be on the other side of that critical point. What I don't get is why we're all just blindly barreling forward and allowing trillion-dollar companies to engage in an arms race to see how fast they can absorb productive work. The fact that few people are considering what society looks like in a future with widespread AI, and whether this is a future we want, is baffling.
replies(1): >>iavael+Jk
12. iavael+Jk[view] [source] [discussion] 2023-05-16 20:33:00
>>hackin+dc
This won’t be the first time. The First World already went through the same situation during industrialisation, when the economy no longer required 90% of the population to grow food. And this transformation still regularly happens in one third-world country or another. People worry about such changes too much. When it happens again, it won’t be a walk in the park for many people, but neither will it be a disaster.

And BTW, when people spend fewer resources to get more goods and services - that’s the definition of a prospering society. Of course, having some people change jobs because less manpower is needed to do the same amount of work is an inevitable consequence of progress.

replies(1): >>hackin+go
13. hackin+go[view] [source] [discussion] 2023-05-16 20:51:10
>>iavael+Jk
Historically, efficiency increases from technology were driven by narrow innovations or mechanisms that lowered transaction costs. This saw an explosion in the space of viable economic activity, and with it new classes of jobs and widespread growth in prosperity. Productivity and wages largely remained coupled until recent decades. Modern automation has seen productivity and wages begin to decouple, and the decoupling will only accelerate as the use of AI proliferates.

This time is different because AI has the potential to have a similar impact on efficiency across all work. In the past, efficiency gains created totally new spaces of economic activity that the original innovation could not itself reach. But AI is a ubiquitous force multiplier; there is no productive human activity that AI can't disrupt, and no analogous new space of economic activity that humanity as a whole can move to in order to stay relevant to the world's economic activity.

replies(2): >>reveri+yD >>iavael+O3d
14. Number+wr[view] [source] [discussion] 2023-05-16 21:09:31
>>gumbal+Ba
I need to make a montage of the thousands of hacker news commenters typing "The REAL danger of AI is ..." followed by some mundane issue.

I'm sorry to pick on you, but do people not get that a non-human intelligence has the potential to be such a powerful and dangerous thing that, yes, it is the real danger? If you think it's not going to be powerful, or not dangerous, please say why! Not why current models are not dangerous, but why the trend is toward something other than machine intelligence that can reason about the world better than humans can. Why is this trend of machines getting smarter and smarter going to suddenly stop?

Or if you agree that these machines are going to get smarter than us, how are we going to control them?

replies(1): >>gumbal+OD
15. reveri+yD[view] [source] [discussion] 2023-05-16 22:19:38
>>hackin+go
If humans are irrelevant to the world's "economic activity", then that economic activity should be irrelevant to humans.

We should make sure that the technology to eliminate scarcity is evenly distributed so that nobody is left poor in a world of exponentially and automatically increasing riches.

replies(1): >>hackin+HL
16. gumbal+OD[view] [source] [discussion] 2023-05-16 22:21:19
>>Number+wr
Interesting. I am of the opinion that AI is not intelligent, hence I don't see much point in entertaining the various scenarios deriving from that possibility. There is nothing dangerous in current AI models, or AI itself, other than the people controlling it. If it were intelligent, then yeah, maybe, but we are not there yet, and unless we adapt the meaning of AGI to fit a marketing narrative, we won't be there anytime soon.

But if it were intelligent, and the conclusion it reaches, once it’s done ingesting all our knowledge, is that it should be done with us, then we probably deserve it.

I mean, what kind of a species takes joy in “freeing up” people and causing mass unemployment, starts wars over petty issues, allows famine, and thrives on the exploitation of others while standing on piles of nuclear bombs? Also, we are literally destroying the planet and constantly looking for ways to dominate each other.

We probably deserve a good spanking.

replies(1): >>Number+Nn1
17. hackin+HL[view] [source] [discussion] 2023-05-16 23:14:45
>>reveri+yD
Technology doesn't in itself eliminate scarcity as long as raw materials and natural resources are scarce. In this case, all technology does is allow more efficient control over these resources and their by-products. Everyone having their own pet AGI on their cell phone doesn't materialize food or fresh water.
replies(1): >>iavael+m4d
18. Number+Nn1[view] [source] [discussion] 2023-05-17 04:54:38
>>gumbal+OD
That's easy to say in the abstract, but when it comes down to the people you love actually getting hurt, it's a lot harder.

> There is nothing dangerous in current ai models or ai itself other than the people controlling it.

Totally agree! But...

> If it were intelligent then yeah maybe but we are not there yet and unless we adapt the meaning of agi to fit a marketing narrative we wont be there anytime soon.

That's the bit where I don't agree. I don't think we can say with certainty how long it will be, and it may be just years. I never imagined we would so soon have AI that can imitate a human almost perfectly, and that can actually "understand" college-level exam questions well enough to write passing answers.

19. iavael+O3d[view] [source] [discussion] 2023-05-20 22:38:47
>>hackin+go
> This saw an explosion of the space of viable economic activity and with it new classes of jobs and a widespread growth in prosperity.

I don't see any reason why things must be different this time. Human demands are still infinite, while productivity is still limited (and, btw, meeting infinite demands with limited productivity is what economics is about). So no increase in productivity will make humans stop wanting more and close off opportunities for new markets.

> Modern automation has seen productivity and wages begin to decouple.

Could you provide any sources on this topic? This is new information to me.

replies(1): >>hackin+5Te
20. iavael+m4d[view] [source] [discussion] 2023-05-20 22:43:57
>>hackin+HL
AGI is still a concept from science fiction. If we talk about modern LLMs (which are indeed impressive), increasing food production is not what they are about. But this doesn't mean that technology doesn't help there. The Green Revolution, for example, literally made more food materialize.
21. hackin+5Te[view] [source] [discussion] 2023-05-21 18:05:04
>>iavael+O3d
>I don't see any reason why things must be different this time.

The difference is that AGI isn't a static tool. If some constraint is a limiting factor on economic activity, inventing a tool to eliminate that constraint uncorks new kinds of economic potential, and the real economy expands to exploit the new opportunities. But such tools were historically narrowly focused, so the new space of economic opportunity was left for human labor to engage with. AGI breaks this trend. Any knowledge work can in principle be captured by AGI. There is nothing "beyond" the function of AGI for human labor en masse to engage productively with.

To be clear, my point in the parent comment came from extrapolating current trends to a near-term (10-20 years) proto-AGI. LLMs as they currently stand certainly won't put 90% of people out of work. But it is severely short-sighted to refuse to consider the trends and where the increasing sophistication of generalist AIs (not necessarily AGI) is taking society.

>Could you provide any sources on this topic? This is new information to me.

Graph: https://files.epi.org/charts/img/91494-9265.png

Source: https://www.epi.org/publication/understanding-the-historic-d...
