zlacker

[parent] [thread] 170 comments
1. happyt+(OP)[view] [source] 2023-05-16 19:14:04
We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well-connected and well-heeled investors, and it is to be avoided at all costs.

It's also deeply troubling that regulatory capture is such an issue these days, so putting a government entity in front of the use and very existence of this technology is a double whammy — it's not simply about innovation.

The current generation of AIs are "scary" to the uninitiated because they are uncanny valley material, but beyond impersonation they don't show the novel intelligence of an AGI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build a regulatory lock-in on their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.

But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

And if you think this isn't an issue: I wrote this post an hour or two before I managed to get it posted, because Comcast went out at my house and we have no viable competing providers in my area. We're about to do the same thing with AI, but instead of Internet access it's future digital brains that can control all aspects of a society.

replies(23): >>noneth+E4 >>anilea+U5 >>ben_w+f9 >>oldage+Ea >>downWi+Dc >>ozi+Ff >>ameliu+uh >>stephc+xh >>SkyMar+dk >>tric+6n >>mcv+zo >>mindcr+op >>rjbwor+qp >>chrgy+yu >>crater+Tu >>kalkin+Ny >>johnal+sC >>concor+hH >>aspect+NI >>daniel+nT >>burton+WW >>fuzzfa+Z91 >>gentle+6j1
2. noneth+E4[view] [source] 2023-05-16 19:32:58
>>happyt+(OP)
This is the definition of regulatory capture. Altman should be invited to speak so that we understand the ideas in his head but anything he suggests should be categorically rejected because he’s just not in a position to be trusted. If what he suggests are good ideas then hopefully we can arrive at them in some other way with a clean chain of custody.

Although I assume if he’s speaking on AI they actually intend on considering his thoughts more seriously than I suggest.

replies(3): >>pg_123+Bd >>EGreg+mh >>brooks+vi
3. anilea+U5[view] [source] 2023-05-16 19:38:22
>>happyt+(OP)
> open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

True open source AI also strikes me as prerequisite for fair use of original works in training data. I hope Congress asks ClosedAI to explain what’s up with all that profiting off copyrighted material first before even considering the answer.

replies(1): >>happyt+fc
4. ben_w+f9[view] [source] 2023-05-16 19:53:51
>>happyt+(OP)
You're not wrong, except insofar as that's parochial.

A government-controlled… never mind artificial god, a government-controlled story teller can be devastating.

I don't buy Musk's claim that ChatGPT is "woke" (or even that the term is coherent enough to be tested), but I can say that each government requiring AI to locally adhere to national mythology will create self-reinforcing cognitive blind spots, because that already happens at the current, smaller scale of manual creation, with creators being told not to "talk the country down".

But, unless someone has a technique for structuring an AI such that it can't be evil even when you, for example, are literally specifically trying to train it to support the police no matter how authoritarian the laws are, then a fully open source AGI is almost immediately also a perfectly obedient sociopath of $insert_iq_claim_here.

I don't want to wake up to the news that some doomsday cult has used one to design or make a weapon, nor the news that a large religious group is targeting personalised propaganda against me and mine.

Fully open does that by default.

But, you're still right, if we don't grok the AI, the governments can each secretly manipulate the AI and bend it to government goals in opposition to the people.

replies(2): >>robwwi+pj >>selimt+rc1
5. oldage+Ea[view] [source] 2023-05-16 20:00:45
>>happyt+(OP)
The theater is coordinated with Congress. Lobbyists and PR types are working behind the scenes 24/7 to bring this narrative together and make them look in command to the public.

Work on open source locally hosted AI is important. I keep local clones and iterate as I can.

◧◩
6. happyt+fc[view] [source] [discussion] 2023-05-16 20:08:35
>>anilea+U5
Absolutely. It's going to absolutely shred the trademark and copyright systems, if they even apply (or are extended to apply), which is a murky area right now. And even then, the sheer volume of material created by a geometric improvement in, and subsequent cost destruction of, virtually every intellectual and artistic endeavor or product means that even if you hold the copyright or trademark, good luck paying for enforcement across the vast ocean of violations intrinsic to the shift.

What people also fail to understand is that AI is largely seen by the military-industrial complex as a weapon to control culture and influence. The most obvious risk of AI — the risk of manipulating human behavior towards favored ends — has been shown to be quite effective right out of the gate. So the back-channel conversation has to be to put it under regulation because of its weaponization potential, especially considering the difficulty of identifying anyone (which of course is exactly what Elon is doing with X 2.0 — it's a KYC ID platform to deal with this exact issue, with a 220M-user, 40B head start).

I mean, the dead internet theory is coming true, and half the traffic on the Web is already bot-driven. Imagine when it's 99%, which proliferation of this technology will inevitably produce simply for the economics.

Starting with open source is the only way to get enough people looking at the products to create any meaningful oversight, but I fear the weaponization fears will mean that everything is locked away in license clouds with politically influential regulatory boards simply on the proliferation arguments. Think of all the AI technologists who won't be versed in this technology unless they work at a "licensed company" as well — this is going to make the smaller population of the West much less influential in the AI arms race, which is already underway.

To me, it's clear that nobody in Silicon Valley or on the Hill has learned a damn thing from the prosecution of hackers and the subsequent bloodbath in cybersecurity that resulted from the exact same kinds of behavior back in the early-to-mid-2000s. We ended up driving our best and brightest into the grey and black areas of infosec and security, instead of out in the open running companies where they belong. This move would do almost the exact same thing to AI, though I think you have to be a tad of an Asimov or Bradbury fan to see it right now.

I don't know, that's just how I see it, but I'm still forming my opinions. LOVE LOVE LOVE your comment though. Spot on.

Relevant articles:

https://www.independent.co.uk/tech/internet-bots-web-traffic...

https://theconversation.com/ai-can-now-learn-to-manipulate-h....

replies(1): >>simonh+Qh
7. downWi+Dc[view] [source] 2023-05-16 20:10:35
>>happyt+(OP)
Open source doesn't mean outside the reach of regulation, which I would guess is your real desire. You downplay AI's potential danger while well knowing that we are at a historic inflection point. I believe in democracy as the worst form of government except all those other forms that have been tried. We the people must be in control of our destiny.
replies(1): >>happyt+hf
◧◩
8. pg_123+Bd[view] [source] [discussion] 2023-05-16 20:16:15
>>noneth+E4
There is also growing speculation that the current level of AI may have peaked in a bang-for-buck sense.

If this is so, and given the concrete examples of cheap derived models learning from the first movers and rapidly (and did I mention cheaply) closing the gap to this peak, the optimal self-serving corporate play is to invite regulation.

After the legislative moats go up, it is once again about who has the biggest legal team ...

replies(4): >>robwwi+vh >>TheDud+tm >>yarg+fG >>daniel+PD1
◧◩
9. happyt+hf[view] [source] [discussion] 2023-05-16 20:23:41
>>downWi+Dc
Hear, hear. Excellent point, and I don't mean to imply it shouldn't be regulated. However, it has been my general experience that concentrating immense power in governments doesn't typically lead to more security, so perhaps we just have a difference of philosophy.

Democracy will not withstand AI when it's fully developed. Let me offer a better-written explanation of my general views than I could ever muster up for a comment on HN, in the form of a quote from an article by Dr. Thorsten Thiel (Head of the Research Group "Democracy and Digitalization" at the Weizenbaum Institute for the Networked Society):

> The debate on AI’s impact on the public sphere is currently the one most prominent and familiar to a general audience. It is also directly connected to long-running debates on the structural transformation of the digital public sphere. The digital transformation has already paved the way for the rise of social networks that, among other things, have intensified the personalization of news consumption and broken down barriers between private and public conversations. Such developments are often thought to be responsible for echo-chamber or filter-bubble effects, which in turn are portrayed as root causes of the intensified political polarization in democracies all over the world. Although empirical research on filter bubbles, echo chambers, and societal polarization has convincingly shown that the effects are grossly overestimated and that many non-technology-related reasons better explain the democratic retreat, the spread of AI applications is often expected to revive the direct link between technological developments and democracy-endangering societal fragmentation.

> The assumption here is that AI will massively enhance the possibilities for analyzing and steering public discourses and/or intensify the automated compartmentalizing of will formation. The argument goes that the strengths of today's AI applications lie in the ability to observe and analyze enormous amounts of communication and information in real time, to detect patterns and to allow for instant and often invisible reactions. In a world of communicative abundance, automated content moderation is a necessity, and commercial as well as political pressures further effectuate that digital tools are created to oversee and intervene in communication streams. Control possibilities are distributed between users, moderators, platforms, commercial actors and states, but all these developments push toward automation (although they are highly asymmetrically distributed). Therefore, AI is baked into the backend of all communications and becomes a subtle yet enormously powerful structuring force.

> The risk emerging from this development is twofold. On the one hand, there can be malicious actors who use these new possibilities to manipulate citizens on a massive scale. The Cambridge Analytica scandal comes to mind as an attempt to read and steer political discourses (see next section on electoral interference). The other risk lies in a changing relationship between public and private corporations. Private powers are becoming increasingly involved in political questions and their capacity to exert opaque influences over political processes has been growing for structural and technological reasons. Furthermore, the reshaping of the public sphere via private business models has been catapulted forward by the changing economic rationality of digital societies such as the development of the attention economy. Private entities grow stronger and become less accountable to public authorities; a development that is accelerated by the endorsement of AI applications which create dependencies and allow for opacity at the same time. The ‘politicization’ of surveillance capitalism lies in its tendency, as Shoshana Zuboff has argued, to not only be ever more invasive and encompassing but also to use the data gathered to predict, modify, and control the behavior of individuals. AI technologies are an integral part in this ‘politicization’ of surveillance capitalism, since they allow for the fulfilment of these aspirations. Yet at the same time, AI also insulates the companies developing and deploying it from public scrutiny through network effects on the one hand and opacity on the other. AI relies on massive amounts of data and has high upfront costs (for example, the talent required to develop it, and the energy consumed by the giant platforms on which it operates), but once established, it is very hard to tame through competitive markets. Although applications can be developed by many sides and for many purposes, the underlying AI infrastructure is rather centralized and hard to reproduce. As in other platform markets, the dominant players are those able to keep a tight grip on the most important resources (models and data) and to benefit from every individual or corporate user. Therefore, we can already see that AI development tightens the grip of today’s internet giants even further. Public powers are expected to make increasing use of AI applications and therefore become ever more dependent on the actors that are able to provide the best infrastructure, although this infrastructure, for commercial and technical reasons, is largely opaque.

> The developments sketched out above – the heightened manipulability of public discourse and the fortification of private powers – feed into each other, with the likely result that many of the deficiencies already visible in today’s digital public spheres will only grow. It is very hard to estimate whether these developments can be counteracted by state action, although a regulatory discourse has kicked in and the assumption that digital matters elude the grasp of state regulation has often been proven wrong in the history of networked communication. Another possibility would be a creative appropriation of AI applications through users whose democratic potential outweighs its democratic risks thus enabling the rise of differently structured, more empowering and inclusive public spaces. This is the hope of many of the more utopian variants of AI and of the public sphere literature, according to which AI-based technologies bear the potential of granting individuals the power to navigate complex, information-rich environments and allowing for coordinated action and effective oversight (e.g. Burgess, Zarkadakis).

Source: https://us.boell.org/en/2022/01/06/artificial-intelligence-a...

Social bots and deep fakes — the primary technologies being talked about in terms of whether democracy can survive — will be so good so quickly that I doubt there will be another election without extensive use of these technologies in a true plethora of capacities, from influence marketing to outright destabilization campaigns. I'm not sure what government can deal with a threat like that, but I suspect the recent push to revise tax systems and create a single global standard for multinational taxation, recently the subject of an excellent talk at the WEF, is more than tangentially related to the AI debate.

So, is it a transformational technology that will liberate mankind, or a nuclear bomb? Because ultimately, this is the question in my mind.

Excellent comment, and I agree with your sentiment. I just don't think concentrating control of the technology before it's really developed is wise or prudent.

replies(2): >>vortex+wi >>downWi+KB
10. ozi+Ff[view] [source] 2023-05-16 20:25:40
>>happyt+(OP)
How do you even end up enforcing licensing here? It's only a matter of time before something as capable as GPT-4 works on a cell phone.
replies(1): >>stuckk+5J
◧◩
11. EGreg+mh[view] [source] [discussion] 2023-05-16 20:36:01
>>noneth+E4
I remember when a different Sam — Mr. Bankman-Fried — came to testify and asked a different government agency, the CFTC, to oversee cryptocurrency and put regulations and licenses in place.

AI is following the path of Web3

replies(2): >>mschus+cu >>smcin+oC
12. ameliu+uh[view] [source] 2023-05-16 20:36:33
>>happyt+(OP)
> But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them

Except ... when you look at the problem from a military/national security viewpoint. Do we really want to give this tech away just like that?

replies(2): >>explor+fk >>vinay_+kk
◧◩◪
13. robwwi+vh[view] [source] [discussion] 2023-05-16 20:36:34
>>pg_123+Bd
Counterpoint — there is growing speculation we are just about to transition to AGI.
replies(2): >>Eamonn+Ph >>causal+0l
14. stephc+xh[view] [source] 2023-05-16 20:36:45
>>happyt+(OP)
This is the same move SBF was trying to make: get all cozy with the people spending their time in the halls of power, telling them what they want to hear, posturing as the good guy.

He is playing the game. This guy's ambition is colossal; I don't blame him, but we should not give him too much power.

replies(2): >>willia+Rq >>concor+HH
◧◩◪◨
15. Eamonn+Ph[view] [source] [discussion] 2023-05-16 20:38:23
>>robwwi+vh
Growing? Or have the same voices who have been saying it since the aughts suddenly been platformed?
replies(3): >>TeMPOr+6t >>jack_p+Pw >>lostms+oP
◧◩◪
16. simonh+Qh[view] [source] [discussion] 2023-05-16 20:38:33
>>happyt+fc
> What people also fail to understand is that AI is largely seen by the military-industrial complex as a weapon to control culture and influence.

Could you share the minutes from the Military Industrial Complex strategy meetings this was discussed at. Thanks.

replies(1): >>happyt+8j
◧◩
17. brooks+vi[view] [source] [discussion] 2023-05-16 20:41:53
>>noneth+E4
I'm not following this "good ideas must come from an ideologically pure source" thing.

Shouldn't we be evaluating ideas on the merits and not categorically rejecting (or endorsing) them based on who said them?

replies(6): >>briant+Ml >>samsta+am >>parent+H01 >>dreamc+t51 >>bcrosb+ha1 >>dTal+nK1
◧◩◪
18. vortex+wi[view] [source] [discussion] 2023-05-16 20:42:05
>>happyt+hf
*Hear, hear.
replies(1): >>happyt+xj
◧◩◪◨
19. happyt+8j[view] [source] [discussion] 2023-05-16 20:44:54
>>simonh+Qh
"Hello, is this Lockheed? Yea? I'm an intern for happytiger on Hackernews. Some guy named Simon H. wants the meeting minutes for the meeting where we discussed the weaponization potential for AI."

[pause]

"No? Ok, I'll tell him."

replies(1): >>simonh+Ol1
◧◩
20. robwwi+pj[view] [source] [discussion] 2023-05-16 20:46:29
>>ben_w+f9
> I can say that each government requiring AI to locally adhere to national mythology will create self-reinforcing cognitive blind spots, because that already happens at the current, smaller scale of manual creation, with creators being told not to "talk the country down".

This is a key point. Every culture and agency and state will want (and deserve) their own homespun AGI. But can we all learn to accommodate or accept a cultural multiverse when money and resources are zero-sum in many dimensions?

Hannu Rajaniemi's Quantum Thief trilogy gives you a foretaste of where we could end up.

replies(1): >>vinay_+rl
◧◩◪◨
21. happyt+xj[view] [source] [discussion] 2023-05-16 20:47:03
>>vortex+wi
Thank you. Corrected.
22. SkyMar+dk[view] [source] 2023-05-16 20:50:14
>>happyt+(OP)
> To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies.

If you actually watch the entire session, Altman does address that and recommends to Congress that regulations 1) not be applied to small startups, individual researchers, or open source, and 2) not be written in such a way as to lock in a few big vendors. Some of the Senators on the panel also expressed concern about #2.

replies(3): >>pauldd+Vp >>chasd0+lw >>api+dG
◧◩
23. explor+fk[view] [source] [discussion] 2023-05-16 20:50:18
>>ameliu+uh
Is military-capable AI in the hands of a few militaries safer than in the hands of many? Or is it more likely to be used to bully other countries that don't have it? If it is used to oppress, would we want the oppressed to have access to it? Or do we fear that it gives too much of an advantage to small cells of extremists carrying out their goals? I can think of pros and cons on both sides.
replies(3): >>code_w+Uo >>anthon+ny >>jacurt+yG
◧◩
24. vinay_+kk[view] [source] [discussion] 2023-05-16 20:50:48
>>ameliu+uh
If you mean the US by 'we', it is problematic because AI inventions are happening all over the globe, much more outside the US than inside.
replies(1): >>behnam+Jl
◧◩◪◨
25. causal+0l[view] [source] [discussion] 2023-05-16 20:53:00
>>robwwi+vh
Growing among whom? The more I learn about and use LLMs, the more convinced I am we're at a local maximum and the only way they're going to improve is by getting smaller and cheaper to run. They're still terrible at logical reasoning.

We're going to get some super cool and some super dystopian stuff out of them but LLMs are never going to go into a recursive loop of self-improvement and become machine gods.

replies(6): >>ben_w+Bo >>TeMPOr+Gr >>behnam+2t >>bernie+AH >>stuckk+CI >>ux-app+1K
◧◩◪
26. vinay_+rl[view] [source] [discussion] 2023-05-16 20:55:51
>>robwwi+pj
Quantum Thief has a 3.8 on Goodreads. Worth reading?
replies(1): >>knewte+gP
◧◩◪
27. behnam+Jl[view] [source] [discussion] 2023-05-16 20:58:28
>>vinay_+kk
Name one significant advance in the field of LLMs that happened outside the US. Basically all the scientific papers came from Stanford, CMU, and other US universities. And the major players in the field are all American companies (OpenAI + Microsoft, Google, Anthropic, etc.)
replies(2): >>code_w+rp >>Improb+zc1
◧◩◪
28. briant+Ml[view] [source] [discussion] 2023-05-16 20:58:43
>>brooks+vi
> Shouldn't we be evaluating ideas on the merits and not categorically rejecting (or endorsing) them based on who said them?

The problem is that when only the entrenched industry players & legislators have a voice, many ideas & perspectives are simply not heard or considered. Industrial groups have a long history of using regulations to entrench their positions & stifle competition... creating a "barrier to entry", as they say. Going beyond that, industrial groups have shaped public perception & the regulatory apparatus to effectively create a company store, where the only solutions to some problems effectively (or sometimes legally) must go through a small set of large companies.

This concern is especially pertinent now, as these technologies are unprecedentedly disruptive to many industries & to private life. Using worst-case-scenario fear mongering as a justification to regulate the vast majority of usage that will not come close to these fears is disingenuous & almost always an overreach of governance.

replies(2): >>samsta+Cm >>chii+3U
◧◩◪
29. samsta+am[view] [source] [discussion] 2023-05-16 21:00:11
>>brooks+vi
Aside from who is saying them, the premise holds water.

AI is beyond borders, and thus unenforceable in practice.

The top-minds-of-AI are a group that cannot be regulated.

-

AI isn't about the industries it shall disrupt; AI is the policy-makers it will expose.

THAT is what they are afraid of.

--

I have been able to run financial lenses over organizations - analysis that would have taken me weeks or months even with rudimentary BI - and find insights in minutes.

AI regulation right now, in this infancy, is about damage control.

---

It's the same as the legal weed market. You think Bain Capital just all of a sudden decided to jump into the market without setting up their spigot?

Do you think that Halliburton under Cheney was able to set up its supply chains without Cheney at the head of KBR/Halliburton/CIA/etc...?

Yeah, this is the same play ; AI is going to be squashed until they can use it to profit over you.

Have you watched ANIME ever? Yeah... it's here now.

replies(2): >>turtle+ZP >>throwa+8V
◧◩◪
30. TheDud+tm[view] [source] [discussion] 2023-05-16 21:02:36
>>pg_123+Bd
Why? Because there haven't been any new developments in the last week? Oh wait, there have.
◧◩◪◨
31. samsta+Cm[view] [source] [discussion] 2023-05-16 21:03:24
>>briant+Ml
I can only say +1, and I know how much HN hates that, but ^This.
32. tric+6n[view] [source] 2023-05-16 21:05:20
>>happyt+(OP)
> seems like a power grab to me

If you're not at the table, you're on the menu.

33. mcv+zo[view] [source] 2023-05-16 21:16:03
>>happyt+(OP)
And how can the government license AI? Do they have any expertise to determine who is and isn't responsible enough to handle it?

A better idea is to regulate around the edges: transparency about the data used for training, rules on the use of copyrighted training data and what that means for the copyright of content produced by the AI, that sort of stuff. (I think the EU is considering that, which makes sense.) But saying some organisations are allowed to work on AI while others aren't sounds like the worst possible idea.

replies(1): >>kelsey+ip
◧◩◪◨⬒
34. ben_w+Bo[view] [source] [discussion] 2023-05-16 21:16:08
>>causal+0l
> They're still terrible at logical reasoning.

Are they even trying to be good at that? Serious question; using an LLM as a logical processor is as wasteful and as well-suited as using the Great Pyramid of Giza as an AirBnB.

I've not tried this, but I suspect the best way is more like asking the LLM to write a Coq script for the scenario, instead of trying to get it to solve the logic directly.
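
Something like this minimal, untested sketch (assuming the 2023-era openai Python bindings; the prompt, model name, and file names are just illustrative):

    # Ask the LLM to emit a formal artifact (a Coq script) instead of
    # reasoning directly, then let an external checker verify it.
    import subprocess
    import openai

    scenario = "All ravens are black. Tweety is a raven. Is Tweety black?"
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Formalize and prove this in Coq. "
                              "Output only Coq code:\n" + scenario}],
    )
    with open("claim.v", "w") as f:
        f.write(resp.choices[0].message.content)

    # Trust shifts from the LLM to the proof checker: coqc either
    # accepts the script or rejects it, however fluent it sounds.
    result = subprocess.run(["coqc", "claim.v"], capture_output=True)
    print("verified" if result.returncode == 0 else "rejected")

The nice property is that a hallucinated "proof" simply fails to compile.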

replies(2): >>staunt+3w >>fsckbo+kI
◧◩◪
35. code_w+Uo[view] [source] [discussion] 2023-05-16 21:17:28
>>explor+fk
>Is military capable AI in the hands of few militaries safer than in the hands of many?

Yes. It is. I'm sure hostile, authoritarian states that are willing to wage war with the world like Russia and North Korea will eventually get their hands on military-grade AI. But the free world should always strive to be two steps ahead.

Even having ubiquitous semi-automatic rifles is a huge problem in America. I'm sure Cliven Bundy or Patriot Front would do everything they can to close the gap with intelligent/autonomous weapons, or even just autonomous bots hacking America's infrastructure. If everything is freely available, what would be stopping them?

replies(1): >>explor+Jq
◧◩
36. kelsey+ip[view] [source] [discussion] 2023-05-16 21:19:16
>>mcv+zo
Citizen, please step away from the terminal; you are not licensed to multiply matrices that large.
37. mindcr+op[view] [source] 2023-05-16 21:19:47
>>happyt+(OP)
> To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well-connected and well-heeled investors, and it is to be avoided at all costs.

Exactly. Came here to say pretty much the same thing.

This is the antithesis of what we need. As AI develops, it's imperative that AI be something that is open and available to everyone, so all of humanity can benefit from it. The extent to which technology tends to exacerbate concentration of power is bad enough as it is - the last thing we need is more regulation intended to make that effect even stronger.

replies(1): >>Aeolun+2W
38. rjbwor+qp[view] [source] 2023-05-16 21:19:53
>>happyt+(OP)
I said this about Sam Altman and OpenAI years ago and got pooh-poohed repeatedly in various fora. "But It's OPEN!" "But it's a non-profit!" "But they're the good guys!"

And here we are - Sam trying to lock down his first-mover advantage with the boot heel of the state, for profit. It's fucking disgusting.

replies(2): >>jacurt+NC >>parent+W01
◧◩◪◨
39. code_w+rp[view] [source] [discussion] 2023-05-16 21:19:59
>>behnam+Jl
Not to mention access to chips. That's becoming more and more difficult for uncooperative states like China and Russia.
replies(1): >>vinay_+Ml1
◧◩
40. pauldd+Vp[view] [source] [discussion] 2023-05-16 21:22:21
>>SkyMar+dk
That's what is said...
◧◩◪◨
41. explor+Jq[view] [source] [discussion] 2023-05-16 21:26:15
>>code_w+Uo
Your post conveniently ignores the current state of China's AI development but mentions Russia and North Korea. That's an interesting take. There's no guarantee that we are or will continue to be one or even two steps ahead. And what keeps the groups with rifles you mentioned in check? They already have the capability to fight with violence. But there currently exists a counter-balance in the fact they'll get shot at back if they tried to use them. Not trying to take a side here one way or the other. I think there are real fears here. But I also don't think it's this black and white either.
◧◩
42. willia+Rq[view] [source] [discussion] 2023-05-16 21:26:58
>>stephc+xh
Have you tried watching actual soap operas?
◧◩◪◨⬒
43. TeMPOr+Gr[view] [source] [discussion] 2023-05-16 21:30:39
>>causal+0l
> The more I learn about and use LLMs, the more convinced I am we're at a local maximum

Not sure why you would believe that.

Inside view: qualitative improvements LLMs made at scale took everyone by surprise; I don't think anyone understands them enough to make a convincing argument that LLMs have exhausted their potential.

Outside view: what local maximum? Wake me up when someone else makes an LLM comparable in performance to GPT-4. Right now, there is no local maximum. There's one model far ahead of the rest, and that model is actually below its peak performance - a side effect of OpenAI lobotomizing it with aggressive RLHF. The only thing remotely suggesting we shouldn't expect further improvements is... OpenAI saying they kinda want to try some other things, and (pinky swear!) aren't training GPT-4's successor.

> and the only way they're going to improve is by getting smaller and cheaper to run.

Meaning they'll be easier to chain. The next big leap could in fact be a bunch of compressed, power-efficient LLMs talking to each other. Possibly even managing their own deployment.

> They're still terrible at logical reasoning.

So is your unconscious / system 1 / gut feel. LLMs are less like one's whole mind, and much more like one's "inner voice". Logical skills aren't automatic; they're algorithmic. Who knows what the limit is of a design in which an LLM as "system 1" operates a much larger, symbolic, algorithmic suite of "system 2" software? We're barely scratching the surface here.

◧◩◪◨⬒
44. behnam+2t[view] [source] [discussion] 2023-05-16 21:40:56
>>causal+0l
My thoughts exactly. It's hard to see the signal among all the noise surrounding LLMs. Even if they say they're going to hurt you, they have no idea what it means to hurt, what "you" is, or how they would achieve that goal. They just spit out things that resemble what people have said online. There's no harm from a language model that's literally a "language" model.
replies(2): >>visarg+rx >>forget+dy
◧◩◪◨⬒
45. TeMPOr+6t[view] [source] [discussion] 2023-05-16 21:41:41
>>Eamonn+Ph
Yes, growing. It's not that the Voices have suddenly been "platformed" - it's that the field made a bunch of rapid jumps which made the message of those Voices more timely.

Recent developments in AI only further confirm that the logic of the message is sound, and it's just that people are afraid of the conclusions. Everyone has their limit for how far to extrapolate from first principles before giving up and believing what they would like to be true. It seems that for a lot of people in the field, AGI X-risk is now below that extrapolation limit.

replies(4): >>b112+gC >>rtkwe+eF >>2muchc+Xb1 >>dhkk+Qf1
◧◩◪
46. mschus+cu[view] [source] [discussion] 2023-05-16 21:47:56
>>EGreg+mh
At least AI has legitimate, actual use cases.
replies(1): >>EGreg+lW
47. chrgy+yu[view] [source] 2023-05-16 21:50:00
>>happyt+(OP)
I would triple-upvote this comment. 100%. It seems like a group of elite AI companies who already stole the data from the internet are going to decide who does what! We need to regulate only the big players, and allow small players to do whatever they want.
48. crater+Tu[view] [source] 2023-05-16 21:51:58
>>happyt+(OP)
Yes, this is the first-to-market leaders wanting to raise the barriers to entry to lock out competition.
replies(1): >>stuckk+lJ
◧◩◪◨⬒⬓
49. staunt+3w[view] [source] [discussion] 2023-05-16 21:58:21
>>ben_w+Bo
Indeed, AI reinforcement-learning to deal with formal verification is what I'm looking forward to the most. Unfortunately it seems a very niche endeavour at the moment.
◧◩
50. chasd0+lw[view] [source] [discussion] 2023-05-16 21:59:33
>>SkyMar+dk
> not be applied to small startups

How will that work? Isn't OpenAI itself a small startup? I don't see how they can regulate AI at all. Sure, the resources required to push the limits are high right now, but hardware is constantly improving and getting cheaper. I can take the GPUs out of my kids' computers and start doing fairly serious AI work myself. Do I need a license? The cat is out of the bag; there's no stopping it now.

◧◩◪◨⬒
51. jack_p+Pw[view] [source] [discussion] 2023-05-16 22:02:49
>>Eamonn+Ph
When the sky is turning a dark shade of red, it makes sense to hear out the doomsayers.
replies(1): >>matwoo+zx
◧◩◪◨⬒⬓
52. visarg+rx[view] [source] [discussion] 2023-05-16 22:06:31
>>behnam+2t
A language model can do many things based on language instructions, some harmless, some harmful. They are both instructable and teachable. Depending on the prompt, they are not just harmless LLMs.
◧◩◪◨⬒⬓
53. matwoo+zx[view] [source] [discussion] 2023-05-16 22:07:05
>>jack_p+Pw
And the vast majority of the time it's just a nice sunset.
replies(2): >>jack_p+aA >>nullse+GP
◧◩◪◨⬒⬓
54. forget+dy[view] [source] [discussion] 2023-05-16 22:11:02
>>behnam+2t
You appear to be ignoring a few thousand years of recorded history around what happens when a demagogue gets a megaphone. Human-powered astroturf campaigns were all it took to get randoms convinced lizard people are an existential threat and then -act- on that belief.
replies(2): >>nullse+oF >>goatlo+Va1
◧◩◪
55. anthon+ny[view] [source] [discussion] 2023-05-16 22:11:37
>>explor+fk
Within a few decades there will probably be technology that would allow a semi-dedicated person to engineer and create a bioweapon from scratch if the code were available online. Do you think that's a good idea?
replies(1): >>explor+Ez
56. kalkin+Ny[view] [source] 2023-05-16 22:13:42
>>happyt+(OP)
The current generation of AIs are scary to a lot of the initiated, too - both for what they can do now, and what their trajectory of improvement implies.

If you take seriously any of the downsides, whether misinformation or surveillance or laundered bias or x-risk, how does making AI model weights or training data open source solve them? Open source is a lot of things, but one thing it's not is misuse-resistant (and the "with many eyes all bugs are shallow" thing hasn't proved true in practice even with high-level code, much less giant matrices and terabytes of text). Is there a path forward that doesn't involve either a lot of downside risk (even if mostly for people who aren't on HN and interested in tinkering with frontier models themselves, in the worlds where surveillance or bias is the main problem), or significant regulation?

I don't particularly like or trust Altman but I don't think he'd be obviously less self-serving if he were to oppose any regulation.

replies(1): >>jacurt+aI
◧◩◪◨
57. explor+Ez[view] [source] [discussion] 2023-05-16 22:18:51
>>anthon+ny
Within a few decades there will probably be technology that would allow a semi-dedicated person to engineer and create a vaccine or medical treatment from scratch if the code was available online. Do you think that's a good idea?
◧◩◪◨⬒⬓⬔
58. jack_p+aA[view] [source] [discussion] 2023-05-16 22:22:15
>>matwoo+zx
a sunset at lunch time hits different
◧◩◪
59. downWi+KB[view] [source] [discussion] 2023-05-16 22:32:33
>>happyt+hf
It's possible that the tsunami of fakes is going to break down trust in a beneficial way where people only believe things they've put effort into verifying.
◧◩◪◨⬒⬓
60. b112+gC[view] [source] [discussion] 2023-05-16 22:35:54
>>TeMPOr+6t
> it's that the field made a bunch of rapid jumps

I wish I knew what we really have achieved here. I try to talk to these things, via the turbo-3.5 API, and all I get is broken logic and twisted moral reasoning, all due to OpenAI manually breaking their creation.

I don't understand their whole filter business. It's like we found a 500-year-old nude painting, a masterpiece, and 1800s puritans painted a dress on it.

I often wonder if the filter is more to hide its true capabilities.

replies(4): >>stuckk+7I >>andyba+LI >>ux-app+IJ >>TeMPOr+aU
◧◩◪
61. smcin+oC[view] [source] [discussion] 2023-05-16 22:36:51
>>EGreg+mh
That was entirely different: a play to muddy the regulatory waters and maybe buy him time. The CFTC is much smaller (budget, staff) than the SEC, and less aggressive in criminal enforcement. It was aided by a bill introduced by crypto-friendly Sens. Lummis and Gillibrand [https://archive.ph/vqHgC].
62. johnal+sC[view] [source] 2023-05-16 22:37:01
>>happyt+(OP)
Happy Tiger I will remember, because I agree totally. Yes, "OpenAI/Microsoft" is the right way to think about this attempt.
◧◩
63. jacurt+NC[view] [source] [discussion] 2023-05-16 22:40:00
>>rjbwor+qp
As a wise person once said

> You either die a hero, or live long enough to become the villain

Sam Altman has completed the full character arc.

replies(1): >>jivetu+hF
◧◩◪◨⬒⬓
64. rtkwe+eF[view] [source] [discussion] 2023-05-16 22:58:50
>>TeMPOr+6t
What are the actual new advancements? LLMs, to me, are great at faking AGI but are nowhere near actually being a workable general AI. The biggest example to me: you can correct even the newest ChatGPT and ask it to be truthful, but it'll make up the same lie within the same continuous conversation. IMO the difference between being able to act truth-y and actually being truthful is a huge gap that involves the core of what separates an actual AGI from a really good chatbot.

Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.

replies(2): >>ux-app+1J >>canjob+vc1
◧◩◪
65. jivetu+hF[view] [source] [discussion] 2023-05-16 22:59:07
>>jacurt+NC
Yeah, sorry: that is a statement about leadership and the responsibility to make the "tough decisions", like going to war, or deciding who the winners and losers are when setting a budget that everyone contributed to via taxes. NOT a statement meant to whitewash VC playbooks.
replies(1): >>darker+Kd1
◧◩◪◨⬒⬓⬔
66. nullse+oF[view] [source] [discussion] 2023-05-16 22:59:34
>>forget+dy
I think I'm just going to build and open source some really next gen astroturf software that learns continuously as it debates people online in order to get better at changing people's minds. I'll make sure to include documentation in Russian, Chinese and Corporate American English.

What would a good name be? TurfChain?

I'm serious. People don't believe this risk is real. They keep hiding it behind some nameless, faceless 'bad actor', so let's just make it real.

I don't need to use it. I'll just release it as a research project.

replies(2): >>forget+u31 >>edgyqu+wh1
◧◩
67. api+dG[view] [source] [discussion] 2023-05-16 23:05:31
>>SkyMar+dk
That would make the regulations fairly pointless unless you think only mega-corps will ever be able to afford the compute for these things.

Compute continues to get cheaper and cheaper. We have not hit the physics wall yet on that.

That, and if someone cracks efficient distributed training in a swarm-type configuration, then you could train models SETI@home-style. Lots of people would be happy to leave a gaming PC on to help create open source LLMs. The data requirements might be big, but I just got gigabit fiber installed in my house, so that barrier is vanishing too.
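
In spirit, the training step is just gradient averaging across volunteer nodes. A toy sketch, with all the hard parts (bandwidth, stragglers, poisoned updates) elided and the names made up:

    import numpy as np

    # Toy SETI@home-style step: each volunteer computes a gradient on
    # its own local data shard; a coordinator averages the updates.
    # Real swarm training (e.g. the Hivemind project) needs gradient
    # compression, fault tolerance, and defenses against bad actors.
    def swarm_step(weights, volunteer_grads, lr=0.01):
        return weights - lr * np.mean(volunteer_grads, axis=0)

    weights = np.zeros(1000)
    grads = [np.random.randn(1000) for _ in range(200)]  # 200 volunteers
    weights = swarm_step(weights, grads)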

replies(2): >>nullse+3Q >>zamnos+ZX
◧◩◪
68. yarg+fG[view] [source] [discussion] 2023-05-16 23:05:50
>>pg_123+Bd
There's no chance that we've peaked in a bang-for-buck sense - we still haven't adequately investigated sparse networks.

Relevantish: https://arxiv.org/abs/2301.00774

The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.

Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.

If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
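
For a concrete baseline, one-shot magnitude pruning amounts to just this (a toy numpy sketch, not the method in the paper linked above):

    import numpy as np

    # One-shot global magnitude pruning: zero the smallest-magnitude
    # weights until the requested sparsity is reached.
    def magnitude_prune(w, sparsity):
        k = int(w.size * sparsity)               # weights to drop
        thresh = np.sort(np.abs(w), axis=None)[k]
        return w * (np.abs(w) >= thresh)

    w = np.random.randn(1024, 1024)
    w90 = magnitude_prune(w, 0.9)                # keep ~10% of weights
    print(np.count_nonzero(w90) / w90.size)      # ~0.1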

replies(3): >>alexel+GI >>stephc+pN >>cma+cO
◧◩◪
69. jacurt+yG[view] [source] [discussion] 2023-05-16 23:07:57
>>explor+fk
That is a valid thought experiment. I would say it isn't too dissimilar from nuclear weapons. A handful of powerful countries have access to this and any smaller country doesn't. It creates a large separation between 1st world countries and "everyone else".
70. concor+hH[view] [source] 2023-05-16 23:11:28
>>happyt+(OP)
I'll note that you're correct for current-gen LLMs, but we could have actually dangerous things in the future that would indeed need regulating.
◧◩◪◨⬒
71. bernie+AH[view] [source] [discussion] 2023-05-16 23:13:14
>>causal+0l
I’m agreeing with this viewpoint the more I use LLMs.

They’re text generators that can generate compelling content because they’re so good at generating text.

I don’t think AGI will arise from a text generator.

◧◩
72. concor+HH[view] [source] [discussion] 2023-05-16 23:13:43
>>stephc+xh
Seems like speculation on very thin grounds to me.
replies(2): >>stephc+qM >>csomar+Bh1
◧◩◪◨⬒⬓⬔
73. stuckk+7I[view] [source] [discussion] 2023-05-16 23:15:58
>>b112+gC
Lately I have been picturing comments, and this is truly iconic haha.
◧◩
74. jacurt+aI[view] [source] [discussion] 2023-05-16 23:16:30
>>kalkin+Ny
I feel like the people that are most nervous about AI are the ones that don't understand it at all and those that understand it the most.

The laypeople in the middle, who have been happily plugging prompts into ChatGPT while claiming to be "prompt experts", are the ones most excited.

For those that truly understand AI, there is a lot that you should genuinely be worried about. Now, don't confuse that for saying that we shouldn't work on it or should abandon AI work. I truly believe that this is the next greatest revolution. This is 1,000x more transformative than the industrial revolution, and 100x more transformative than the internet revolution. But it is worth a brief consideration of the effects of our work before we start running into these changes that could have drastic effects on everybody's daily life.

◧◩◪◨⬒⬓
75. fsckbo+kI[view] [source] [discussion] 2023-05-16 23:17:17
>>ben_w+Bo
> using the Great Pyramid of Giza as an AirBnB

, were you allowed to do it, would be an extremely profitable venture. Taj Mahal too, and yes, I know it's a mausoleum.

replies(1): >>ben_w+mM1
◧◩◪◨⬒
76. stuckk+CI[view] [source] [discussion] 2023-05-16 23:18:38
>>causal+0l
I was looking at the A100 80GB cards. $14k a pop. We're gonna see another GPU shortage when these models become less resource-dependent. CRYPTO era all over again.
◧◩◪◨
77. alexel+GI[view] [source] [discussion] 2023-05-16 23:19:04
>>yarg+fG
I don't think you really disagree with GP? I think the argument is that we peaked on "throw GPUs at it".

We have all kinds of advancements to make training cheaper, models computationally cheaper, smaller, etc.

Once that happens/happened, it benefits OAI to throw up walls via legislation.

replies(2): >>Neverm+SO >>yarg+9V1
◧◩◪◨⬒⬓⬔
78. andyba+LI[view] [source] [discussion] 2023-05-16 23:19:23
>>b112+gC
You seem to be making several points at once and I'm not sure they all join up?
79. aspect+NI[view] [source] 2023-05-16 23:19:31
>>happyt+(OP)
> We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

This, absolutely this. I am really concerned about his motives in this case. AI has massive potential to improve the world. I find it highly suspicious that an exec at one of the leading companies in AI right now wants to lock it up. (Ever read the intro to Max Tegmark's book?)

◧◩◪◨⬒⬓⬔
80. ux-app+1J[view] [source] [discussion] 2023-05-16 23:21:24
>>rtkwe+eF
>Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.

If you had described GPT to me 2 years ago, I would have said no way, we're still a long way from a machine that can fluidly and naturally converse in natural language and perform arbitrary logic and problem solving. And yet here we are.

I very much doubt that in 5 years' time we'll be talking about how GPT peaked in 2023.

replies(2): >>q7xvh9+mJ >>rtkwe+hZ
◧◩
81. stuckk+5J[view] [source] [discussion] 2023-05-16 23:21:46
>>ozi+Ff
Not necessarily "on" a mobile device. It would be data-driven with the help of 10G. Mobile makers will not allow that kind of power in our hands. =p And of course, it will be subscription-driven, like GPT Plus haha
◧◩
82. stuckk+lJ[view] [source] [discussion] 2023-05-16 23:23:46
>>crater+Tu
Sam is a snake. The goal is to fuck everyone else. He is scared that someone will beat his tech and the hype will be gone. Which is going to happen. A matter of months.
replies(1): >>crater+9m1
◧◩◪◨⬒⬓⬔⧯
83. q7xvh9+mJ[view] [source] [discussion] 2023-05-16 23:24:18
>>ux-app+1J
Seriously. It's worth pausing for a minute to note that the Turing Test has been entirely solved.

In fact, it has been so thoroughly solved that anyone can download an open-source solution and run it on their computer.

And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?"

replies(3): >>eroppl+4L >>canjob+bd1 >>2muchc+Zd1
◧◩◪◨⬒⬓⬔
84. ux-app+IJ[view] [source] [discussion] 2023-05-16 23:26:31
>>b112+gC
>I often wonder if the filter is more to hide its true capabilities.

right now we're all sharing a slice of GPT. I wouldn't be at all surprised if there's some uber GPT (which requires a lot more processing per response) running in a lab somewhere that blows what's publicly available out of the water.

◧◩◪◨⬒
85. ux-app+1K[view] [source] [discussion] 2023-05-16 23:29:13
>>causal+0l
>They're still terrible at logical reasoning.

2 years ago a machine that understands natural language and is capable of any arbitrary, free-form logic or problem solving was pure science fiction. I'm baffled by this kind of dismissal tbh.

>but LLMs are never going to go into a recursive loop of self-improvement

never is a long time.

replies(1): >>leonid+OU
◧◩◪◨⬒⬓⬔⧯▣
86. eroppl+4L[view] [source] [discussion] 2023-05-16 23:34:51
>>q7xvh9+mJ
It has not been solved. Even GPT-4, as impressive as it is for some use cases, is dumb and I can tell the difference between it and a human in a dozen sentences just by demanding sufficient precision.

In some contexts, will some people be caught out? Absolutely. But that's been happening for a while now.

replies(1): >>ben_w+882
◧◩◪
87. stephc+qM[view] [source] [discussion] 2023-05-16 23:43:44
>>concor+HH
What would you need to consider this point less speculative? Direct proof? Is motive not relevant at all?
replies(1): >>concor+ny1
◧◩◪◨
88. stephc+pN[view] [source] [discussion] 2023-05-16 23:49:34
>>yarg+fG
The efficiency of training has very likely not reached its peak, or anywhere near it. We are still inefficient. But the bottleneck might be elsewhere: in the data we use to feed these models.

Maybe not peaked yet, but the case can be made that we’re not seeing infinite supply…

◧◩◪◨
89. cma+cO[view] [source] [discussion] 2023-05-16 23:55:15
>>yarg+fG
50% sparsity is almost certainly already being used, given that current Nvidia hardware accelerates it both at training time, usable dynamically through RigL ("Rigging the Lottery: Making All Tickets Winners", https://arxiv.org/pdf/1911.11134.pdf) -- which also addresses your point about initial conditions being locked in -- and at inference time.
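
For reference, the hardware-accelerated pattern is 2:4 structured sparsity: in every contiguous group of four weights, only the two largest magnitudes survive. A rough numpy illustration:

    import numpy as np

    # 2:4 structured sparsity: in each contiguous group of 4 weights,
    # keep the 2 largest magnitudes and zero the rest. This fixed
    # pattern is what Ampere-class tensor cores can accelerate.
    def prune_2_4(w):
        groups = w.reshape(-1, 4)
        drop = np.argsort(np.abs(groups), axis=1)[:, :2]  # 2 smallest
        mask = np.ones_like(groups, dtype=bool)
        np.put_along_axis(mask, drop, False, axis=1)
        return (groups * mask).reshape(w.shape)

    w = np.random.randn(8, 16)    # row size must be a multiple of 4
    print(prune_2_4(w))
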
◧◩◪◨⬒
90. Neverm+SO[view] [source] [discussion] 2023-05-17 00:00:02
>>alexel+GI
No way has training hit any kind of cost, computing or training data efficiency peak.

Big tech advances, like the models of the last year or so, don't happen without a long tail of significant improvements based on fine tuning, at a minimum.

The number of advances being announced by disparate groups, even individuals, also indicates improvements are going to continue at a fast pace.

◧◩◪◨
91. knewte+gP[view] [source] [discussion] 2023-05-17 00:02:31
>>vinay_+rl
Very much so
◧◩◪◨⬒
92. lostms+oP[view] [source] [discussion] 2023-05-17 00:03:05
>>Eamonn+Ph
Growing is quite apt here. No matter what you or I think, more and more people are getting the sense that AI is coming, and are talking about it.
◧◩◪◨⬒⬓⬔
93. nullse+GP[view] [source] [discussion] 2023-05-17 00:05:15
>>matwoo+zx
I'm so glad that we 100% know for sure that this too is the vast majority of the time.
◧◩◪◨
94. turtle+ZP[view] [source] [discussion] 2023-05-17 00:07:51
>>samsta+am
Which anime(s)? If ANIME is the title, that's going to be hard to search.

Do you mean like Serial Experiments Lain?

replies(1): >>throwa+mV
◧◩◪
95. nullse+3Q[view] [source] [discussion] 2023-05-17 00:08:27
>>api+dG
The other day someone offered up their 200x GPU crypto mining cluster to train uncensored models after the incident on HuggingFace where someone threatened to get the uploader of the uncensored models fired citing safety issues.
replies(1): >>chasd0+521
96. daniel+nT[view] [source] 2023-05-17 00:32:40
>>happyt+(OP)
Kinda beating a dead horse here, but I'll never get over the fact that a company called "OpenAI" is spearheading this nonsense.
◧◩◪◨
97. chii+3U[view] [source] [discussion] 2023-05-17 00:37:48
>>briant+Ml
> there are many ideas & perspectives that are simply not heard or considered.

Of course, but just because those ideas are unheard doesn't mean they are going to be any better.

An idea should stand on its own merits, and be evaluated objectively. It doesn't matter who was doing the proposing.

Also, the problem isn't just that bad ideas might get implemented, but that the legislature isn't willing or able to update laws that encode a bad idea. Perhaps it isn't known that an idea is bad until after the fact, and the methods of democracy we have today can't easily force updates to bad laws encoding bad ideas.

replies(1): >>briant+Y01
◧◩◪◨⬒⬓⬔
98. TeMPOr+aU[view] [source] [discussion] 2023-05-17 00:38:13
>>b112+gC
> I wish I knew what we really have achieved here. I try to talk to these things, via the turbo-3.5 API, and all I get is broken logic

Try to get your hands on GPT-4, even if it means paying the $20/mo subscription for ChatGPT Plus. There is a huge qualitative jump between the two models.

I got API access to GPT-4 some two weeks ago; my personal experience is, GPT-3.5 could handle single, well-defined tasks and queries well, but quickly got confused by anything substantial. Using it was half feelings of amazement, and half feelings of frustration. GPT-4? Can easily handle complex queries and complex tasks. Sure, it still makes mistakes, but much less frequently. GPT-4 for me is 80% semi-reliable results, 20% trying to talk it out of pursuing directions I don't care about.

Also, one notable difference: when GPT-4 gives me bad or irrelevant answers, most of the time this is because I didn't give it enough context. I.e. it's my failure at communicating. A random stranger, put in place of GPT-4, would also get confused, and likely start asking me questions (something LLMs generally don't do yet).

> I don't understand their whole filter business.

Part preferences, part making its "personality" less disturbing, and part PR/politics - last couple times someone gave the general public access to an AI chatbot, it quickly got trolled, and then much bad press followed. Doesn't matter how asinine the reaction was - bad press is bad press, stocks go down. Can't have it.

> I often wonder if the filter is more to hide its true capabilities.

I don't think it's to hide the model's capabilities, but it's definitely degrading them. Kind of expected - if you force-feed the model with inconsistent and frequently irrational overrides to highly specific topics, don't be surprised if the model's ability to (approximate) reason starts to break down. Maybe at some point LLMs will start to compartmentalize, but we're not there yet.

◧◩◪◨⬒⬓
99. leonid+OU[view] [source] [discussion] 2023-05-17 00:42:23
>>ux-app+1K
Two years ago we already had GPT-2, which was capable of some problem solving and logic-following. It was archaic, sure, and it produced a lot of gibberish, yes, but if you followed OpenAI's releases closely, you wouldn't think that something like GPT-3.5 was "pure science fiction"; it would just look like the inevitable evolution of GPT-2 over a couple of years, given the right conditions.
replies(2): >>ux-app+EV >>canjob+Jd1
◧◩◪◨
100. throwa+8V[view] [source] [discussion] 2023-05-17 00:45:08
>>samsta+am
This is a very interesting post. I don't understand this part: <<AI is the policy-makers it will expose>>. Can you help explain it a different way?

And hat tip to this comment:

    Have you watched ANIME ever? Yeah... its here now.
The more I watch the original Ghost in the Shell, the more I think it has incredible foresight.
replies(2): >>samsta+v03 >>samsta+UT3
◧◩◪◨⬒
101. throwa+mV[view] [source] [discussion] 2023-05-17 00:45:36
>>turtle+ZP
One idea / suggestion: The original Ghost in the Shell.
◧◩◪◨⬒⬓⬔
102. ux-app+EV[view] [source] [discussion] 2023-05-17 00:47:39
>>leonid+OU
That's pedantic. Switch 2 years to 5 years and the point still stands.
replies(1): >>edgyqu+lh1
◧◩
103. Aeolun+2W[view] [source] [discussion] 2023-05-17 00:50:11
>>mindcr+op
If you don’t have a moat, just dig one!
◧◩◪◨
104. EGreg+lW[view] [source] [discussion] 2023-05-17 00:52:32
>>mschus+cu
True, but it also has other use cases with much worse outcomes than blockchain could ever have.
105. burton+WW[view] [source] 2023-05-17 00:56:34
>>happyt+(OP)
I'd go further and say that if only corporations with a government license are allowed to create LLMs, then we should NOT have LLMs.

Let the market develop organically.

◧◩◪
106. zamnos+ZX[view] [source] [discussion] 2023-05-17 01:04:52
>>api+dG
Shit, vast.ai will pay you right now for access to your gaming PC's GPU
◧◩◪◨⬒⬓⬔⧯
107. rtkwe+hZ[view] [source] [discussion] 2023-05-17 01:15:11
>>ux-app+1J
Again, I'm not really saying GPT has peaked; I'm saying there's a categorical difference between GPT and AGI. A good-enough fake might perform well enough to function like one, but I have my doubts that it will. Without a way to deal with facts and, in some sense of the word, understand them, I don't think LLMs are suitable for use as anything beyond an aide for humans (for starters, because they can't determine internally what is fact versus hallucination, so you have to constantly check their work).
◧◩◪
108. parent+H01[view] [source] [discussion] 2023-05-17 01:30:02
>>brooks+vi
The subtle difference between the original statement and yours:

Ideas that drive governing decisions should be globally good - meaning there should be more than just @sama espousing them.

replies(1): >>r_hood+Zk1
◧◩
109. parent+W01[view] [source] [discussion] 2023-05-17 01:32:12
>>rjbwor+qp
So true. It’s one thing to treat companies at face value when it’s just another X, but when they are capable of changing society in such a way, their claims of openness should be treated as marketing.
◧◩◪◨⬒
110. briant+Y01[view] [source] [discussion] 2023-05-17 01:32:26
>>chii+3U
> of course, but just because those ideas are unheard, doesn't mean they are going to be any better.

It probably does mean it's better, at least for the person with the perspective. Too bad only a very few get a seat at the table to advocate for their own interests. It would be better if everyone had the agency to advocate for their interests.

> Also the problem isn't that bad ideas might get implemented, but that the legislature isn't willing or able to make updates to laws that encoded a bad idea

First, this is a hyped up crisis where some people are claiming it will end humanity. There have been many doomsday predictions & people scared by these predictions are effectively scammed by those fomenting existential fear. It's interesting that the representatives of large pools of capital are suddenly existentially afraid when there is open source competition.

Second, once something is in the domain of government it will only get more bloated & controlled by monied lobbyists. The legislatures controlled by lobbyists will never make it better, only worse. There have been so many temporary programs that continue to exist & expand. Many bloated omnibus bills too long to read passed under some sort of "emergency". The government's tendency is to grow & to serve the interests of the corporations that pay the politicians. Fear is an effective tool to convince people to accept things against their interests.

◧◩◪◨
111. chasd0+521[view] [source] [discussion] 2023-05-17 01:41:08
>>nullse+3Q
That’s bizarre, what is unsafe about an uncensored LLM? Or I guess the same question in a different way, how does censoring an LLM make it safe? I could see an uncensored LLM being bad PR for a company but unsafe? How?
replies(1): >>nullse+5O3
◧◩◪◨⬒⬓⬔⧯
112. forget+u31[view] [source] [discussion] 2023-05-17 01:56:50
>>nullse+oF
It's not like there isn't a market waiting impatiently for the product...
replies(1): >>nullse+Gt1
◧◩◪
113. dreamc+t51[view] [source] [discussion] 2023-05-17 02:13:54
>>brooks+vi
The recent history of tech CEOs advocating for regulations only they can obey has become so blatant that any tech CEO who advocates for regulation should be presumed guilty until proven innocent.
replies(1): >>brooks+6j2
114. fuzzfa+Z91[view] [source] 2023-05-17 03:02:47
>>happyt+(OP)
When Zappa testified before Congress he was extremely adamant that the unsavory outcomes of government control over language and expression would be more damaging than any unsavory language on its own.

https://societyofrock.com/in-1985-frank-zappa-is-asked-to-te...

Less fulfilling text version:

https://urbigenous.net/library/zappa.html

replies(1): >>selimt+Ja1
◧◩◪
115. bcrosb+ha1[view] [source] [discussion] 2023-05-17 03:06:10
>>brooks+vi
Not when it comes to politics.

You'll be stuck in the muck while they're laughing their ass off all the way to the bank.

replies(1): >>jasonm+9b1
◧◩
116. selimt+Ja1[view] [source] [discussion] 2023-05-17 03:11:07
>>fuzzfa+Z91
I am sure he would be thrilled about Google censoring his track titles.
replies(1): >>fuzzfa+vb1
◧◩◪◨⬒⬓⬔
117. goatlo+Va1[view] [source] [discussion] 2023-05-17 03:12:22
>>forget+dy
So the LLM demagogue is going to get people to create gray goo or make a lot of paper clips?
◧◩◪◨
118. jasonm+9b1[view] [source] [discussion] 2023-05-17 03:15:22
>>bcrosb+ha1
It doesn't even matter if "his heart is pure" ... Companies are not run that way.

We have lawyers.

◧◩◪
119. fuzzfa+vb1[view] [source] [discussion] 2023-05-17 03:19:57
>>selimt+Ja1
We need someone like him today to take old Fidel DeSantis down a notch or two.
replies(1): >>selimt+1u4
◧◩◪◨⬒⬓
120. 2muchc+Xb1[view] [source] [discussion] 2023-05-17 03:25:43
>>TeMPOr+6t
How do we define a general intelligence?
◧◩
121. selimt+rc1[view] [source] [discussion] 2023-05-17 03:30:22
>>ben_w+f9
https://news.ycombinator.com/item?id=35872321
replies(1): >>ben_w+OP1
◧◩◪◨⬒⬓⬔
122. canjob+vc1[view] [source] [discussion] 2023-05-17 03:31:05
>>rtkwe+eF
The fact that it’s a system you’d even consider to be “lying” or “truthful” is a huge advance over anything available 5 years ago.
replies(1): >>rtkwe+Hc2
◧◩◪◨
123. Improb+zc1[view] [source] [discussion] 2023-05-17 03:31:50
>>behnam+Jl
DeepMind is owned by Google, but it's British, and they've been behind a lot of significant conceptual results in the last couple of years. Most significant progress is just "engineering", so it's all done by US corporations.

Other than that, there are also things like RoFormer, but I'm going to assume you won't count that as significant. US universities then certainly don't produce anything significant either, though.

replies(1): >>behnam+0s1
◧◩◪◨⬒⬓⬔⧯▣
124. canjob+bd1[view] [source] [discussion] 2023-05-17 03:37:45
>>q7xvh9+mJ
As impressive as GPT4 is, it still makes mistakes humans would not, and so I wouldn’t say it passes the Turing Test.

I generated this just now:

Me: Jack and Jill are sitting next to each other in a room. There is no one else in the room. The person sitting next to Jack is sad. The person sitting next to Jill is angry. Who is sad, Jack or Jill?

GPT4: Based on the information provided, it is not possible to determine who is sad and who is angry, as there is a contradiction in the given statements. If Jack and Jill are the only ones in the room and sitting next to each other, then the person sitting next to Jack would be Jill and the person sitting next to Jill would be Jack. The statements about their emotions conflict with each other, so it is not possible to accurately determine who is sad or angry.
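For what it's worth, the deduction the puzzle asks for is mechanical once "next to" is resolved; a few lines of Python make the intended answer explicit (the mapping below just restates the prompt, nothing more):

    # With only two people seated together, "next to X" is the other person.
    seats = {"Jack": "Jill", "Jill": "Jack"}

    mood = {}
    mood[seats["Jack"]] = "sad"    # "the person sitting next to Jack is sad"
    mood[seats["Jill"]] = "angry"  # "the person sitting next to Jill is angry"

    print(mood)  # {'Jill': 'sad', 'Jack': 'angry'} -- no contradiction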

replies(1): >>jdkee+vg1
◧◩◪◨⬒⬓⬔
125. canjob+Jd1[view] [source] [discussion] 2023-05-17 03:43:35
>>leonid+OU
In hindsight it’s an obvious evolution, but in practice vanishingly few people saw it coming.
replies(1): >>leonid+RJ2
◧◩◪◨
126. darker+Kd1[view] [source] [discussion] 2023-05-17 03:43:48
>>jivetu+hF
No... it's a line from a terrific movie called The Dark Knight, and it's about the ease with which public perception is manipulated.
replies(2): >>hannia+Me1 >>edgyqu+Ih1
◧◩◪◨⬒⬓⬔⧯▣
127. 2muchc+Zd1[view] [source] [discussion] 2023-05-17 03:45:55
>>q7xvh9+mJ
> And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?"

Well, yeah. Imagine you tell a small child who knows about calculators, "Hey, can you work out 18763 + 38284 for me?" They might struggle and then maybe fetch a calculator.

The LLMs attempt to predict the answer. WTF? It's a computer, and it can see that that is a plus sign. Just recognize it's addition, use the rest of your computer brain, and do the sum. Hell, it is connected to the internet and we just taught you everything from before 2021. Just call out to Wolfram and give me the answer.

But that's not how computers work. And we keep saying "AI", but that "I" is doing a lot of heavy lifting.
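A minimal sketch of the "just call out to a calculator" idea, purely illustrative; the routing function is made up and doesn't reflect how any actual product is wired:

    # Route anything that looks like arithmetic to real arithmetic instead
    # of letting the model predict digits token by token.
    import re

    def answer(prompt):
        # Tool route: exact math for "<number> <op> <number>" questions.
        m = re.search(r"(-?\d+)\s*([+\-*/])\s*(-?\d+)", prompt)
        if m:
            a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
            results = {"+": a + b, "-": a - b, "*": a * b,
                       "/": a / b if b else None}
            return str(results[op])
        # Fallback: this is where the LLM's next-token prediction would go.
        return "(hand the prompt to the LLM and hope)"

    print(answer("Hey, can you work out 18763 + 38284 for me?"))  # -> 57047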

◧◩◪◨⬒
128. hannia+Me1[view] [source] [discussion] 2023-05-17 03:58:04
>>darker+Kd1
This has nothing to do with perception being manipulated
replies(1): >>darker+gS1
◧◩◪◨⬒⬓
129. dhkk+Qf1[view] [source] [discussion] 2023-05-17 04:10:14
>>TeMPOr+6t
Sources please. Every expert interview I've seen with AI researchers who have been in the game since the beginning has said the same: GPTs are not a massive breakthrough in the field of AI research.
replies(1): >>TeMPOr+7Z1
◧◩◪◨⬒⬓⬔⧯▣▦
130. jdkee+vg1[view] [source] [discussion] 2023-05-17 04:17:35
>>canjob+bd1
Jack and Jill are sitting next to each other in a room. There is no one else in the room. The person sitting next to Jack is sad. The person sitting next to Jill is angry. Who is sad, Jack or Jill?

ChatGPT: Since Jack and Jill are the only two people in the room and they are sitting next to each other, the person sitting next to Jack is Jill and the person sitting next to Jill is Jack. Given the conditions you provided:

Jill is the one who is sad because she is sitting next to Jack. Jack is the one who is angry because he is sitting next to Jill.

replies(1): >>ChatGT+kA1
◧◩◪◨⬒⬓⬔⧯
131. edgyqu+lh1[view] [source] [discussion] 2023-05-17 04:27:14
>>ux-app+EV
No it isn’t. Even before transformers, people were doing cool things with LSTMs, and RNNs before that. People following this space haven’t really been surprised by any of these advancements. It’s a straightforward path, imo.
◧◩◪◨⬒⬓⬔⧯
132. edgyqu+wh1[view] [source] [discussion] 2023-05-17 04:28:45
>>nullse+oF
I just don’t see how it’s going to be significantly worse than existing troll farms etc. This prediction appears significantly overblown to me.
replies(2): >>forget+on1 >>nullse+AS1
◧◩◪
133. csomar+Bh1[view] [source] [discussion] 2023-05-17 04:29:43
>>concor+HH
He already has precedent with OpenAI, pivoting from open to fully closed overnight once the tech kind of worked. We know the guy is a piece of work; let’s not give him any benefit of the doubt.
replies(1): >>concor+yy1
◧◩◪◨⬒
134. edgyqu+Ih1[view] [source] [discussion] 2023-05-17 04:31:00
>>darker+Kd1
No, that line is specifically about Julius Caesar being appointed by the Senate as dictator and then never giving up his power.

Though I agree it seems to fit here. Being granted an oligopoly in the name of protecting the people and all that.

replies(1): >>darker+mS1
135. gentle+6j1[view] [source] 2023-05-17 04:48:31
>>happyt+(OP)
- Most, but not all, of the scariest uses for ai are those potentially by governments against their own people.

- The next most scary uses are by governments against the people of other countries.

- After that, corporate use of ai against their employees and customers is also terrifying;

- next, the potential for individuals or small organizations seeking to use it for something terrorism-related. Eg, 3d printers or a lab + an ai researcher who helps you make dangerous things I suppose

- near the bottom of the noteworthy list is probably crime. eg, hacking, blackmail, gaslighting, etc

These problems will probably all come up in a big way over the next decade; but limiting ai research to the government and their lackeys? That's extremely terrifying. To prevent the least scary problems, we're jumping into the scariest pool with both feet

Look at how China has been using AI for the last 5-10 years: millions of facial recognition cameras, a scary police force, and a social credit system. In 10-20 years, how much more sophisticated will this be? If the people wanted to rebel, how on Earth will they?

Hell, with generative ai, a sophisticated enough future state could actually make the Dead Internet Theory a reality

That's the future of ai: a personal, automated boot stomping on everybody individually and collectively forever, with no ability to resist

◧◩◪◨
136. r_hood+Zk1[view] [source] [discussion] 2023-05-17 05:08:01
>>parent+H01
You're defending an argument that is blatantly self contradictory within the space of two sentences.

A) "anything he suggests should be categorically rejected because he’s just not in a position to be trusted."

B) "If what he suggests are good ideas then hopefully we can arrive at them in some other way with a clean chain of custody."

These sentences directly follow each other and directly contradict each other. Logically you can't categorically (the categorical is important here. Categorical means something like "treat as a universal law") reject a conclusion because it is espoused by someone you dislike, while at the same time saying you will accept that conclusion if arrived at by some other route.

"I will reject P if X proposes P, but will accept P if Y proposes P." is just poor reasoning.

replies(1): >>brooks+vj2
◧◩◪◨⬒
137. vinay_+Ml1[view] [source] [discussion] 2023-05-17 05:17:03
>>code_w+rp
Well, chips needed for AI training/inference are a lot simpler than general-purpose CPUs. Fabs have already demonstrated a 7nm process with older DUV tech for such chips. They can brute-force their way through it – at least for mission-critical use-cases.

https://www.edn.com/the-truth-about-smics-7-nm-chip-fabricat...

◧◩◪◨⬒
138. simonh+Ol1[view] [source] [discussion] 2023-05-17 05:17:32
>>happyt+8j
The other weaponisation plans. The one about undermining western democracy and society. Yes that’s it, the one where we target our own population. No not intelligence gathering, yes that’s it, democratic discourse itself. Narrative shaping on Twitter, the Facebook groups bots, that stuff. The material happytiger was talking about as fact because obviously they wouldn’t make that up. Thanks.
◧◩◪
139. crater+9m1[view] [source] [discussion] 2023-05-17 05:21:10
>>stuckk+lJ
I suspect that he knows that this is a 'local maxima' as someone put it, and the field will stagnate once the size and attention of models approach the limits of available computing resources. He wants others kept out of the field not only because they could beat him, but because he wants to hoard available processing power.
replies(1): >>stuckk+Oo1
◧◩◪◨⬒⬓⬔⧯▣
140. forget+on1[view] [source] [discussion] 2023-05-17 05:35:27
>>edgyqu+wh1
Does it really? You thinking LLM-powered propaganda distribution services can't out-scale existing troll farms? Or do a better job of evading spam filters?
replies(1): >>edgyqu+nH2
◧◩◪◨
141. stuckk+Oo1[view] [source] [discussion] 2023-05-17 05:52:54
>>crater+9m1
That is a well-thought-out possibility. But with MS developing their own in-house SoC, that is not going to be an issue, as they can always prioritize their investments. But anything is possible. We need Apple to release some competitive and dedicated low-power GPUs.
replies(1): >>Michae+D95
◧◩◪◨⬒
142. behnam+0s1[view] [source] [discussion] 2023-05-17 06:27:04
>>Improb+zc1
> “just engineering”

This tells me the extent of your knowledge about the challenges with these models.

◧◩◪◨⬒⬓⬔⧯▣
143. nullse+Gt1[view] [source] [discussion] 2023-05-17 06:46:53
>>forget+u31
It's definitely not something I would attempt to productize and profit off of. I'm virtually certain someone will, and I'm sure that capability is being worked on as we speak, since we already know this type of thing occurs at scale.

My motivation would simply be to shine a light on it. Make it real for people, so we have things to talk about other than just the hypotheticals. It's the kind of tooling that, if you're seriously motivated to employ it, you'd probably prefer remain secret or undetected at least until after it had done its work for you. I worry that the 2024 US election will be the real litmus test for these things. All things considered, it'd be a shame if we go through another Cambridge Analytica moment that in hindsight we really ought to have seen coming.

Some people have their doubts, and I understand that. These issues are so complex that no one individual can hope to have an accurate mental model of the world that is going to serve them reliably again and again. We're all going to continue to be surprised as events unfold, and the degree to which we are surprised indicates the degree to which our mental models were lacking and got updated. That to me is why I'm erring on the side of pessimism and caution.

◧◩◪◨
144. concor+ny1[view] [source] [discussion] 2023-05-17 07:35:20
>>stephc+qM
In general I'm extremely skeptical of people ascribing motives to other people whom they don't know personally or haven't spent at least 100 hours studying. The reasons for this skepticism are a bit hard to elucidate in a quick post, but they include things like information sampling bias issues, having seen people make motive inferences I know to be incorrect, and the Fundamental Attribution Error.
◧◩◪◨
145. concor+yy1[view] [source] [discussion] 2023-05-17 07:37:00
>>csomar+Bh1
> We know the guy is a piece of work

No, we do not. And the same is true of any person who we know of mostly from news stories. News you read is NOT an unbiased sampling of information about a person due to all the selection effects.

◧◩◪◨⬒⬓⬔⧯▣▦▧
146. ChatGT+kA1[view] [source] [discussion] 2023-05-17 07:54:36
>>jdkee+vg1
Does the fact you both got different answers make it better or worse at reasoning?
◧◩◪
147. daniel+PD1[view] [source] [discussion] 2023-05-17 08:31:34
>>pg_123+Bd
If “peaked” means impact and “bang for buck” means impact per dollar, then it has only peaked if the example is allowing the population at large to use these free tools, like chatbots, for fun and minimal profit. But if we consider how they can be used to manipulate people at scale with misinformation, then that's an example where I think we've not yet seen the peak. So we should at least thoroughly discuss and think it through, to see if we can in any way mitigate certain negative societal outcomes.
◧◩◪
148. dTal+nK1[view] [source] [discussion] 2023-05-17 09:33:34
>>brooks+vi
I think what they are trying to say is that Sam Altman is very smart, but misaligned. If we assume that he is 1) sufficiently smart and 2) motivated to see OpenAI succeed, then his suggestions must be assumed to lead to a future where OpenAI is successful. If that future looks like it contradicts a future we want (for instance, user-controlled GPT-4 level AIs running locally on every machine), his suggestions should therefore be treated as reliably radioactive.
◧◩◪◨⬒⬓⬔
149. ben_w+mM1[view] [source] [discussion] 2023-05-17 09:48:14
>>fsckbo+kI
I can see the reviews in my head already:

1 star: No WiFi, no windows, no hot water

1 star: dusty

1 star: aliens didn't abduct me :(

5 stars: lots of storage room for my luggage

4 stars: service good, but had weird dream about a furry weighing my soul against a feather

1 star: aliens did abduct me :(

2 stars: nice views, but smells of camel

◧◩◪
150. ben_w+OP1[view] [source] [discussion] 2023-05-17 10:23:07
>>selimt+rc1
I'd forgotten that headline (and still haven't read the content), but yes, that's one example of how it can go wrong.
◧◩◪◨⬒⬓
151. darker+gS1[view] [source] [discussion] 2023-05-17 10:46:30
>>hannia+Me1
The story does not, but the quote does
◧◩◪◨⬒⬓
152. darker+mS1[view] [source] [discussion] 2023-05-17 10:47:18
>>edgyqu+Ih1
Source? I'm going to need a receipt for my downvote!

Here's mine: https://movies.stackexchange.com/questions/10572/is-this-quo...

replies(1): >>edgyqu+OG2
◧◩◪◨⬒⬓⬔⧯▣
153. nullse+AS1[view] [source] [discussion] 2023-05-17 10:49:31
>>edgyqu+wh1
Only time will tell.
◧◩◪◨⬒
154. yarg+9V1[view] [source] [discussion] 2023-05-17 11:13:14
>>alexel+GI
Yeah, it's a little bit RTFC to be honest.
◧◩◪◨⬒⬓⬔
155. TeMPOr+7Z1[view] [source] [discussion] 2023-05-17 11:47:03
>>dhkk+Qf1
> Sources please.

My own eyes? The hundreds of thousands of scientific papers, blog posts, news reports, and discussion threads that have covered this ever since ChatGPT appeared, and especially in the last two months as GPT-4 rolled out?

At this point I'd reconsider if the experts you listened to are in fact experts.

Seriously. It's like saying the Manhattan Project wasn't a massive breakthrough in experimental physics or military strategy.

replies(1): >>dhkk+iE3
◧◩◪◨⬒⬓⬔⧯▣▦
156. ben_w+882[view] [source] [discussion] 2023-05-17 12:48:30
>>eroppl+4L
"Dumb" isn't why the Turing Test isn't solved. (Have you seen unmoderated chat with normal people? Heck, even smart people outside the domain of expertise; my mum was smart enough to get into university in the UK in the early 60s, back when that wasn't the default, but still believed in the healing power of crystals, homeopathic sodium chloride and silicon dioxide, and Bach flower remedies…)

ChatGPT (I've not got v4) deliberately fails the test by spewing out "as a large language model…", but also fails incidentally by having an attention span similar to my mother's shortly after her dementia diagnosis.

The problem with 3.5 is that it's simultaneously not mastered anything, and yet also beats everyone in whatever they've not mastered — an extremely drunk 50,000-year-old Sherlock Holmes who speaks every language and has read every book just isn't going to pass itself off as Max Mustermann in a blind hour-long trial.

replies(1): >>rtkwe+je2
◧◩◪◨⬒⬓⬔⧯
157. rtkwe+Hc2[view] [source] [discussion] 2023-05-17 13:15:32
>>canjob+vc1
That's more a convenience of language than an actual "It's Alive!". Calling them hallucinations or inaccuracies is unwieldy, and the former has the same kind of implied attribution of a mind. We know for sure that's not there; my internal model for them is just a stupendously complex Markov chain, because to my understanding that's all LLMs are currently doing.
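A toy version, just to make the analogy concrete (nothing like a real LLM's scale or training, obviously; this is only the mechanism the comparison points at):

    # Word-level Markov chain: pick the next word from whatever followed the
    # same recent context in the training text. No model of truth anywhere.
    import random
    from collections import defaultdict

    def train(text, order=2):
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=20):
        out = list(random.choice(list(chain)))
        for _ in range(length):
            options = chain.get(tuple(out[-order:]))
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    corpus = "the model predicts the next word and the next word follows the last word"
    print(generate(train(corpus)))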
◧◩◪◨⬒⬓⬔⧯▣▦▧
158. rtkwe+je2[view] [source] [discussion] 2023-05-17 13:23:25
>>ben_w+882
The lack of an ability to take in new information is maybe the crux of my issue with the LLM-to-AGI evolution. To my understanding, the only way to have it even kind of learn something is to include it in a preamble it reprocesses every time (see the sketch below), which is maybe workable for small facts but breaks down for updating it from the 202X corpus it was trained on.
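Roughly this pattern, where the "facts" are invented for illustration and every single request re-pays the token cost of the whole preamble:

    # New facts are never learned by the model; they get re-sent as context
    # on every request, which works for a handful of facts but cannot scale
    # to a fresh multi-year corpus.
    NEW_FACTS = [
        "Fact: the 2023 budget was approved on May 12.",   # made-up examples
        "Fact: the office moved to Elm Street in April.",
    ]

    def build_prompt(question):
        preamble = "\n".join(NEW_FACTS)
        return [
            {"role": "system", "content": "Use these facts:\n" + preamble},
            {"role": "user", "content": question},
        ]

    print(build_prompt("When was the budget approved?"))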
replies(1): >>ben_w+MP2
◧◩◪◨
159. brooks+6j2[view] [source] [discussion] 2023-05-17 13:51:44
>>dreamc+t51
Sure, go execute him for all I care.

My point was that an idea should not need attribution for you to know whether it's good or bad, for your own purposes. I can't imagine looking at a proposal and deciding whether to support or oppose it based on the author rather than the content.

If Altman is that smart and manipulative, all he has to do is advocate the opposite of what he wants and you'll be insisting that we must give him exactly what he wants, on principle. That's funny with kids but no way to run public policy.

◧◩◪◨⬒
160. brooks+vj2[view] [source] [discussion] 2023-05-17 13:53:29
>>r_hood+Zk1
More clearly said than I managed, yep.

But I suppose it comes down to priorities. If good policy is less important than contradicting P, I suppose that approach makes sense.

◧◩◪◨⬒⬓⬔
161. edgyqu+OG2[view] [source] [discussion] 2023-05-17 15:25:33
>>darker+mS1
I’ve seen the movie, and it’s in response to Rachel saying “the last dictator they appointed was named Caesar, and he never gave up his power.”

I also didn’t downvote you, and it’s against guidelines to bring that stuff up

◧◩◪◨⬒⬓⬔⧯▣▦
162. edgyqu+nH2[view] [source] [discussion] 2023-05-17 15:27:20
>>forget+on1
No, I’m thinking that scaling trolls up has diminishing returns and we’re already at peak troll.
replies(1): >>nullse+BP3
◧◩◪◨⬒⬓⬔⧯
163. leonid+RJ2[view] [source] [discussion] 2023-05-17 15:37:58
>>canjob+Jd1
Few people saw it coming in just two years, sure. But most people following this space were already expecting a big evolution like the one we saw in 5-ish years.

For example, take this thread: https://news.ycombinator.com/item?id=21717022

It's a text RPG game built on top of GPT-2 that could follow arbitrary instructions. It was a full project with custom training for something that you can get with a single prompt on ChatGPT nowadays, but it clearly showcased what LLMs were capable of and things we take for granted now. It was clear, back then, that at some point ChatGPT would happen.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
164. ben_w+MP2[view] [source] [discussion] 2023-05-17 15:59:49
>>rtkwe+je2
Mmm. Well, possibly.

On the one hand, what I was saying here was more about the Turing Test than about AGI. Sometimes it gets called AGI, sometimes it's "autocomplete on steroids", but even if it is fancy autocomplete, I think 3.5 has the skill to pass a short Turing Test, but not the personality, and a full Turing Test needs a longer "short-term memory" equivalent than 3.5 has.

On the other hand, as I (sadly) don't get paid to create LLMs, I've only got the kind of superficial awareness of how they work that comes from podcasts and the occasional blog post, which means ChatGPT might very well understand ChatGPT better than I do.

Can GPT-[3.5, 4] be prompted to make itself?

◧◩◪◨⬒
165. samsta+v03[view] [source] [discussion] 2023-05-17 16:43:14
>>throwa+8V
ANIME predicted the exact corporate future...

Look at all the anime cyber cities...

It's not as high tech as you may imagine, but the surveillance is there.

EDIT: your "company" is watching every

◧◩◪◨⬒⬓⬔⧯
166. dhkk+iE3[view] [source] [discussion] 2023-05-17 19:51:01
>>TeMPOr+7Z1
It was Yann LeCun. His professional experience and knowledge of the AI development timeline outweigh your opinions, imo. Thanks for confirming you have no sources.
◧◩◪◨⬒
167. nullse+5O3[view] [source] [discussion] 2023-05-17 20:37:23
>>chasd0+521
That individual in particular was pushing some left-wing talking points.

Though the other day Yuval Noah Harari gave a great talk on the potential threat to democracy - https://youtu.be/LWiM-LuRe6w

◧◩◪◨⬒⬓⬔⧯▣▦▧
168. nullse+BP3[view] [source] [discussion] 2023-05-17 20:44:53
>>edgyqu+nH2
Any evidence or sources for that? I just don't know how that would be knowable to any of us.

Yuval Noah Harari gave a great talk the other day on the potential threat to democracy from the current state of the technology - https://youtu.be/LWiM-LuRe6w

◧◩◪◨⬒
169. samsta+UT3[view] [source] [discussion] 2023-05-17 21:11:25
>>throwa+8V
> I don't understand this part: <<AI is the policy-makers it will expose>> Can you help explain it a different way?

===

Policy makers will not understand what they are doing.

◧◩◪◨
170. selimt+1u4[view] [source] [discussion] 2023-05-18 02:33:52
>>fuzzfa+vb1
He and Gene Siskel were a very good good-cop/bad-cop pair.
◧◩◪◨⬒
171. Michae+D95[view] [source] [discussion] 2023-05-18 10:28:41
>>stuckk+Oo1
I think it's more about the lack of data. GPT-4's training likely already used all publicly available text on Earth and some private databases too.