zlacker

Sam Altman goes before US Congress to propose licenses for building AI

submitted by vforgi+(OP) on 2023-05-16 11:06:54 | 914 points 1173 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only. [show all posts]
15. rvz+z9[view] [source] 2023-05-16 12:11:45
>>vforgi+(OP)
We all knew that AI regulations were coming, and O̶p̶e̶n̶AI.com's moat was being erased very quickly by open source AI models. So what does O̶p̶e̶n̶AI.com do?

Runs to Congress to suggest new regulations against open source AI models, aiming to wipe them out by branding them non-compliant, unlicensed, and unsafe for general use, using AI safety as a scapegoat again.

After that, quietly push a pseudo-open-source AI model that is compliant but limited compared to the closed models, in an attempt to eliminate the majority of open source AI companies who can't get such licenses.

So, a clever tactic: create new regulations that benefit them (O̶p̶e̶n̶AI.com) over everyone else, meaning less transparency, more hurdles for actual open AI research, and additional bureaucracy. Don't forget that Altman is also selling his Worldcoin dystopian crypto snake oil project as the 'antidote' to verify against everything getting faked by AI. [0] He is hedged either way.

So congratulations to everyone here for supporting these gangsters at O̶p̶e̶n̶AI.com for pushing for regulatory capture.

[0] https://worldcoin.org/blog/engineering/humanness-in-the-age-...

121. reduce+dC[view] [source] 2023-05-16 14:38:15
>>vforgi+(OP)
It's very sad that people lack the imagination for the possible horrors that lie beyond. You don't even need the imagination; Hinton, Bengio, Tegmark, Yudkowsky, Musk, etc. are spelling it out for you.

At this moment, 80% of comments are derisive, and you actually have zero idea how much is computer-generated bot content meant to sway opinion, pushed by a post-GPT AI industry who see themselves as the next iPhone-era billionaires. We are fast approaching a reality where our information space breaks down: where almost all text you get from HN, Twitter, news, Substack, and almost all video you get from Youtube, Instagram, TikTok, is just computer-generated output meant to sway opinion and/or make $.

I can't know Altman's true motives. But this is also what it looks like when a frontrunner is terrified of what happens when GPT-6 is released, knowing that if they stop, the rest of the people who see billionaire $ coming their way are close at their heels trying to leapfrog them. Consequences? What consequences? We all know social media has been a net good, right? Many of you sound exactly like the few remaining social media cheerleaders (of which there were plenty 5 years ago) who still think Facebook, Instagram, and Twitter aren't causing depression and manipulation. If you appreciated what The Social Dilemma illuminated, then watch the same people on AI: https://www.youtube.com/watch?v=xoVJKj8lcNQ

◧◩
147. letter+6G[view] [source] [discussion] 2023-05-16 14:55:30
>>elil17+xC
I made something just for writing your congressperson / senator, ironically using generative AI: https://vocalvoters.com/
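
(Not their actual code, which isn't public; just a minimal sketch of how a tool like that might draft a letter with the mid-2023 openai Python package. The prompt, function, and model choice are made up:)

    import os
    import openai  # pip install openai (0.27.x-era API, mid-2023)

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def draft_letter(rep: str, state: str, position: str) -> str:
        """Draft a short constituent letter for the user to edit and send."""
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "You draft concise, polite constituent letters."},
                {"role": "user",
                 "content": f"Write a letter to {rep} of {state}. "
                            f"My position: {position}"},
            ],
        )
        return resp["choices"][0]["message"]["content"]
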
◧◩◪◨⬒
161. chaxor+hI[view] [source] [discussion] 2023-05-16 15:06:41
>>agentu+qH
You should read some of the papers referred to in the above comments before making that assertion. It may take a while to realize the overall structure of the argument, how the category theory is used, and how this is directly applicable to LLMs, but if you are in ML it should be obvious. https://arxiv.org/abs/2203.15544
◧◩◪◨
212. freedo+5O[view] [source] [discussion] 2023-05-16 15:32:30
>>elil17+5N
TIL! https://www.merriam-webster.com/dictionary/ilk

still, there are probably a lot of people like me who have heard it used (incorrectly it seems) as an insult so many times that it's an automatic response :-(

213. fraXis+hO[view] [source] 2023-05-16 15:33:12
>>vforgi+(OP)
https://archive.is/uh0yv
◧◩◪◨⬒
238. elil17+jR[view] [source] [discussion] 2023-05-16 15:44:16
>>mitch3+LO
That's simply untrue. Here are several recently published articles which use ilk in a neutral or positive context:

https://www.telecomtv.com/content/digital-platforms-services...

https://writingillini.com/2023/05/16/illinois-basketball-ill...

https://www.jpost.com/j-spot/article-742911

256. fraXis+RS[view] [source] 2023-05-16 15:50:05
>>vforgi+(OP)
Live now as of 8:49 AM (PDT): https://www.youtube.com/watch?v=P_ACcQxJIsg
275. JumpCr+lW[view] [source] 2023-05-16 16:02:50
>>vforgi+(OP)
The members of this subcommittee are [1]:

Chair Richard Blumenthal (CT), Amy Klobuchar (MN), Chris Coons (DE), Mazie Hirono (HI), Alex Padilla (CA), Jon Ossoff (GA)

Majority Office: 202-224-2823

Ranking Member Josh Hawley (MO), John Kennedy (LA), Marsha Blackburn (TN), Mike Lee (UT), John Cornyn (TX)

Minority Office: 202-224-4224

If you’re in those states, please call their D.C. office and read them the comment you’re leaving here.

[1] https://www.judiciary.senate.gov/about/subcommittees

276. rvz+oW[view] [source] 2023-05-16 16:03:01
>>vforgi+(OP)
O̶p̶e̶n̶AI.com is not your friend and is essentially against open source, pursuing regulatory capture with AI safety as the scapegoat.

Why do you think they are attempting to release a so-called 'open source' [0] and 'compliant' AI model to wipe out competing open source AI models and label them as unlicensed and dangerous? They know that transparent, open source AI models are a threat. Hence they are doing this.

They do not have a moat against open source, unless they use regulations that suit them against their competitors using open source models.

O̶p̶e̶n̶AI.com is a scam. On top of the Worldcoin crypto scam that Sam Altman is also selling as an antidote against the unstoppable generative AI hype, to verify human eyeballs on the blockchain with an orb. I am not joking. [1] [2]

[0] https://www.reuters.com/technology/openai-readies-new-open-s...

[1] https://worldcoin.org/blog/engineering/humanness-in-the-age-...

[2] https://worldcoin.org/blog/worldcoin/designing-orb-universal...

◧◩◪
291. JumpCr+ZY[view] [source] [discussion] 2023-05-16 16:12:48
>>Barrin+2X
Altman is simultaneously pumping a crypto project [1].

[1] https://www.yahoo.com/news/worldcoin-chatgpt-sam-altman-ethe...

◧◩◪◨⬒⬓
303. dustyl+TZ[view] [source] [discussion] 2023-05-16 16:15:55
>>elil17+jR
It is technically true that ilk is not always used derogatorily. But it is almost always derogatory in modern connotation.

https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....

Also, note that all of the negative examples are politics-related. If a politician reads the word 'ilk', it is going to be interpreted negatively. It may be that ilk does "always carry" a negative connotation in politics.

You could change 'ilk' to 'friends', and keep the same meaning with very little negative connotation. There is still a slight negative connotation here, in the political arena, but it's a very vague shade, and I like it here.

"Altman and his ilk try to claim that..." is a negative phrase because "ilk" is negative, but also because "try to claim" is invalidating and dismissive. So this has elements or notes of an emotional attack, rather than a purely rational argument. If someone is already leaning towards Altman's side, then this will feel like an attack and like you are the enemy.

"Altman claims that..." removes all connotation and sticks to just the facts.

305. nixcra+s01[view] [source] 2023-05-16 16:18:22
>>vforgi+(OP)
I understand that some people may not agree with what I am about to say, but I feel it is important to share. Recently, some talented writers who are my good friends at major publishing houses have lost their jobs to AI technology. There have been news articles about this in the past few months too. While software dev jobs in the IT industry may be safe for now, many other professions are at risk of being replaced by artificial intelligence. According to a report[0] by investment bank Goldman Sachs, AI could potentially replace 300 million full-time jobs. Unfortunately, my friends do not find Sam Altman's reassurances (or whatever he is asking for) comforting. I am unsure how to help them in this situation.

I doubt that governments in the US, EU, or Asia will take action unless AI begins to threaten their own jobs. It seems that governments prioritize supporting large corporations with deep pockets over helping the average person. Many governments see AI as a way to maintain their geopolitical and military superiority. I have little faith in these governments to prioritize the needs of their citizens over their own interests.

It is concerning to think that social issues like drug addiction, homelessness, and medical bankruptcy may worsen (or increase from the current rate) if AI continues to take over jobs without any intervention to protect everyday folks who have lost or are about to lose their jobs.

I've no doubt AI is here to stay. All I am asking for is some middle ground and safety. Is that too much to ask?

[0] https://www.bbc.com/news/technology-65102150

◧◩◪◨⬒⬓⬔⧯▣
336. cma+e41[view] [source] [discussion] 2023-05-16 16:33:13
>>tome+M11
In his newest podcast interview (https://open.spotify.com/episode/7EFMR9MJt6D7IeHBUugtoE) LeCun is now saying they will be much more powerful than humans, but that stuff like RLHF will keep them from working against us because, by analogy, dogs can be domesticated. It didn't sound very rigorous.

He also says Facebook solved all the problems with their recommendation algorithms' unintended effects on society after 2016.

◧◩◪◨⬒⬓⬔⧯▣▦
351. JumpCr+J71[view] [source] [discussion] 2023-05-16 16:47:54
>>reaper+l61
These are broad questions whose answers are worth serious legal time. There is a bit in the open [1][2].

[1] https://www.bereskinparr.com/doc/chatgpt-ip-strategy

[2] https://hbr.org/2023/04/generative-ai-has-an-intellectual-pr...

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
371. JumpCr+Ga1[view] [source] [discussion] 2023-05-16 17:00:10
>>reaper+m91
> these links don't have anything about model weights

Didn't say they do. I said "these are broad questions whose answers are worth serious legal time." I was suggesting one angle I would lobby for were that my job.

It's a live battlefield. Nobody is going to pay tens of thousands of dollars and then post it online, or put out for free what they can charge for.

> OpenAI’s Terms of Use, for example, assign all of its rights, title, and interest in the output to the user

Subject to restrictions, e.g. not using it to "develop models that compete with OpenAI" or "discover the source code or underlying components of models, algorithms, and systems of the Services" [1]. Within the context of open-source competition, those are huge openings.

> shows where OpenAI is trying to weaken copyright, not where they are trying to strengthen it

It shows what intellectual property claims they and their competitors do and may assert. They're currently "limited" [2].

> notice you don't have a [0]-index

I'm using natural numbers in a natural language conversation with, presumably, a natural person. It's a style choice, nothing more.

[1] https://openai.com/policies/terms-of-use

[2] https://news.ycombinator.com/item?id=35964215

406. neel89+4g1[view] [source] 2023-05-16 17:22:50
>>vforgi+(OP)
PG predicted this: https://twitter.com/paulg/status/1624569079439974400?lang=en Only it is not the incumbents but his own protégé Sam asking for regulation, while big companies like Meta and Amazon are giving LLMs away for free.
418. hospit+Uh1[view] [source] 2023-05-16 17:31:45
>>vforgi+(OP)
"We have no moat, and neither does OpenAI"

Dismiss it as the opinions of "a Googler" if you like, but it is entirely true. The seemingly coordinated worldwide[1] push to keep it in the hands of the power class speaks for itself.

Both are seemingly seeking to control not only the commercial use and wide distribution of such systems, but even the writing of them and personal use. This will keep even the knowledge of such systems and their capabilities in the shadows, ripe for abuse laundered through black box functions.

This is up there with the battle for encryption in ensuring a more human future. Don't lose it.

[1] https://technomancers.ai/eu-ai-act-to-target-us-open-source-...

◧◩◪◨
436. catiop+bk1[view] [source] [discussion] 2023-05-16 17:42:07
>>anigbr+cT
“ilk” has acquired a negative connotation in its modern usage.

See also https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....

449. denver+4q1[view] [source] 2023-05-16 18:13:17
>>vforgi+(OP)
I don't think Sam read the Google memo and realized they needed a moat -- I think they've been trying this for some time.

Here's their planned proposal for government regulation; they discuss not just limiting access to models but also to datasets, and possibly even chips.

This seems particularly relevant, on the discussion of industry standards, regulation, and limiting access:

"Despite these limitations, strong industry norms—including norms enforced by industry standards or government regulation—could still make widespread adoption of strong access restrictions possible. As long as there is a significant gap between the most capable open-source model and the most capable API-controlled model, the imposition of monitoring controls can deny hostile actors some financial benefit.166 Cohere, OpenAI, and AI21 have already collaborated to begin articulating norms around access to large language models, but it remains too early to tell how widely adopted, durable, and forceful these guidelines will prove to be.

Finally, there may be alternatives to APIs as a method for AI developers to provide restricted access. For example, some work has proposed imposing controls on who can use models by only allowing them to work on specialized hardware—a method that may help with both access control and attribution.168 Another strand of work is around the design of licenses for model use.169 Further exploration of how to provide restricted access is likely valuable."

https://arxiv.org/pdf/2301.04246.pdf

◧◩◪◨
473. shawab+xw1[view] [source] [discussion] 2023-05-16 18:48:31
>>brkebd+5b1
> basically, AI is someone capable of minimal criticism

That's not the definition of AI or intelligence.

You're letting your understanding of how LLMs work bias you. They may be, at their core, token autocompleters, but they have emergent intelligence.

https://en.m.wikipedia.org/wiki/Emergence

475. neonat+Vw1[view] [source] 2023-05-16 18:50:21
>>vforgi+(OP)
http://web.archive.org/web/20230516122128/https://www.reuter...
494. jamesh+Iz1[view] [source] 2023-05-16 19:03:22
>>vforgi+(OP)
This is an AP news wire article picked up by a Qatar newspaper website. Why is this version here, rather than https://apnews.com/article/chatgpt-openai-ceo-sam-altman-con...?
◧◩◪◨
529. polski+0C1[view] [source] [discussion] 2023-05-16 19:14:08
>>silver+pT
"Congress cannot regulate AI"

https://www.eff.org/deeplinks/2015/04/remembering-case-estab...

538. jwiley+kC1[view] [source] 2023-05-16 19:16:08
>>vforgi+(OP)
Turing police? https://williamgibson.fandom.com/wiki/Turing_Police
◧◩◪◨
565. shagie+ND1[view] [source] [discussion] 2023-05-16 19:21:56
>>slowmo+fi1
As a point of trivia, at one time "a" Mac was one of the fastest computers in the world.

https://www.top500.org/lists/top500/2004/11/ and https://www.top500.org/system/173736/

And while 1100 Macs wouldn't exactly be affordable, the idea of trying to limit commercial data centers gets amusing.

That system was "only" 12,250.00 GFlop/s - I could do that with a small rack of Mac M1 minis now for less than $10k and fewer computers than are in the local grade school computer room.
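
(Back-of-the-envelope check, treating peak FP32 throughput as comparable to Linpack Rmax, which it isn't exactly; the ~2.6 TFLOPS per M1 GPU figure is an approximation:)

    # The 2004 Mac cluster: Rmax ~= 12,250 GFlop/s
    top500_gflops = 12_250
    m1_gpu_gflops = 2_600                        # ~2.6 FP32 TFLOPS per M1 GPU (approx.)
    minis = -(-top500_gflops // m1_gpu_gflops)   # ceiling division -> 5
    print(minis, minis * 699)                    # 5 minis, ~$3,495 at the base price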

(and I'm being a bit facetious here) Local authorities looking at power usage and heat dissipation for marijuana growing places might find underground AI training centers.

◧◩◪◨
576. Hideou+ME1[view] [source] [discussion] 2023-05-16 19:25:13
>>ChrisC+Nz1
Choosing to let millions die rather than saying a racial slur is not "empathy": https://twitter.com/aaronsibarium/status/1622425697812627457...
581. fritzo+7F1[view] [source] 2023-05-16 19:26:26
>>vforgi+(OP)
Full video of testimony on CSPAN https://www.c-span.org/video/?528117-1/openai-ceo-testifies-...
◧◩◪◨⬒
585. reveli+kF1[view] [source] [discussion] 2023-05-16 19:26:59
>>someth+z01
"Are Emergent Abilities of Large Language Models a Mirage?"

https://arxiv.org/pdf/2304.15004.pdf

> our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale.
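
(The paper's core move in one toy calculation: a per-token accuracy that improves smoothly still "jumps" under an all-or-nothing exact-match metric over k tokens:)

    # Smooth per-token accuracy p vs. all-or-nothing exact match on k tokens.
    k = 20
    for p in (0.80, 0.90, 0.95, 0.99):
        print(f"per-token {p:.2f} -> exact-match {p**k:.3f}")
    # 0.80 -> 0.012, 0.90 -> 0.122, 0.95 -> 0.358, 0.99 -> 0.818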

593. johnyz+kG1[view] [source] 2023-05-16 19:30:59
>>vforgi+(OP)
The mainstream media cartel is pumping Sam Altman hard for some reason. Just from today (CNBC): "Sam Altman wows lawmakers at closed AI dinner: ‘Fantastic…forthcoming’" [1]. When was the last time you saw MSM suck up so hard to a Silicon Valley CEO? I see stories like this all the time now. They always play up the angle of the geeky whiz kid (so innocent!), whereas Sam Altman was always less a technologist and more of a relentless operator and self-promoter. Even Paul Graham subtly called that out, at the time he made him head of YC [2].

True to form, these articles also work hard at planting the idea that Sam Altman created OpenAI, when in fact he joined rather recently, in a business role. Are these articles being planted somehow? I find it very likely. Don't forget that this approach is also straight out of the YC playbook, disclosed in great detail by Paul Graham in previous writings [3].

Finally, in keeping with the conspiratorial tone of this comment, for another example of Sam Altman rubbing shoulders with The Establishment, his participation in things like the Bilderberg group [4] is a matter of public record. Which I join many others in finding creepy, even more so as he maneuvers to exert influence on policy around the seismic shift that is AI.

To be clear, I have nothing specific against sama. But I dislike underhanded influence campaigns, which this all reeks of. Oh yeah, I will consider downvotes to this comment as proof of the shadow (AI?) government's campaign to promote Sam Altman. Do your worst!

[1] https://www.cnbc.com/2023/05/16/openai-ceo-woos-lawmakers-ah...

[2] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma... ("Graham said, “I asked Sam in our kitchen, ‘Do you want to take over YC?,’ and he smiled, like, it worked. I had never seen an uncontrolled smile from Sam. It was like when you throw a ball of paper into the wastebasket across the room—that smile.”")

[3] http://www.paulgraham.com/submarine.html

[4] https://en.wikipedia.org/wiki/2016_Bilderberg_Conference

◧◩◪
612. thehof+aI1[view] [source] [discussion] 2023-05-16 19:39:08
>>alex_y+rE1
It’s a figure of speech.

https://en.m.wikipedia.org/wiki/Eat_the_Rich

◧◩
630. precom+mJ1[view] [source] [discussion] 2023-05-16 19:44:30
>>srslac+I7
Yes! I've been expressing similar sentiments whenever I see people hyping up "AI", although not written as well your comment.

Edit: List of posts for anyone interested http://paste.debian.net/plain/1280426

640. JieJie+YJ1[view] [source] 2023-05-16 19:47:20
>>vforgi+(OP)
Here are my notes from the last hour, watching on C-SPAN telecast, which is archived here:

https://www.c-span.org/video/?528117-1/openai-ceo-testifies-...

- Mazie Hirono, Junior Senator from Hawaii, has very thoughtful questions. Very impressive.

- Gary Marcus also up there speaking with Sam Altman of OpenAI.

- So far, Sen. Hirono and Sen. Padilla seem very wary of regulating AI at this time.

- Very concerned about not "replicating social media's failure", why is it so biased and inequitable. Much more reasonable concerns.

- Also responding to questions is Christina Montgomery, chair of IBM's AI Ethics Board.

- "Work to generate a representative set of values from around the world."

- Sen. Ossoff asking for definition of "scope".

- "We could draw a line at systems that need to be licensed. Above this amount of compute... Define some capability threshold... Models that are less capable, we don't want to stop open source."

- Ossoff wants specifics.

- "Persuade, manipulate, influence person's beliefs." should be licensed.

- Ossoff asks about predicting human behavior, i.e. use in law enforcement, "It's very important we understand these are tools, not to take away human judgment."

- "We have no national privacy law." — Sen Ossof "Do you think we need one?"

- Sam "Yes. User should be able to opt out of companies using data. Easy to delete data. If you don't want your data use to train, you have right to exclude it."

- "There should be more ways to have your data taken down off the public web." —Sam

- "Limits on what a deployed model is capable of and also limits on what it will answer." — Sam

- "Companies who depend upon usage time, maximize engagement with perverse results. I would humbly advise you to get way ahead of this, the safety of children. We will look very harshly on technology that harms children."

- "We're not an advertising based model." —Sam

- "Requirements about how the values of these systems are set and how they respond to questions." —Sam

- Sen. Booker up now.

- "For congress to do nothing, which no one is calling for here, would be exceptional."

- "What kind of regulation?"

- "We don't want to slow things down."

- "A nimble agency. You can imagine a need for that, right?"

- "Yes." —Christina Montgomery

- "No way to put this genie back in the bottle." Sen. Booker

- "There are more genies yet to come from more bottles." — Gary Marcus

- "We need new tools, new science, transparency." —Gary Marcus

- "We did know that we wanted to build this with humanity's best interest at heart. We could really deeply transform the world." —Sam

- "Are you ever going to do ads?" —Sen Booker

- "I wouldn't say never...." —Sam

- "Massive corporate concentration is really terrifying.... I see OpenAI backed by Microsoft, Anthropic is backed by Google. I'm really worried about that. Are you worried?" —Sen Booker?

- "There is a real risk of technocracy combined with oligarchy." —Gary Marcus

- "Creating alignment dataset has got to come very broadly from society." —Sam Senator Welch from Vermont up now

- "I've come to the conclusion it's impossible for congress to keep up with the speed of technology."

- "The spread of disinformation is the biggest threat."

- "We absolutely have to have an agency. Scope has to be defined by congress. Unless we have an agency, we really don't have much of a defense against the bad stuff, and the bad stuff will come."

- "Use of regulatory authority and the recognition that it can be used for good, but there's also legitimate concern of regulation being a negative influence."

- "What are some of the perils of an agency?"

- "America has got to continue to lead."

- "I believe it's possible to do both, have a global view. We want America to lead."

- "We still need open source to comply, you can still do harm with a smaller model."

- "Regulatory capture. Greenwashing." —Gary Marcus

- "Risk of not holding companies accountable for the harms they are causing today." —Christina Montgomery

- Lindsey Graham, very pro-licensing: "You don't build a nuclear power plant without a license, you don't build an AI without a license."

- Sen Blumenthal brings up Anti-Trust legislation.

- Blumenthal mentions how classified briefings already include AI threats.

- "For every successful regulation, you can think of five failures. I hope our experience here will be different."

- "We need to grapple with the hard questions here. This has brought them up, but not answered them."

- "Section 230"

- "How soon do you think gen AI will be self-aware?" —Sen Blumenthal

- "We don't understand what self-awareness is." —Gary Marcus

- "Could be 2 years, could be 20."

- "What are the highest risk areas? Ban? Strict rules?"

- "The space around misinformation. Knowing what content was generated by AI." —Christina Montgomery

- "Medical misinformation, hallucination. Psychiatric advice. Ersatz therapists. Internet access for tools, okay for search. Can they make orders? Can they order chemicals? Long-term risks." —Gary Marcus

- "Generative AI can manipulate the manipulators." —Blumenthal

- "Transparency. Accountability. Limits on use. Good starting point?" —Blumenthal

- "Industry should't wait for congress." —C. Montgomery

- "We don't have transparency yet. We're not doing enough to enforce it." —G. Marcus

- "AGI closer than a lot of people appreciate." —Blumenthall

- Gary and Sam are getting along and like each other now.

- Josh Hawley

- Talking about loss of jobs, invasion of personal privacy, manipulation of behavior, opinion, and degradation of free elections in America.

- "Are they right to ask for a pause?"

- "It did not call for a ban on all AI research or all AI, only on very specific thing, like GPT-5." -G Marcus

- "Moratorium we should focus on is deployment. Focus on safety." —G. Marcus

- "Without external review."

- "We waited more than 6 months to deploy GPT-4. I think the frame of the letter is wrong." —Sam

- Seems to not like the arbitrariness of "six months."

- "I'm not sure how practical it is to pause." —C. Montgomery

- Hawley brings up regulatory capture: agencies usually get controlled by the people they're supposed to be watching. "Why don't we just let people sue you?"

- If you were harmed by AI, why not just sue?

- "You're not protected by section 230."

- "Are clearer laws a good thing? Definitely, yes." —Sam

- "Would certainly make a lot of lawyers wealthy." —G. Marcus

- "You think it'd be slower than congress?" —Hawley

- "Copyright, wholesale misinformation laws, market manipulation?" Which laws apply? System not thought through? Maybe 230 does apply? We don't know.

- "We can fix that." —Hawley

- "AI is not a shield." —C. Montgomery

- "Whether they use a tool or a human, they're responsible." —C. Montgomery

- "Safeguards and protections, yes. A flat stop sign? I would be very, very worried about." —Blumenthall

- "There will be no pause." Sen. Booker "Nobody's pausing."

- "I would agree." Gary Marcus

- "I have a lot of concerns about corporate intention." Sen Booker

- "What happens when these companies that already control so much of our lives when they are dominating this technology?" Booker

- Sydney really freaked out Gary. He was more freaked out when MS didn't withdraw Sydney like it did Tay.

- "I need to work on policy. This is frightening." G Marcus

- Cory admits he is a tech bro (lists relationships with investors, etc)

- "The free market is not what it should be." —C. Booker

- "That's why we started OpenAI." —Sam "We think putting this in the hands of a lot of people rather than the hands of one company." —Sam

- "This is a new platform. In terms of using the models, people building are doing incredible things. I can't believe you get this much technology for so little money." —Sam

- "Most industries resist reasonable regulation. The only way we're going to see democratization of values is if we enforce safety measures." —Cory Booker

- "I sense a willingness to participate that is genuine and authentic." —Blumenthal

◧◩◪◨⬒⬓
649. happyt+bL1[view] [source] [discussion] 2023-05-16 19:53:40
>>hammyh+gw1
> OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.

https://en.wikipedia.org/wiki/OpenAI

Just FYI, what you're saying isn't accurate. It was, but it's not anymore.

651. zvolsk+oL1[view] [source] 2023-05-16 19:54:28
>>vforgi+(OP)
While I remain undecided on the matter, this whole debate is reminiscent of Karel Čapek's War with the Newts [1936]. In particular the public discourse from a time before the newts took over. "It would certainly be an overstatement to say that nobody at that time ever spoke or wrote about anything but the talking newts. People also talked and wrote about other things such as the next war, the economic crisis, football, vitamins and fashion; but there was a lot written about the newts, and much of it was very ill-informed. This is why the outstanding scientist, Professor Vladimir Uher (University of Brno), wrote an article for the newspaper in which he pointed out that the putative ability of Andrias Scheuchzer to speak, which was really no more than the ability to repeat spoken words like a parrot, ..." Note the irony of the professor's attempt to improve an ill-informed debate by contributing his own piece of misinformation, equating newt speech to mere parrot-like mimicry.

Čapek, intriguingly, happens to be the person who first used the word robot, which was coined by his brother.

http://gutenberg.net.au/ebooks06/0601981h.html

◧◩◪◨⬒⬓
654. shagie+BL1[view] [source] [discussion] 2023-05-16 19:56:01
>>anigbr+IF1
I would be curious to see an example of 'ilk' being used in a modern, non-Scottish context where the association is shown in a neutral or positive light.

I'll give you one: National Public Lands Day: Let’s Help Elk … and Their Ilk - https://pressroom.toyota.com/npld-2016-elk/ (it's a play on words)

◧◩◪
672. happyt+eO1[view] [source] [discussion] 2023-05-16 20:08:35
>>anilea+TH1
Absolutely. It's going to absolutely shred the trademark and copyright systems, if they even apply (or are extended to apply), which is a murky area right now. And even then, the sheer volume of material created by a geometric improvement in, and subsequent cost destruction of, virtually every intellectual and artistic endeavor or product means that even if you hold the copyright or trademark, good luck paying for enforcement on the vast ocean of violations intrinsic in the shift.

What people also fail to understand is that AI is largely seen by the military industrial complex as a weapon to control culture and influence. The most obvious risk of AI — the risk of manipulating human behavior towards favored ends — has been shown to be quite effective right out of the gate. So, the back channel conversation has to be to put it under regulation because of its weaponization potential, especially considering the difficulty in identifying anyone (which of course is exactly what Elon is doing with X 2.0 — it's a KYC ID platform to deal with this exact issue, with a 220M-user, 40B head start).

I mean, the dead internet theory is coming true, and half the traffic on the Web is already bot driven. Imagine when it's 99%, which proliferation of this technology will inevitably produce, simply for the economics.

Starting with open source is the only way to get enough people looking at the products to create any meaningful oversight, but I fear the weaponization fears will mean that everything is locked away in license clouds with politically influential regulatory boards simply on the proliferation arguments. Think of all the AI technologists who won't be versed in this technology unless they work at a "licensed company" as well — this is going to make the smaller population of the West much less influential in the AI arms race, which is already underway.

To me, it's clear that nobody in Silicon Valley or on the Hill has learned a damn thing from the prosecution of hackers and the subsequent bloodbath in cybersecurity that resulted from the exact same kinds of behavior back in the early to mid-2000s. We ended up driving our best and brightest into the grey and black areas of infosec and security, instead of out in the open running companies where they belong. This move would do almost the exact same thing to AI, though I think you have to be a tad of an Asimov or Bradbury fan to see it right now.

I don't know, that's just how I see it, but I'm still forming my opinions. LOVE LOVE LOVE your comment though. Spot on.

Relevant articles:

https://www.independent.co.uk/tech/internet-bots-web-traffic...

https://theconversation.com/ai-can-now-learn-to-manipulate-h....

676. martin+EO1[view] [source] 2023-05-16 20:10:47
>>vforgi+(OP)
Isn't it too late? Isn't the cat out of the bag? https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...

Meaning anyone could eventually reproduce a ChatGPT/GPT-4 and beyond. And eventually it can run outside of a large data center.

So... how will you tell it's an AI vs a human doing you wrong?

Seems to me if the AI breaks the law, find out who's driving it and prosecute them.

682. johndb+oP1[view] [source] 2023-05-16 20:14:27
>>vforgi+(OP)
Video: https://www.c-span.org/video/?528117-1/openai-ceo-testifies-...
◧◩◪◨⬒⬓
692. shagie+fQ1[view] [source] [discussion] 2023-05-16 20:19:21
>>nerpde+VI1
My "in my copious free time" ML project is a classifier for cat pictures to reddit cat subs.

For example: https://commons.wikimedia.org/wiki/File:Cat_August_2010-4.jp... would get classified as /r/standardissuecat

https://stock.adobe.com/fi/images/angry-black-cat/158440149 would get classified as /r/blackcats and /r/stealthbombers

Anyways... that's my hobbyist ML project.
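
(Not how shagie built it, which isn't described; just the standard transfer-learning recipe such a classifier might use, assuming torchvision >= 0.13 and a hypothetical data/ folder with one subdirectory per target subreddit:)

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])
    # Hypothetical layout: data/standardissuecat/, data/blackcats/, ...
    ds = datasets.ImageFolder("data", transform=tfm)
    loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(ds.classes))  # new head

    opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune the head only
    loss_fn = nn.CrossEntropyLoss()
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()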

◧◩◪
700. happyt+gR1[view] [source] [discussion] 2023-05-16 20:23:41
>>downWi+CO1
Hear, hear. Excellent point, and I don't mean to imply it shouldn't be regulated. However, it has been my general experience that concentrating immense power in governments doesn't typically lead to more security, so perhaps we just have a difference of philosophy.

Democracy will not withstand AI when it's fully developed. Let me offer a better-written explanation of my general views than I could ever muster up for a comment on HN, in the form of a quote from an article by Dr. Thorsten Thiel (Head of the Research Group "Democracy and Digitalization" at the Weizenbaum Institute for the Networked Society):

> The debate on AI’s impact on the public sphere is currently the one most prominent and familiar to a general audience. It is also directly connected to long-running debates on the structural transformation of the digital public sphere. The digital transformation has already paved the way for the rise of social networks that, among other things, have intensified the personalization of news consumption and broken down barriers between private and public conversations. Such developments are often thought to be responsible for echo-chamber or filter-bubble effects, which in turn are portrayed as root causes of the intensified political polarization in democracies all over the world. Although empirical research on filter bubbles, echo chambers, and societal polarization has convincingly shown that the effects are grossly overestimated and that many non-technology-related reasons better explain the democratic retreat, the spread of AI applications is often expected to revive the direct link between technological developments and democracy-endangering societal fragmentation.

> The assumption here is that AI will massively enhance the possibilities for analyzing and steering public discourses and/or intensify the automated compartmentalizing of will formation. The argument goes that the strengths of today's AI applications lie in the ability to observe and analyze enormous amounts of communication and information in real time, to detect patterns and to allow for instant and often invisible reactions. In a world of communicative abundance, automated content moderation is a necessity, and commercial as well as political pressures further effectuate that digital tools are created to oversee and intervene in communication streams. Control possibilities are distributed between users, moderators, platforms, commercial actors and states, but all these developments push toward automation (although they are highly asymmetrically distributed). Therefore, AI is baked into the backend of all communications and becomes a subtle yet enormously powerful structuring force.

> The risk emerging from this development is twofold. On the one hand, there can be malicious actors who use these new possibilities to manipulate citizens on a massive scale. The Cambridge Analytica scandal comes to mind as an attempt to read and steer political discourses (see next section on electoral interference). The other risk lies in a changing relationship between public and private corporations. Private powers are becoming increasingly involved in political questions and their capacity to exert opaque influences over political processes has been growing for structural and technological reasons. Furthermore, the reshaping of the public sphere via private business models has been catapulted forward by the changing economic rationality of digital societies such as the development of the attention economy. Private entities grow stronger and become less accountable to public authorities; a development that is accelerated by the endorsement of AI applications which create dependencies and allow for opacity at the same time. The ‘politicization’ of surveillance capitalism lies in its tendency, as Shoshana Zuboff has argued, to not only be ever more invasive and encompassing but also to use the data gathered to predict, modify, and control the behavior of individuals. AI technologies are an integral part in this ‘politicization’ of surveillance capitalism, since they allow for the fulfilment of these aspirations. Yet at the same time, AI also insulates the companies developing and deploying it from public scrutiny through network effects on the one hand and opacity on the other. AI relies on massive amounts of data and has high upfront costs (for example, the talent required to develop it, and the energy consumed by the giant platforms on which it operates), but once established, it is very hard to tame through competitive markets. Although applications can be developed by many sides and for many purposes, the underlying AI infrastructure is rather centralized and hard to reproduce. As in other platform markets, the dominant players are those able to keep a tight grip on the most important resources (models and data) and to benefit from every individual or corporate user. Therefore, we can already see that AI development tightens the grip of today’s internet giants even further. Public powers are expected to make increasing use of AI applications and therefore become ever more dependent on the actors that are able to provide the best infrastructure, although this infrastructure, for commercial and technical reasons, is largely opaque.

> The developments sketched out above – the heightened manipulability of public discourse and the fortification of private powers – feed into each other, with the likely result that many of the deficiencies already visible in today’s digital public spheres will only grow. It is very hard to estimate whether these developments can be counteracted by state action, although a regulatory discourse has kicked in and the assumption that digital matters elude the grasp of state regulation has often been proven wrong in the history of networked communication. Another possibility would be a creative appropriation of AI applications through users whose democratic potential outweighs its democratic risks thus enabling the rise of differently structured, more empowering and inclusive public spaces. This is the hope of many of the more utopian variants of AI and of the public sphere literature, according to which AI-based technologies bear the potential of granting individuals the power to navigate complex, information-rich environments and allowing for coordinated action and effective oversight (e.g. Burgess, Zarkadakis).

Source: https://us.boell.org/en/2022/01/06/artificial-intelligence-a...

Social bots and deep fakes will be so good so quickly — the primary technologies being talked about in terms of how Democracy can survive — that I doubt there will be another election without extensive use of these technologies, in a true plethora of capacities from influence marketing to outright destabilization campaigns. I'm not sure what government can deal with a threat like that, but I suspect the recent push to revise tax systems and create a single global standard for multinational taxation, recently the subject of an excellent talk at the WEF, is more than tangentially related to the AI debate.

So, is it a transformational technology that will liberate mankind, or a nuclear bomb? Because ultimately, this is the question in my mind.

Excellent comment, and I agree with your sentiment. I just don't think concentrating control of the technology before it's really developed is wise or prudent.

◧◩◪◨⬒⬓⬔
713. cma+FS1[view] [source] [discussion] 2023-05-16 20:31:18
>>tomrod+1L
Did you read the Wired interview?

> “I listened to him thinking he was going to be crazy. I don't think he's crazy at all,” Hinton says. “But, okay, it’s not helpful to talk about bombing data centers.”

https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dange...

So, he doesn't think the most extreme guy is crazy whatsoever, just misguided in his proposed solutions. But Eliezer, for instance, has said something pretty close to: AI might escape by entering the quantum Konami code which the simulators of our universe put in as a joke, and we should entertain nuclear war before letting it get that chance.

◧◩◪◨
730. precom+lU1[view] [source] [discussion] 2023-05-16 20:40:50
>>dahwol+IP1
Oh yeah. And labeling it as an "AI" further obfuscates it. But apart from small gestures catered to people whose work is very "unique" / identifiable, no one else will get a kickback. They only need to kick the ball further for a couple more years and then it'll become a non-issue as linkrot takes over. Or maybe they use non-public domain stuff, maybe they have secret deals with publishers.

Heck, sometimes even google doesn't pay people for introducing new languages to their translation thingy.

https://restofworld.org/2023/google-translate-sorani-kurdish...

736. Animat+IU1[view] [source] 2023-05-16 20:43:22
>>vforgi+(OP)
This is a diversion from the real problem. Regulating AI is really about regulating corporate behavior. What's needed is regulation along these lines:

* Automated systems should not be permitted to make adverse decisions against individuals. This is already law in the EU, although it's not clear if it is enforced. This is the big one. Any company using AI to make decisions which affect external parties in any way must not be allowed to require any waiver of the right to sue, participate in class actions, or have the case heard by a jury. Those clauses companies like to put in EULAs would become invalid as soon as an AI is involved anywhere.

* All marketing content must be signed by a responsible party. AI systems increase the amount of new content generated for marketing purposes substantially. This is already required in the US, but weakly enforced. Both spam and "influencers" tend to violate this. The problem isn't AI, but AI makes it worse, because it's cheaper than troll farms, and writes better.

* Anonymous political speech may have to go. That's a First Amendment right in the US, but it's not unlimited. You should be able to say anything you're willing to sign.[1] This is, again, the troll farm problem, and, again, AIs make it worse.

That's probably enough to deal with the immediate problems.

[1] https://mtsu.edu/first-amendment/article/32/anonymous-speech
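
(On the signing point: the mechanics are the easy half and exist off the shelf; here is a minimal sketch with the Python cryptography package. Binding keys to legally responsible parties is the hard, non-code half:)

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The responsible party signs the content once...
    key = Ed25519PrivateKey.generate()
    content = b"Buy FooCorp widgets! (paid promotion, FooCorp Inc.)"
    signature = key.sign(content)

    # ...and anyone can verify it against the published public key.
    key.public_key().verify(signature, content)  # raises InvalidSignature if forged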

782. xnx+102[view] [source] 2023-05-16 21:12:42
>>vforgi+(OP)
Not the first time that OpenAI has claimed their technology is so good it's dangerous. (From early 2019: https://techcrunch.com/2019/02/17/openai-text-generator-dang...) This is the equivalent of martial artists saying that their hands have to be registered as deadly weapons.
◧◩◪
812. taf2+o42[view] [source] [discussion] 2023-05-16 21:36:28
>>nico+YF
This is the direct result of the merger of Sprint and T-Mobile. They swore up and down to Congress they would NOT raise prices on consumers[0]. So instead they turned around and, like gangsters would, said to every business in the US sending text messages: "It'd be a real shame if those text reminders you wanted to send stopped working... Good thing you can instead pay us $40 / month to be sure those messages are delivered."

At the same time At&T and Verizon saying oh snap let's make money on this too and still being pissed about Stir Shaken so to get ahead of it for Texting before Congress forces it on them. This way they can make money on it before it's forced.

[0] https://fortune.com/2019/02/04/t-mobiles-john-legere-promise...

◧◩◪◨⬒⬓
815. jamesh+Q42[view] [source] [discussion] 2023-05-16 21:39:28
>>Aperoc+pD1
Seems like it’s under-studied (due to anglophone bias in the English language political science world probably) - but comparative political science is a discipline, and this paper suggests it’s a matter of single-member districts rather than the nature of the constitutional arrangement: https://journals.sagepub.com/doi/10.1177/0010414090022004004

(I would just emphasize, before anyone complains, that the Federal Republic of Germany is very much a federal republic.)

830. uptown+V52[view] [source] 2023-05-16 21:45:52
>>vforgi+(OP)
Did he bring his Dreamwindow?

https://twitter.com/next_on_now/status/1653837352198873090?s...

◧◩◪◨⬒⬓
871. Alexan+Ec2[view] [source] [discussion] 2023-05-16 22:25:18
>>davegu+Ng1
Remember DeCSS[1], and later how the AACS LA successfully litigated the banning of a number[2]? There was a lot of backlash in the form of distributing DeCSS and later the AACS key, but the DMCA and related WIPO treaties were never repealed and are still used to do things like take youtube-dl repos offline.

Even pretty draconian legislation can stand if it doesn't affect the majority of the country and is supported by powerful industry groups. I could definitely see some kind of compiler licensing requirement meeting these criteria.

[1] https://en.wikipedia.org/wiki/DeCSS#Legal_response

[2] https://en.wikipedia.org/wiki/AACS_encryption_key_controvers...

◧◩
879. Alexan+ed2[view] [source] [discussion] 2023-05-16 22:28:28
>>joebob+YC1
Were they ever really scrappy? They had a ton of funding from the get-go.

> In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[13] the formation of OpenAI and pledged over $1 billion to the venture.

[1] https://en.wikipedia.org/wiki/OpenAI

◧◩◪◨
891. smcin+ne2[view] [source] [discussion] 2023-05-16 22:36:51
>>EGreg+lT1
That was entirely different, and a play to muddy the regulatory waters and maybe buy him time: the CFTC is much smaller (budget, staff) than the SEC, and less aggressive in criminal enforcement. Aided by a bill introduced by crypto-friendly Sens Lummis and Gillibrand [https://archive.ph/vqHgC].
◧◩◪
902. stingr+gg2[view] [source] [discussion] 2023-05-16 22:51:43
>>varian+au1
Didn’t Facebook / Meta also do something similar during the whole “fake news” controversy?

https://www.cnbc.com/2020/02/15/facebook-ceo-zuckerberg-call...

◧◩◪◨
915. yarg+ei2[view] [source] [discussion] 2023-05-16 23:05:50
>>pg_123+AP1
There's no chance that we've peaked in a bang-for-buck sense - we still haven't adequately investigated sparse networks.

Relevantish: https://arxiv.org/abs/2301.00774

The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.

Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.

If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
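
(For a concrete feel of those sparsity levels: the linked paper does one-shot pruning on GPT-scale models, but the basic magnitude-pruning idea fits in a few lines of stock PyTorch; a sketch, not the paper's method:)

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    layer = nn.Linear(4096, 4096)
    prune.l1_unstructured(layer, name="weight", amount=0.9)  # zero the smallest 90% by magnitude

    sparsity = (layer.weight == 0).float().mean().item()
    print(f"sparsity: {sparsity:.2%}")  # ~90.00%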

945. concor+Gm2[view] [source] 2023-05-16 23:32:52
>>vforgi+(OP)
I'm seeing a lot of posts by people who obviously haven't read the full transcript, given that it specifically discusses regulatory capture and the need to ensure small companies can still do AI development.

See 2:09:40 in https://www.youtube.com/live/iqVxOZuqiSg

◧◩◪◨
950. candio+Gn2[view] [source] [discussion] 2023-05-16 23:38:54
>>runarb+1C1
AI is being used as a consumer good, including to discriminate:

https://www.smh.com.au/national/nsw/maximise-profits-facial-...

AI is being used by law enforcement and public institutions. In fact so much that perhaps this is a good link:

https://www.monster.com/jobs/search?q=artificial+intelligenc...

In both cases it's too late to do anything about it. AI is "loose". Oh and I don't know if you noticed, governments have collectively decided law doesn't apply to them, only to their citizens, and only in a negative way. For instance, just about every country has laws on the books guaranteeing timely emergency care at hospitals, with timely defined as within 1 or 2 hours.

Waiting times are 8-10 hours (going up to days) and this is the normal situation now; it's not a New Year's Eve or even Friday evening thing anymore. You have the "right" to less waiting time, which can only mean the government (the worst hospitals are public ones) should be forced to fix this, spending whatever it takes to fix it. And it can be fixed; at this point you'd have to give physicians and nurses a 50% raise, double the number employed, and 10x the number in training.

Government is just outright not doing this, and if one thing's guaranteed, this will keep getting worse, a direct violation of your rights in most states, for the next 10 years minimum, but probably longer.

◧◩◪◨⬒
962. cma+bq2[view] [source] [discussion] 2023-05-16 23:55:15
>>yarg+ei2
50% sparsity is almost certainly already being used, given that current Nvidia hardware accelerates it at inference time, and it's usable dynamically at training time through RigL ("Rigging the Lottery: Making All Tickets Winners", https://arxiv.org/pdf/1911.11134.pdf), which also addresses your point about initial conditions being locked in.
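
(Rough single-layer sketch of the RigL drop/regrow step as I read the paper; the real method adds an update schedule, per-layer sparsity allocation, and a decaying swap fraction:)

    import torch

    def rigl_step(weight, mask, grad, swap_frac=0.3):
        """Drop the weakest active weights; regrow as many inactive ones
        where the dense gradient is largest. New weights start at zero."""
        n_swap = int(swap_frac * mask.sum())
        active = mask.bool()

        # Drop: smallest-magnitude active connections.
        mag = torch.where(active, weight.abs(), torch.full_like(weight, float("inf")))
        drop = torch.topk(mag.view(-1), n_swap, largest=False).indices
        mask.view(-1)[drop] = 0.0

        # Grow: largest-gradient connections among the previously inactive.
        g = torch.where(active, torch.zeros_like(grad), grad.abs())
        grow = torch.topk(g.view(-1), n_swap).indices
        mask.view(-1)[grow] = 1.0
        weight.data.view(-1)[grow] = 0.0
        return mask
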
1015. quickt+7E2[view] [source] 2023-05-17 01:41:44
>>vforgi+(OP)
The easy take is to be cynical: he is now building his drawbridge.

But taking him as a genuine "concerned citizen": I don't think AI licensing is going to be effective. The government is pretty useless at punishing big corporations, to the point where I would say corporations have almost total immunity from criminal prosecution [1]. So for the kinds of companies that will do bad things with AI, the need for a license won't stop them. Especially as it is hard for anyone to see what they are running on their GPUs.

[1] https://ag.ny.gov/press-release/2023/attorney-general-james-...

1026. Recycl+XI2[view] [source] 2023-05-17 02:29:08
>>vforgi+(OP)
When I taught at a business school, our textbooks told us that once a company had a large lead in a field, they should ask for regulation. Regulations build walls to protect their lead by increasing the cost to compete against them.

I believe this is what OpenAI is doing, and it makes me sad as a teacher.

AI is the greatest tool for equity and social justice in history. Any poor person with Internet access can learn (almost) anything from ChatGPT (http://chat.openai.com)

A bright student trapped in a garbage school where the kid to his right is stoned and the kid to his left is looking up porn on a phone can learn from personalized AI tutors.

While some complain that AI will take our jobs, they are ignoring the effect of competition. Humans will become smarter with AI tutors. Humans will become more capable with AI assistants. With AI an individual can compete with a large corporation. It reminds me of the early days of the World Wide Web and the "Online, nobody knows you are a dog" memes.

I hope the best chance many bright but poor kids have is not taken away to protect the power bases of the rich and powerful. They deserve a chance.

◧◩
1033. fuzzfa+YL2[view] [source] [discussion] 2023-05-17 03:02:47
>>happyt+ZB1
When Zappa testified before Congress he was extremely adamant that the unsavory outcomes of government control over expression would be more damaging than any unsavory language on its own.

https://societyofrock.com/in-1985-frank-zappa-is-asked-to-te...

Less fulfilling text version:

https://urbigenous.net/library/zappa.html

◧◩◪
1049. selimt+qO2[view] [source] [discussion] 2023-05-17 03:30:22
>>ben_w+eL1
https://news.ycombinator.com/item?id=35872321
1073. upward+uT2[view] [source] 2023-05-17 04:28:39
>>vforgi+(OP)
One thing that I think is very interesting, highlighted in this other article https://apnews.com/article/chatgpt-openai-ceo-sam-altman-con... , is that Mr. Altman warns we are on the verge of very troubling A.I. escape scenarios. He specifically said that there should be a ban on A.I. that can "self-replicate and self-exfiltrate into the wild". The fact that he thinks such a thing could happen in the near future is f**ing terrifying. That would be the first step in A.I. escaping human control and posing a grave threat to our species' survival.
◧◩◪◨⬒⬓
1083. vinay_+LX2[view] [source] [discussion] 2023-05-17 05:17:03
>>code_w+q12
Well, chips needed for AI training/inference are a lot simpler than general-purpose CPUs. Fabs have already demonstrated a 7nm process with older DUV tech for such chips. They can brute-force their way through it – at least for mission-critical use-cases.

https://www.edn.com/the-truth-about-smics-7-nm-chip-fabricat...

◧◩
1114. iNic+Vo3[view] [source] [discussion] 2023-05-17 09:52:25
>>rockem+6P2
Having also watched the hearing, I was pretty surprised at all the negativity in the comments. My view of Sam Altman has improved after watching the hearings. He seems to sincerely believe that he is doing the right thing. He owns zero equity in OpenAI and has no financial incentive. Of course, if you don't buy the AI-might-be-dangerous argument, then this seems like mere theatrics. But there are clear threats with the existing models [1], and I believe there will be even greater threats in the future (see Superintelligence or The Precipice or Human Compatible). Also this [2], and this master list of failures [3].

[1]: https://arxiv.org/abs/2305.06972

[2]: https://arxiv.org/abs/2210.01790

[3]: https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3Hs...

◧◩◪◨⬒⬓⬔
1117. darker+lu3[view] [source] [discussion] 2023-05-17 10:47:18
>>edgyqu+HT2
Source? I'm going to need a receipt for my downvote!

Here's mine: https://movies.stackexchange.com/questions/10572/is-this-quo...

◧◩◪◨⬒⬓
1136. adamsm+224[view] [source] [discussion] 2023-05-17 14:25:45
>>srslac+vz2
This is demonstrably wrong. It can clearly generate unique text not from its training corpus and can successfully answer logic-based questions that were also not in its training corpus.

Another paper, not from Msft, showing emergent task capabilities across a variety of LLMs as scale increases:

https://arxiv.org/pdf/2206.07682.pdf

You can hem and haw all you want but the reality is these models have internal representations of the world that can be probed via prompts. They are not stochastic parrots no matter how much you shout in the wind that they are.

◧◩◪◨⬒
1137. adamsm+b34[view] [source] [discussion] 2023-05-17 14:30:25
>>srslac+Sy2
It's incredibly easy to show that you are wrong and the models perform at high levels on questions that are clearly not in their training data.

Unless you think OpenAI is blatantly lying about this:

"A.1 Sourcing. We sourced either the most recent publicly-available official past exams, or practice exams in published third-party 2022-2023 study material which we purchased. We cross-checked these materials against the model’s training data to determine the extent to which the training data was not contaminated with any exam questions, which we also report in this paper."

"As can be seen in tables 9 and 10, contamination overall has very little effect on the reported results."

They also report results on uncontaminated data which shows basically no statistical difference.

https://cdn.openai.com/papers/gpt-4.pdf

◧◩◪◨⬒⬓⬔⧯▣
1141. leonid+Ql4[view] [source] [discussion] 2023-05-17 15:37:58
>>canjob+IP2
Few people saw it coming in just two years, sure. But most people following this space were already expecting a big evolution like the one we saw in 5-ish years.

For example, take this thread: https://news.ycombinator.com/item?id=21717022

It's a text RPG built on top of GPT-2 that could follow arbitrary instructions. It was a full project with custom training for something that you can get with a single prompt on ChatGPT nowadays, but it clearly showcased what LLMs were capable of and things we take for granted now. It was clear, back then, that at some point ChatGPT would happen.

◧◩◪◨⬒⬓
1154. nullse+4q5[view] [source] [discussion] 2023-05-17 20:37:23
>>chasd0+4E2
That individual in particular was pushing some left-wing talking points.

Though the other day Yuval Noah Harari gave a great talk on the potential threat to democracy - https://youtu.be/LWiM-LuRe6w

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1155. nullse+Ar5[view] [source] [discussion] 2023-05-17 20:44:53
>>edgyqu+mj4
Any evidence or sources for that? I just don't know how that would be knowable to any of us.

Yuval Noah Harari gave a great talk the other day on the potential threat to democracy from the current state of the technology - https://youtu.be/LWiM-LuRe6w

◧◩◪◨
1164. simonh+QI9[view] [source] [discussion] 2023-05-19 03:44:35
>>elil17+PI3
That’s the alignment problem. We don’t know what the actual goals of an AI trained neural net are. We know what criteria we trained it against, but it turns out that’s not at all the same thing.

I highly recommend Rob Miles' channel on YouTube. Here's a good one, but they're all fascinating. It turns out training an AI to have the actual goals we want it to have is fiendishly difficult.

https://youtu.be/hEUO6pjwFOo
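
(A toy, Goodhart-flavored illustration of that criterion-vs-goal gap, mine rather than Rob Miles': the proxy we optimize agrees with the true goal at first, then comes apart:)

    # True goal vs. the measurable training criterion (a proxy).
    def true_utility(x):        # what we actually want maximized
        return -(x - 3.0) ** 2  # best at x = 3

    def proxy_reward(x):        # what we measure and optimize
        return x                # agrees with the goal only while x < 3

    x = 0.0
    for _ in range(100):        # naive ascent on the proxy
        x += 0.1                # (gradient of proxy_reward is 1 everywhere)
    print(x, true_utility(x))   # x ~ 10, utility ~ -49: proxy up, goal wrecked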

◧◩◪◨⬒⬓⬔⧯▣▦
1169. hackin+hrg[view] [source] [discussion] 2023-05-21 18:05:04
>>iavael+0Ce
>I don't see any reason why thing must be different this time.

The difference is that AGI isn't a static tool. If some constraint is a limiting factor on economic activity, inventing a tool to eliminate the constraint uncorks new economic potential, and the real economy expands to exploit the new opportunities. But such tools were historically narrowly focused, so the new space of economic opportunity was left for human labor to engage with. AGI breaks this trend. Any knowledge work can in principle be captured by AGI. There is nothing "beyond" the function of AGI for human labor en masse to engage productively with.

To be clear, my point in the parent was from extrapolating current trends to a near-term (10-20 years) proto-AGI. LLMs as they currently stand certainly won't put 90% of people out of work. But it is severely short-sighted to refuse to consider the trends and where the increasing sophistication of generalist AIs (not necessarily AGI) is taking society.

>Could you provide any sources on this topic? This is a new information for me here.

Graph: https://files.epi.org/charts/img/91494-9265.png

Source: https://www.epi.org/publication/understanding-the-historic-d...

◧◩◪◨⬒⬓⬔⧯
1173. mitthr+VvF[view] [source] [discussion] 2023-05-30 00:18:10
>>Satam+2gb
Take your pick?

https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/

https://intelligence.org/files/AlignmentHardStart.pdf

https://www.youtube.com/watch?v=pYXy-A4siMw

https://europepmc.org/article/med/26185241

https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

I could keep going, but the reading on this spans tens of thousands of pages of detailed reasoning and analysis, including mathematical proofs and lab-scale experiments.

[go to top]