Runs to Congress to suggest new regulations against open-source AI models, aiming to wipe them out and brand them non-compliant, unlicensed, and unsafe for general use, using AI safety as a scapegoat yet again.
After that, quietly pushes a pseudo-open-source AI model that is compliant but limited compared to the closed models, in an attempt to eliminate the majority of open-source AI companies who can't get such licenses.
So a clever tactic: create new regulations that benefit them (O̶p̶e̶n̶AI.com) over everyone else, meaning less transparency, more hurdles for actual open AI research, and additional bureaucracy. Don't forget that Altman is also selling his Worldcoin dystopian crypto snake-oil project as the 'antidote' for verifying against everything getting faked by AI. [0] He is hedged either way.
So congratulations to everyone here supporting these gangsters at O̶p̶e̶n̶AI.com as they push for regulatory capture.
[0] https://worldcoin.org/blog/engineering/humanness-in-the-age-...
At this moment, 80% of comments are derisive, and you have zero idea how much of that is computer-generated bot content meant to sway opinion, posted by a post-GPT AI industry that sees itself becoming the next iPhone-era billionaires. We are fast approaching a reality where our information space breaks down: where almost all text you get from HN, Twitter, news, Substack, and almost all video you get from YouTube, Instagram, TikTok, is just computer-generated output meant to sway opinion and/or make money.
I can't know Altman's true motives. But this is also what it looks like when a frontrunner is terrified of what happens when GPT-6 is released, knowing that if they stop, the rest of the people who see billionaire money coming their way are close at their heels, trying to leapfrog them. Consequences? What consequences? We all know social media has been a net good, right? Many of you sound exactly like the few remaining social media cheerleaders (of which there were plenty 5 years ago) who still think Facebook, Instagram, and Twitter aren't causing depression and manipulation. If you appreciated what The Social Dilemma illuminated, then watch the same people on AI: https://www.youtube.com/watch?v=xoVJKj8lcNQ
Still, there are probably a lot of people like me who have heard it used (incorrectly, it seems) as an insult so many times that it's an automatic response :-(
https://www.telecomtv.com/content/digital-platforms-services...
https://writingillini.com/2023/05/16/illinois-basketball-ill...
Chair Richard Blumenthal (CT), Amy Klobuchar (MN), Chris Coons (DE), Mazie Hirono (HI), Alex Padilla (CA), Jon Ossoff (GA)
Majority Office: 202-224-2823
Ranking Member Josh Hawley (MO), John Kennedy (LA), Marsha Blackburn (TN), Mike Lee (UT), John Cornyn (TX)
Minority Office: 202-224-4224
If you’re in those states, please call their D.C. office and read them the comment you’re leaving here.
Why do you think they are attempting to release a so-called 'open source' [0] and 'compliant' AI model? To wipe out competing open-source AI models by labeling them as unlicensed and dangerous. They know that transparent, open-source AI models are a threat. Hence this move.
They have no moat against open source unless they can wield regulations that suit them against competitors who use open-source models.
O̶p̶e̶n̶AI.com is a scam. On top of the Worldcoin crypto scam that Sam Altman is also selling as an antidote to the unstoppable generative-AI hype: verifying human eyeballs on the blockchain with an orb. I am not joking. [1] [2]
[0] https://www.reuters.com/technology/openai-readies-new-open-s...
[1] https://worldcoin.org/blog/engineering/humanness-in-the-age-...
[2] https://worldcoin.org/blog/worldcoin/designing-orb-universal...
[1] https://www.yahoo.com/news/worldcoin-chatgpt-sam-altman-ethe...
https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....
Also, note that all of the negative examples are politics-related. If a politician reads the word 'ilk', it is going to be interpreted negatively. It may be that in politics, 'ilk' always carries a negative connotation.
You could change 'ilk' to 'friends', and keep the same meaning with very little negative connotation. There is still a slight negative connotation here, in the political arena, but it's a very vague shade, and I like it here.
"Altman and his ilk try to claim that..." is a negative phrase because "ilk" is negative, but also because "try to claim" is invalidating and dismissive. So this has elements or notes of an emotional attack, rather than a purely rational argument. If someone is already leaning towards Altman's side, then this will feel like an attack and like you are the enemy.
"Altman claims that..." removes all connotation and sticks to just the facts.
I've no doubt AI is here to stay. All I am asking for is some middle ground and safety. Is that too much to ask?
He also says Facebook solved all the problems with their recommendation algorithms' unintended effects on society after 2016.
[1] https://www.bereskinparr.com/doc/chatgpt-ip-strategy
[2] https://hbr.org/2023/04/generative-ai-has-an-intellectual-pr...
Didn't say they do. I said "these are broad questions whose answers are worth serious legal time." I was suggesting one angle I would lobby for were that my job.
It's a live battlefield. Nobody is going to pay tens of thousands of dollars and then post it online, or put out for free what they can charge for.
> OpenAI’s Terms of Use, for example, assign all of its rights, title, and interest in the output to the user
Subject to restrictions, e.g. not using it to "develop models that compete with OpenAI" or "discover the source code or underlying components of models, algorithms, and systems of the Services" [1]. Within the context of open-source competition, those are huge openings.
> shows where OpenAI is trying to weaken copyright, not where they are trying to strengthen it
It shows what intellectual property claims they and their competitors do and may assert. They're currently "limited" [2].
> notice you don't have a [0]-index
I'm using natural numbers in a natural language conversation with, presumably, a natural person. It's a style choice, nothing more.
Dismiss it as the opinions of "a Googler," but it is entirely true. The seemingly coordinated worldwide [1] push to keep it in the hands of the power class speaks for itself.
Both are seemingly seeking to control not only the commercial use and wide distribution of such systems, but even the writing of them and personal use. This will keep even the knowledge of such systems and their capabilities in the shadows, ripe for abuse laundered through black-box functions.
This is up there with the battle for encryption in ensuring a more human future. Don't lose it.
[1] https://technomancers.ai/eu-ai-act-to-target-us-open-source-...
See also https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....
Here's their planned proposal for government regulation; they discuss not just limiting access to models but also to datasets, and possibly even chips.
This seems particularly relevant, on the discussion of industry standards, regulation, and limiting access:
"Despite these limitations, strong industry norms—including norms enforced by industry standards or government regulation—could still make widespread adoption of strong access restrictions possible. As long as there is a significant gap between the most capable open-source model and the most capable API-controlled model, the imposition of monitoring controls can deny hostile actors some financial benefit.166 Cohere, OpenAI, and AI21 have already collaborated to begin articulating norms around access to large language models, but it remains too early to tell how widely adopted, durable, and forceful these guidelines will prove to be.
Finally, there may be alternatives to APIs as a method for AI developers to provide restricted access. For example, some work has proposed imposing controls on who can use models by only allowing them to work on specialized hardware—a method that may help with both access control and attribution.168 Another strand of work is around the design of licenses for model use.169 Further exploration of how to provide restricted access is likely valuable."
That's not the definition of AI or intelligence.
You're letting your understanding of how LLMs work bias you. They may be, at their core, token autocompleters, but they have emergent intelligence.
https://www.eff.org/deeplinks/2015/04/remembering-case-estab...
https://www.top500.org/lists/top500/2004/11/ and https://www.top500.org/system/173736/
And while 1100 Macs wouldn't exactly be affordable, the idea of trying to limit commercial data centers gets amusing.
That system was "only" 12,250 GFlop/s. I could match that now with a small rack of Mac M1 minis for less than $10k, using fewer computers than are in the local grade school computer room.
(And I'm being a bit facetious here.) Local authorities looking at power usage and heat dissipation for marijuana grow operations might find underground AI training centers.
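A rough sanity check on that comparison (a sketch, assuming ~2.6 TFLOPS of FP32 for the M1's GPU — a commonly cited figure, not an official benchmark — and ignoring that peak FLOPS is not sustained LINPACK Rmax):

```python
# Back-of-envelope: how many M1 Mac minis match System X's Rmax?
# The M1 GPU figure below is an assumption, not a measured LINPACK score.
system_x_gflops = 12_250   # Top500 Rmax, Nov 2004
m1_gpu_gflops = 2_600      # assumed M1 GPU FP32 peak
mini_price_usd = 699       # launch price of the base M1 Mac mini

minis_needed = -(-system_x_gflops // m1_gpu_gflops)  # ceiling division
print(f"{minis_needed} minis, ~${minis_needed * mini_price_usd}")
# -> 5 minis, ~$3495: well under $10k, and fewer boxes than a classroom.
```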
https://arxiv.org/pdf/2304.15004.pdf
> our alternative suggests that existing claims of emergent abilities are creations of the researcher's analyses, not fundamental changes in model behavior on specific tasks with scale.
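A toy simulation of that claim (the smooth scaling curve and the 50-token exact-match task below are invented for illustration; they are not the paper's data):

```python
# If per-token accuracy improves smoothly with scale, an all-or-nothing
# metric like 50-token exact match still looks like a sudden "emergent"
# jump. Both curves below are synthetic.
import numpy as np

scale = np.logspace(0, 4, 9)             # hypothetical model sizes
per_token_acc = scale / (scale + 100.0)  # smooth, gradual improvement
exact_match = per_token_acc ** 50        # all 50 tokens must be correct

for s, p, e in zip(scale, per_token_acc, exact_match):
    print(f"scale={s:>8.0f}  per-token={p:.3f}  exact-match={e:.6f}")
```

The per-token curve climbs gradually, while exact match sits near zero and then shoots up at the largest scales, which reads as "emergence" even though nothing discontinuous happened in the underlying model.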
True to form, these articles also work hard at planting the idea that Sam Altman created OpenAI, when in fact he joined rather recently, in a business role. Are these articles being planted somehow? I find it very likely. Don't forget that this approach is also straight out of the YC playbook, disclosed in great detail by Paul Graham in previous writings [3].
Finally, in keeping with the conspiratorial tone of this comment, for another example of Sam Altman rubbing shoulders with The Establishment, his participation in things like the Bilderberg group [4] is a matter of public record. Which I join many others in finding creepy, even more so as he maneuvers to exert influence on policy around the seismic shift that is AI.
To be clear, I have nothing specific against sama. But I dislike underhanded influence campaigns, which this all reeks of. Oh yeah, I will consider downvotes to this comment as proof of the shadow (AI?) government's campaign to promote Sam Altman. Do your worst!
[1] https://www.cnbc.com/2023/05/16/openai-ceo-woos-lawmakers-ah...
[2] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma... ("Graham said, “I asked Sam in our kitchen, ‘Do you want to take over YC?,’ and he smiled, like, it worked. I had never seen an uncontrolled smile from Sam. It was like when you throw a ball of paper into the wastebasket across the room—that smile.”")
[3] http://www.paulgraham.com/submarine.html
[4] https://en.wikipedia.org/wiki/2016_Bilderberg_Conference
Edit: List of posts for anyone interested http://paste.debian.net/plain/1280426
https://www.c-span.org/video/?528117-1/openai-ceo-testifies-...
- Mazie Hirono, Junior Senator from Hawaii, has very thoughtful questions. Very impressive.
- Gary Marcus also up there speaking with Sam Altman of OpenAI.
- So far, Sen. Hirono and Sen. Padilla seem very wary of regulating AI at this time.
- Very concerned about not "replicating social media's failures": why is it so biased and inequitable? Much more reasonable concerns.
- Also responding to questions is Christina Montgomery, chair of IBM's AI Ethics Board.
- "Work to generate a representative set of values from around the world."
- Sen. Ossoff asking for definition of "scope".
- "We could draw a line at systems that need to be licensed. Above this amount of compute... Define some capability threshold... Models that are less capable, we don't want to stop open source."
- Ossoff wants specifics.
- "Persuade, manipulate, influence person's beliefs." should be licensed.
- Ossoff asks about predicting human behavior, i.e. use in law enforcement, "It's very important we understand these are tools, not to take away human judgment."
- "We have no national privacy law." — Sen Ossof "Do you think we need one?"
- Sam "Yes. User should be able to opt out of companies using data. Easy to delete data. If you don't want your data use to train, you have right to exclude it."
- "There should be more ways to have your data taken down off the public web." —Sam
- "Limits on what a deployed model is capable of and also limits on what it will answer." — Sam
- "Companies who depend upon usage time, maximize engagement with perverse results. I would humbly advise you to get way ahead of this, the safety of children. We will look very harshly on technology that harms children."
- "We're not an advertising based model." —Sam
- "Requirements about how the values of these systems are set and how they respond to questions." —Sam
- Sen. Booker up now.
- "For congress to do nothing, which no one is calling for here, would be exceptional."
- "What kind of regulation?"
- "We don't want to slow things down."
- "A nimble agency. You can imagine a need for that, right?"
- "Yes." —Christina Montgomery
- "No way to put this genie back in the bottle." Sen. Booker
- "There are more genies yet to come from more bottles." — Gary Marcus
- "We need new tools, new science, transparency." —Gary Marcus
- "We did know that we wanted to build this with humanity's best interest at heart. We could really deeply transform the world." —Sam
- "Are you ever going to do ads?" —Sen Booker
- "I wouldn't say never...." —Sam
- "Massive corporate concentration is really terrifying.... I see OpenAI backed by Microsoft, Anthropic is backed by Google. I'm really worried about that. Are you worried?" —Sen Booker?
- "There is a real risk of technocracy combined with oligarchy." —Gary Marcus
- "Creating alignment dataset has got to come very broadly from society." —Sam Senator Welch from Vermont up now
- "I've come to the conclusion it's impossible for congress to keep up with the speed of technology."
- "The spread of disinformation is the biggest threat."
- "We absolutely have to have an agency. Scope has to be defined by congress. Unless we have an agency, we really don't have much of a defense against the bad stuff, and the bad stuff will come."
- "Use of regulatory authority and the recognition that it can be used for good, but there's also legitimate concern of regulation being a negative influence."
- "What are some of the perils of an agency?"
- "America has got to continue to lead."
- "I believe it's possible to do both, have a global view. We want America to lead."
- "We still need open source to comply, you can still do harm with a smaller model."
- "Regulatory capture. Greenwashing." —Gary Marcus
- "Risk of not holding companies accountable for the harms they are causing today." —Christina Montgomery
- Lindsey Graham, very pro-licensing: "You don't build a nuclear power plant without a license, you don't build an AI without a license."
- Sen. Blumenthal brings up antitrust legislation.
- Blumenthal mentions how classified briefings already include AI threats.
- "For every successful regulation, you can think of five failures. I hope our experience here will be different."
- "We need to grapple with the hard questions here. This has brought them up, but not answered them."
- "Section 230"
- "How soon do you think gen AI will be self-aware?" —Sen Blumenthal
- "We don't understand what self-awareness is." —Gary Marcus
- "Could be 2 years, could be 20."
- "What are the highest risk areas? Ban? Strict rules?"
- "The space around misinformation. Knowing what content was generated by AI." —Christina Montgomery
- "Medical misinformation, hallucination. Psychiatric advice. Ersatz therapists. Internet access for tools, okay for search. Can they make orders? Can they order chemicals? Long-term risks." —Gary Marcus
- "Generative AI can manipulate the manipulators." —Blumenthal
- "Transparency. Accountability. Limits on use. Good starting point?" —Blumenthal
- "Industry should't wait for congress." —C. Montgomery
- "We don't have transparency yet. We're not doing enough to enforce it." —G. Marcus
- "AGI closer than a lot of people appreciate." —Blumenthall
- Gary and Sam are getting along and like each other now.
- Josh Hawley
- Talking about loss of jobs, invasion of personal privacy, manipulation of behavior, opinion, and degradation of free elections in America.
- "Are they right to ask for a pause?"
- "It did not call for a ban on all AI research or all AI, only on very specific thing, like GPT-5." -G Marcus
- "Moratorium we should focus on is deployment. Focus on safety." —G. Marcus
- "Without external review."
- "We waited more than 6 months to deploy GPT-4. I think the frame of the letter is wrong." —Sam
- Seems to not like the arbitrariness of "six months."
- "I'm not sure how practical it is to pause." —C. Montgomery
- Hawley brings up regulatory capture, usually get controlled by people they're supposed to be watching. "Why don't we just let people sue you?"
- If you were harmed by AI, why not just sue?
- "You're not protected by section 230."
- "Are clearer laws a good thing? Definitely, yes." —Sam
- "Would certainly make a lot of lawyers wealthy." —G. Marcus
- "You think it'd be slower than congress?" —Hawley
- "Copyright, wholesale misinformation laws, market manipulation? Which laws apply? System not thought through? Maybe 230 does apply? We don't know."
- "We can fix that." —Hawley
- "AI is not a shield." —C. Montgomery
- "Whether they use a tool or a human, they're responsible." —C. Montgomery
- "Safeguards and protections, yes. A flat stop sign? I would be very, very worried about." —Blumenthall
- "There will be no pause." Sen. Booker "Nobody's pausing."
- "I would agree." Gary Marcus
- "I have a lot of concerns about corporate intention." Sen Booker
- "What happens when these companies that already control so much of our lives when they are dominating this technology?" Booker
- Sydney really freaked out Gary. He was more freaked out when MS didn't withdraw Sydney like it did Tay.
- "I need to work on policy. This is frightening." G Marcus
- Cory admits he is a tech bro (lists relationships with investors, etc)
- "The free market is not what it should be." —C. Booker
- "That's why we started OpenAI." —Sam "We think putting this in the hands of a lot of people rather than the hands of one company." —Sam
- "This is a new platform. In terms of using the models, people building are doing incredible things. I can't believe you get this much technology for so little money." —Sam
- "Most industries resist reasonable regulation. The only way we're going to see democratization of values is if we enforce safety measures." —Cory Booker
- "I sense a willingness to participate that is genuine and authentic." —Blumenthal
https://en.wikipedia.org/wiki/OpenAI
Just FYI, what you're saying isn't accurate. It was, but it's not anymore.
Čapek, intriguingly, happens to be the person who first used the word robot, which was coined by his brother.
I'll give you one: National Public Lands Day: Let’s Help Elk … and Their Ilk - https://pressroom.toyota.com/npld-2016-elk/ (it's a play on words)
What people also fail to understand is that AI is largely seen by the military-industrial complex as a weapon to control culture and influence. The most obvious risk of AI, the risk of manipulating human behavior toward favored ends, has been shown to be quite effective right out of the gate. So the back-channel conversation has to be about putting it under regulation because of its weaponization potential, especially considering the difficulty of identifying anyone (which of course is exactly what Elon is doing with X 2.0: a KYC ID platform to deal with this exact issue, with a 220M-user, 40B head start).
I mean, the dead internet theory is turning out to be true, and half the traffic on the Web is already bot-driven. Imagine when it's 99%, which the proliferation of this technology will inevitably produce, simply because of the economics.
Starting with open source is the only way to get enough people looking at the products to create any meaningful oversight, but I fear the weaponization fears will mean that everything gets locked away in licensed clouds with politically influential regulatory boards, justified purely on proliferation arguments. Think of all the AI technologists who won't be versed in this technology unless they work at a "licensed company"; this is going to make the smaller population of the West much less influential in the AI arms race, which is already underway.
To me, it's clear that nobody in Silicon Valley or on the Hill has learned a damn thing from the prosecution of hackers and the subsequent bloodbath in cybersecurity that resulted from exactly these kinds of behavior back in the early to mid-2000s. We ended up driving our best and brightest into the grey and black areas of infosec and security, instead of out in the open running companies where they belong. This move would do almost exactly the same thing to AI, though I think you have to be a bit of an Asimov or Bradbury fan to see it right now.
I don't know, that's just how I see it, but I'm still forming my opinions. LOVE LOVE LOVE your comment though. Spot on.
Relevant articles:
https://www.independent.co.uk/tech/internet-bots-web-traffic...
https://theconversation.com/ai-can-now-learn-to-manipulate-h....
Meaning anyone could eventually reproduce GPT-4 and beyond. And eventually it will run outside of a large data center.
So... how will you tell whether it's an AI or a human doing you wrong?
Seems to me if the AI breaks the law, find out who's driving it and prosecute them.
For example: https://commons.wikimedia.org/wiki/File:Cat_August_2010-4.jp... would get classified as /r/standardissuecat
https://stock.adobe.com/fi/images/angry-black-cat/158440149 would get classified as /r/blackcats and /r/stealthbombers
Anyways... that's my hobbyist ML project.
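For anyone curious how such a classifier might look, here is a minimal sketch; the directory layout, class names, and the choice of a pretrained ResNet-18 are my assumptions, not details of the parent's actual project:

```python
# Sketch: fine-tune a pretrained ResNet-18 to map cat photos to
# subreddit-style labels. Hypothetical layout:
#   data/blackcats/*.jpg, data/standardissuecat/*.jpg, ...
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

# Only the new head's parameters are updated; the backbone keeps its
# pretrained weights since the optimizer never sees them.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```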
Democracy will not withstand AI when it's fully developed. Let me offer a better-written explanation of my general views than I could ever muster for an HN comment, in the form of a quote from an article by Dr. Thorsten Thiel (head of the research group "Democracy and Digitalization" at the Weizenbaum Institute for the Networked Society):
> The debate on AI’s impact on the public sphere is currently the one most prominent and familiar to a general audience. It is also directly connected to long-running debates on the structural transformation of the digital public sphere. The digital transformation has already paved the way for the rise of social networks that, among other things, have intensified the personalization of news consumption and broken down barriers between private and public conversations. Such developments are often thought to be responsible for echo-chamber or filter-bubble effects, which in turn are portrayed as root causes of the intensified political polarization in democracies all over the world. Although empirical research on filter bubbles, echo chambers, and societal polarization has convincingly shown that the effects are grossly overestimated and that many non-technology-related reasons better explain the democratic retreat, the spread of AI applications is often expected to revive the direct link between technological developments and democracy-endangering societal fragmentation.
> The assumption here is that AI will massively enhance the possibilities for analyzing and steering public discourses and/or intensify the automated compartmentalizing of will formation. The argument goes that the strengths of today's AI applications lie in the ability to observe and analyze enormous amounts of communication and information in real time, to detect patterns and to allow for instant and often invisible reactions. In a world of communicative abundance, automated content moderation is a necessity, and commercial as well as political pressures further effectuate that digital tools are created to oversee and intervene in communication streams. Control possibilities are distributed between users, moderators, platforms, commercial actors and states, but all these developments push toward automation (although they are highly asymmetrically distributed). Therefore, AI is baked into the backend of all communications and becomes a subtle yet enormously powerful structuring force.
> The risk emerging from this development is twofold. On the one hand, there can be malicious actors who use these new possibilities to manipulate citizens on a massive scale. The Cambridge Analytica scandal comes to mind as an attempt to read and steer political discourses (see next section on electoral interference). The other risk lies in a changing relationship between public and private corporations. Private powers are becoming increasingly involved in political questions and their capacity to exert opaque influences over political processes has been growing for structural and technological reasons. Furthermore, the reshaping of the public sphere via private business models has been catapulted forward by the changing economic rationality of digital societies such as the development of the attention economy. Private entities grow stronger and become less accountable to public authorities; a development that is accelerated by the endorsement of AI applications which create dependencies and allow for opacity at the same time. The ‘politicization’ of surveillance capitalism lies in its tendency, as Shoshana Zuboff has argued, to not only be ever more invasive and encompassing but also to use the data gathered to predict, modify, and control the behavior of individuals. AI technologies are an integral part in this ‘politicization’ of surveillance capitalism, since they allow for the fulfilment of these aspirations. Yet at the same time, AI also insulates the companies developing and deploying it from public scrutiny through network effects on the one hand and opacity on the other. AI relies on massive amounts of data and has high upfront costs (for example, the talent required to develop it, and the energy consumed by the giant platforms on which it operates), but once established, it is very hard to tame through competitive markets. Although applications can be developed by many sides and for many purposes, the underlying AI infrastructure is rather centralized and hard to reproduce. As in other platform markets, the dominant players are those able to keep a tight grip on the most important resources (models and data) and to benefit from every individual or corporate user. Therefore, we can already see that AI development tightens the grip of today’s internet giants even further. Public powers are expected to make increasing use of AI applications and therefore become ever more dependent on the actors that are able to provide the best infrastructure, although this infrastructure, for commercial and technical reasons, is largely opaque.
> The developments sketched out above – the heightened manipulability of public discourse and the fortification of private powers – feed into each other, with the likely result that many of the deficiencies already visible in today’s digital public spheres will only grow. It is very hard to estimate whether these developments can be counteracted by state action, although a regulatory discourse has kicked in and the assumption that digital matters elude the grasp of state regulation has often been proven wrong in the history of networked communication. Another possibility would be a creative appropriation of AI applications through users whose democratic potential outweighs its democratic risks thus enabling the rise of differently structured, more empowering and inclusive public spaces. This is the hope of many of the more utopian variants of AI and of the public sphere literature, according to which AI-based technologies bear the potential of granting individuals the power to navigate complex, information-rich environments and allowing for coordinated action and effective oversight (e.g. Burgess, Zarkadakis).
Source: https://us.boell.org/en/2022/01/06/artificial-intelligence-a...
Social bots and deep fakes, the primary technologies being talked about when asking how democracy can survive, will be so good so quickly that I doubt there will be another election without extensive use of them, in a true plethora of capacities from influence marketing to outright destabilization campaigns. I'm not sure what government can deal with a threat like that, but I suspect the recent push to revise tax systems and create a single global standard for multinational taxation, recently the subject of an excellent talk at the WEF, is more than tangentially related to the AI debate.
So, is it a transformational technology that will liberate mankind, or a nuclear bomb? Because ultimately, this is the question in my mind.
Excellent comment, and I agree with your sentiment. I just don't think concentrating control of the technology before it's really developed is wise or prudent.
> “I listened to him thinking he was going to be crazy. I don't think he's crazy at all,” Hinton says. “But, okay, it’s not helpful to talk about bombing data centers.”
https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dange...
So, he doesn't think the most extreme guy is crazy whatsoever, just misguided in his proposed solutions. But Eliezer has, for instance, said something pretty close to: AI might escape by entering the quantum Konami code which the simulators of our universe put in as a joke, and we should entertain nuclear war before letting them get that chance.
Heck, sometimes even Google doesn't pay people for introducing new languages to their translation thingy.
https://restofworld.org/2023/google-translate-sorani-kurdish...
* Automated systems should not be permitted to make adverse decisions against individuals. This is already law in the EU, although it's not clear if it is enforced. This is the big one. Any company using AI to make decisions which affect external parties in any way must not be allowed to require any waiver of the right to sue, participate in class actions, or have the case heard by a jury. Those clauses companies like to put in EULAs would become invalid as soon as an AI is involved anywhere.
* All marketing content must be signed by a responsible party. AI systems increase the amount of new content generated for marketing purposes substantially. This is already required in the US, but weakly enforced. Both spam and "influencers" tend to violate this. The problem isn't AI, but AI makes it worse, because it's cheaper than troll farms, and writes better.
* Anonymous political speech may have to go. That's a First Amendment right in the US, but it's not unlimited. You should be able to say anything you're willing to sign.[1] This is, again, the troll farm problem, and, again, AIs make it worse.
That's probably enough to deal with the immediate problems.
[1] https://mtsu.edu/first-amendment/article/32/anonymous-speech
At the same time At&T and Verizon saying oh snap let's make money on this too and still being pissed about Stir Shaken so to get ahead of it for Texting before Congress forces it on them. This way they can make money on it before it's forced.
[0] https://fortune.com/2019/02/04/t-mobiles-john-legere-promise...
(I would just emphasize, before anyone complains, that the Federal Republic of Germany is very much a federal republic.)
https://twitter.com/next_on_now/status/1653837352198873090?s...
Even pretty draconian legislation can stand if it doesn't affect the majority of the country and is supported by powerful industry groups. I could definitely see some kind of compiler licensing requirement meeting these criteria.
[1] https://en.wikipedia.org/wiki/DeCSS#Legal_response
[2] https://en.wikipedia.org/wiki/AACS_encryption_key_controvers...
> In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[13] the formation of OpenAI and pledged over $1 billion to the venture.
https://www.cnbc.com/2020/02/15/facebook-ceo-zuckerberg-call...
Relevantish: https://arxiv.org/abs/2301.00774
The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.
Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.
If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
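To make the pruning mechanics concrete, here's a minimal magnitude-pruning sketch; the layer size and the 90% sparsity target are illustrative choices, not figures from the cited papers:

```python
# Magnitude pruning: zero out the smallest-magnitude weights of a layer,
# keeping only the largest fraction. Sizes and sparsity are arbitrary.
import torch
import torch.nn as nn

layer = nn.Linear(1024, 1024)
sparsity = 0.9  # fraction of weights to remove

with torch.no_grad():
    flat = layer.weight.abs().flatten()
    k = int(sparsity * flat.numel())
    threshold = flat.kthvalue(k).values    # magnitude cutoff
    mask = layer.weight.abs() > threshold  # True where weights survive
    layer.weight.mul_(mask)                # zero out the rest in place

print(f"remaining weights: {mask.float().mean():.1%}")
```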
See 2:09:40 in https://www.youtube.com/live/iqVxOZuqiSg
https://www.smh.com.au/national/nsw/maximise-profits-facial-...
AI is already being used by law enforcement and public institutions. So much so, in fact, that perhaps this is a good link:
https://www.monster.com/jobs/search?q=artificial+intelligenc...
In both cases it's too late to do anything about it. AI is "loose". Oh, and I don't know if you noticed, but governments have collectively decided the law doesn't apply to them, only to their citizens, and only in a negative way. For instance, just about every country has laws on the books guaranteeing timely emergency care at hospitals, with "timely" defined as within 1 or 2 hours.
Waiting times are 8-10 hours (going up to days), and this is the normal situation now; it's not a New Year's Eve or even Friday-evening thing anymore. You have the "right" to less waiting time, which can only mean the government (the worst hospitals are public ones) should be forced to fix this, spending whatever it takes. And it can be fixed; at this point you'd have to give physicians and nurses a 50% raise, double the number employed, and 10x the number in training.
Government is just outright not doing this, and if one thing's guaranteed, this will keep getting worse, a direct violation of your rights in most states, for the next 10 years minimum, but probably longer.
But taking him as a genuine "concerned citizen": I don't think AI licensing is going to be effective. The government is pretty useless at punishing big corporations, to the point where I would say corporations have almost immunity from criminal prosecution. [1] So for the kinds of companies that will do bad things with AI, the need for a license won't stop them, especially as it is hard for anyone to see what they are running on their GPUs.
[1] https://ag.ny.gov/press-release/2023/attorney-general-james-...
I believe this is what OpenAI is doing, and it makes me sad as a teacher.
AI is the greatest tool for equity and social justice in history. Any poor person with Internet access can learn (almost) anything from ChatGPT (http://chat.openai.com)
A bright student trapped in a garbage school where the kid to his right is stoned and the kid to his left is looking up porn on a phone can learn from personalized AI tutors.
While some complain that AI will take our jobs, they are ignoring the effect of competition. Humans will become smarter with AI tutors. Humans will become more capable with AI assistants. With AI an individual can compete with a large corporation. It reminds me of the early days of the World Wide Web and the "Online, nobody knows you are a dog" memes.
I hope the best chance many bright, poor kids have is not taken away to protect the power bases of the rich and powerful. They deserve a chance.
https://societyofrock.com/in-1985-frank-zappa-is-asked-to-te...
Less fulfilling text version:
https://www.edn.com/the-truth-about-smics-7-nm-chip-fabricat...
[1]: https://arxiv.org/abs/2305.06972
[2]: https://arxiv.org/abs/2210.01790
[3]: https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3Hs...
Here's mine: https://movies.stackexchange.com/questions/10572/is-this-quo...
Another paper, not from Microsoft, showing emergent task capabilities across a variety of LLMs as scale increases.
https://arxiv.org/pdf/2206.07682.pdf
You can hem and haw all you want but the reality is these models have internal representations of the world that can be probed via prompts. They are not stochastic parrots no matter how much you shout in the wind that they are.
Unless you think OpenAI is blatantly lying about this:
"A.1 Sourcing. We sourced either the most recent publicly-available official past exams, or practice exams in published third-party 2022-2023 study material which we purchased. We cross-checked these materials against the model’s training data to determine the extent to which the training data was not contaminated with any exam questions, which we also report in this paper."
"As can be seen in tables 9 and 10, contamination overall has very little effect on the reported results."
They also report results on uncontaminated data which shows basically no statistical difference.
For example, take this thread: https://news.ycombinator.com/item?id=21717022
It's a text RPG game built on top of GPT-2 that could follow arbitrary instructions. It was a full project with custom training for something that you can get with a single prompt on ChatGPT nowadays, but it clearly showcased what LLMs were capable of and things we take for granted now. It was clear, back then, that at some point ChatGPT would happen.
Yuval Noah Harari gave a great talk the other day on the potential threat to democracy from the current state of the technology - https://youtu.be/LWiM-LuRe6w
I highly recommend Rob Miles' channel on YouTube. Here's a good one, but they're all fascinating. It turns out training an AI to have the goals we actually want it to have is fiendishly difficult.
The difference is that AGI isn't a static tool. If some constraint is a limiting factor on economic activity, inventing a tool to eliminate the constraint uncorks new kinds of economic potential, and the real economy expands to exploit new opportunities. But such tools have historically been narrowly focused, so the new space of economic opportunity is left for human labor to engage with. AGI breaks this trend. Any knowledge work can in principle be captured by AGI. There is nothing "beyond" the function of AGI for human labor en masse to engage productively with.
To be clear, my point in the parent was from extrapolating current trends to a near-term (10-20 years) proto AGI. LLMs as they currently stand certainly won't put 90% of people out of work. But it is severely short-sighted to refuse to consider the trends and where the increasing sophistication of generalist AIs (not necessarily AGI) are taking society.
> Could you provide any sources on this topic? This is new information for me.
Graph: https://files.epi.org/charts/img/91494-9265.png
Source: https://www.epi.org/publication/understanding-the-historic-d...
https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/
https://intelligence.org/files/AlignmentHardStart.pdf
https://www.youtube.com/watch?v=pYXy-A4siMw
https://europepmc.org/article/med/26185241
https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...
I could keep going, but the reading on this spans tens of thousands of pages of detailed reasoning and analysis, including mathematical proofs and lab-scale experiments.