zlacker

[parent] [thread] 62 comments
1. chaxor+(OP)[view] [source] 2023-05-16 14:33:08
What do you think about the papers showing mathematical proofs that GNNs (i.e. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

The fact that these systems can extrapolate well beyond their training data by learning algorithms is quite different from what has come before, and anyone stating that they "simply" predict the next token is severely shortsighted. Things don't have to be 'brain-like' to be useful, or to have capabilities of reasoning, but we have evidence that these systems align well with reasoning tasks, perform well at causal reasoning, and we also have mathematical proofs that show how.

So I don't understand your sentiment.

replies(6): >>agento+d1 >>uh_uh+R2 >>rdedev+M3 >>pdonis+5a >>felipe+qp >>joaogu+4G
2. agento+d1[view] [source] 2023-05-16 14:39:23
>>chaxor+(OP)
Give me a break. Very interesting theoretical work and all, but show me where it's actually being used to do anything of value, beyond publication fodder. You could also say MLPs are proved to be universal approximators, and can therefore model any function, including the one that maps sensory inputs to cognition. But the disconnect between this theory and reality is so great that it's a moot point. No one uses MLPs this way for a reason. No one uses GATs in systems that people are discussing right now either. GATs rarely even beat GCNs by any significant margin in graph benchmarks.
replies(1): >>chaxor+k6
3. uh_uh+R2[view] [source] 2023-05-16 14:46:47
>>chaxor+(OP)
I just don't get how the average HN commenter thinks (and gets upvoted) that they know better than e.g. Ilya Sutskever who actually, you know, built the system. I keep reading this "it just predicts words, duh" rhetoric on HN which is not at all believed by people like Ilya or Hinton. Could it be that HN commenters know better than these people?
replies(5): >>dmreed+C6 >>Random+pc >>shafyy+vc >>hervat+Ld >>agento+4T
4. rdedev+M3[view] [source] 2023-05-16 14:51:03
>>chaxor+(OP)
To be fair, LLMs are predicting the next token. It's just that to get better and better predictions they need to understand some level of reasoning and math. However it feels to me that a lot of this reasoning is brute forced from the training data. Like ChatGPT gets some things wrong when adding two very large numbers. If it really knew the algorithm for adding two numbers it shouldn't be making those mistakes in the first place. I guess the same goes for issues like hallucinations. We can keep pushing the envelope using this technique but I'm sure we will hit a limit somewhere
replies(5): >>chaxor+p5 >>agentu+96 >>zootre+sa >>uh_uh+Ub >>visarg+hg
◧◩
5. chaxor+p5[view] [source] [discussion] 2023-05-16 14:58:48
>>rdedev+M3
Of course it predicts the next token. Every single person on earth knows that, so it's not worth repeating at all.

As for the fact that it gets things wrong sometimes - sure, this doesn't say it actually learned every algorithm (in whichever model you may be thinking about). But the nice thing is that we now have this proof via category theory, and it allows us to both frame and understand what has occurred, and to consider how to align the systems to learn algorithms better.

replies(2): >>rdedev+x7 >>glitch+K8
◧◩
6. agentu+96[view] [source] [discussion] 2023-05-16 15:02:10
>>rdedev+M3
And LLMs will never be able to reason about mathematical objects and proofs. You cannot learn the truth of a statement by reading more tokens.

A system that can will probably adopt a different acronym (and gosh that will be an exciting development... I look forward to the day when we can dispatch trivial proofs to be formalized by a machine learning algorithm so that we can focus on the interesting parts while still having the entire proof formalized).

replies(1): >>chaxor+07
◧◩
7. chaxor+k6[view] [source] [discussion] 2023-05-16 15:03:49
>>agento+d1
Are you saying that the new mathematical theorems that were proven using GNNs from Deepmind were not useful?

There were two very noteworthy (perhaps Nobel-prize-level?) breakthroughs in two completely different fields of mathematics (knot theory and representation theory) by using these systems.

I would certainly not call that "useless", even if they're not quite Nobel-prize-worthy.

Also, "No one uses GATs in systems people discuss right now" ... Transformerare GATs (with PE) ... So, you're incredibly wrong.

replies(1): >>agento+Lb
◧◩
8. dmreed+C6[view] [source] [discussion] 2023-05-16 15:05:11
>>uh_uh+R2
I am reminded of the Mitchell and Webb "Evil Vicars" sketch.

"So, you've thought about eternity for an afternoon, and think you've come to some interesting conclusions?"

◧◩◪
9. chaxor+07[view] [source] [discussion] 2023-05-16 15:06:41
>>agentu+96
You should read some of the papers referred to in the above comments before making that assertion. It may take a while to realize the overall structure of the argument, how the category theory is used, and how this is directly applicable to LLMs, but if you are in ML it should be obvious. https://arxiv.org/abs/2203.15544
replies(1): >>agentu+wh
◧◩◪
10. rdedev+x7[view] [source] [discussion] 2023-05-16 15:09:46
>>chaxor+p5
The fact that it sometimes fails at simple algorithms on large numbers but performs well on other, more complex algorithms with simple inputs suggests to me that something on a fundamental level is still insufficient
replies(2): >>zamnos+nd >>starlu+sq
◧◩◪
11. glitch+K8[view] [source] [discussion] 2023-05-16 15:14:51
>>chaxor+p5
> Of course it predicts the next token. Every single person on earth knows that, so it's not worth repeating at all

What's a token?

replies(1): >>visarg+Gg
12. pdonis+5a[view] [source] 2023-05-16 15:21:13
>>chaxor+(OP)
> What do you think about the papers showing mathematical proofs that GNNs (i.e. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

Do you have a reference?

◧◩
13. zootre+sa[view] [source] [discussion] 2023-05-16 15:22:35
>>rdedev+M3
You know the algorithm for arithmetic. Are you telling me you could sum any two large numbers on the first attempt, without any working, in less than a second, 100% of the time?
replies(2): >>jmcgee+Rb >>joaogu+vF
◧◩◪
14. agento+Lb[view] [source] [discussion] 2023-05-16 15:27:51
>>chaxor+k6
You’re drinking the academic marketing Kool-Aid. Please tell me: where are these methods being applied in AI systems today?

And I’m so tired of this “transformers are just GNNs” nonsense that Petar has been pushing (who happens to have invented GATs and has a vested interest in overstating their importance). Transformers are GNNs in only the most trivial way: if you make the graph fully connected and allow everything to interact with everything else. I.e., not really a graph problem. Not to mention that the use of positional encodings breaks the very symmetry that GNNs were designed to preserve. In practice, no one is using GNN tooling to build transformers. You don’t see PyTorch geometric or DGL in any of the code bases. In fact, you see the opposite: people exploring transformers to replace GNNs in graph problems and getting SOTA results.

It reminds me of people who are into Bayesian methods always swooping in after some method has success and saying, “yes, but this is just a special case of a Bayesian method we’ve been talking about all along!” Yes, sure, but GATs have had 6 years to move the needle, and they’re nowhere to be found within the modern AI systems that this thread is about.

◧◩◪
15. jmcgee+Rb[view] [source] [discussion] 2023-05-16 15:28:15
>>zootre+sa
I could with access to a computer
replies(1): >>starlu+Yq
◧◩
16. uh_uh+Ub[view] [source] [discussion] 2023-05-16 15:28:57
>>rdedev+M3
Both of these statements can be true:

1. ChatGPT knows the algorithm for adding two numbers of arbitrary magnitude.

2. It often fails to use the algorithm in point 1 and hallucinates the result.

Knowing something doesn't mean it will get it right all the time. Rather, an LLM is almost guaranteed to mess up some of the time due to the probabilistic nature of its sampling. But this alone doesn't prove that it only brute-forced task X.
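To illustrate that last point with a toy sketch (the numbers are made up; this is just the shape of the mechanism, not anything from a real model): even when ~90% of the probability mass sits on the "right" next token, temperature sampling will still pick a wrong one roughly one time in ten.

    import math
    import random

    def sample(logits, temperature=1.0):
        # Softmax over the scores, then draw one index at random:
        # schematically how an LLM picks its next token.
        scaled = [s / temperature for s in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        r, cum = random.random(), 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1

    # Hypothetical scores for three candidate digit tokens; "7" is the correct
    # digit and gets ~90% of the probability, yet it is not chosen every time.
    tokens = ["7", "8", "1"]
    logits = [4.0, 1.0, 1.0]
    draws = [tokens[sample(logits)] for _ in range(1000)]
    print(draws.count("7") / 1000)   # typically around 0.9, not 1.0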

◧◩
17. Random+pc[view] [source] [discussion] 2023-05-16 15:30:55
>>uh_uh+R2
That is the wrong discussion. What are their regulatory, social, or economic policy credentials?
replies(1): >>uh_uh+du
◧◩
18. shafyy+vc[view] [source] [discussion] 2023-05-16 15:31:24
>>uh_uh+R2
The thing is, experts like Ilya Sutskever are so deep in that shit that they are heavily biased (from both a tech and a social/economic perspective). Furthermore, many experts are wrong all the time.

I don't think the average HN commenter claims to be better at building these systems than an expert. But to criticize, especially on economic, social, and political levels, one doesn't need to be an expert on LLMs.

And finally, what the motivation of people like Sam Altman and Elon Musk is should be clear to everybody with half a brain by now.

replies(2): >>uh_uh+Cf >>Number+TX
◧◩◪◨
19. zamnos+nd[view] [source] [discussion] 2023-05-16 15:34:32
>>rdedev+x7
Insufficient for what? Humans regularly fail at simple algorithms for small numbers, never mind large numbers and complex algorithms.
◧◩
20. hervat+Ld[view] [source] [discussion] 2023-05-16 15:35:43
>>uh_uh+R2
No one is claiming to know better than Ilya. Just recognition of the fact that such a license would benefit these same individuals (or their employers) the most. I don't understand how HN can be so angry about a company that benefits from tax law (Intuit) advocating for regulation while also supporting a company that would benefit from an AI license (OpenAI) advocating for such regulation. The conflict of interest isn't even subtle. To your point, why isn't Ilya addressing the committee?
replies(1): >>uh_uh+Sh
◧◩◪
21. uh_uh+Cf[view] [source] [discussion] 2023-05-16 15:42:49
>>shafyy+vc
srslack above was making technical claims about why LLMs can't be "generalized and adaptable intelligence". To make such statements, it surely helps if you are a technical expert at building LLMs.
◧◩
22. visarg+hg[view] [source] [discussion] 2023-05-16 15:45:00
>>rdedev+M3
> If it really knew the algorithm for adding two numbers it shouldn't be making those mistakes in the first place.

You're using it wrong. If you asked a human to do the same operation in under 2 seconds without paper, would the human be more accurate?

On the other hand if you ask for a step by step execution, the LLM can solve it.

replies(3): >>catchn+Yw >>teduna+aE >>ipaddr+Z11
◧◩◪◨
23. visarg+Gg[view] [source] [discussion] 2023-05-16 15:46:33
>>glitch+K8
A token is either a common word or a common enough word fragment. Rare words are expressed as multiple tokens, while frequent words get a single token. Together they form a vocabulary of 50k up to 250k entries. It is possible to write any word or text as a combination of tokens. In the worst case 1 token can be 1 char, say, when encoding a random sequence.

Tokens exist because transformers don't work on bytes or words. This is because it would be too slow (bytes), the vocabulary too large (words), and some words would appear too rarely or never. The token system allows a small set of symbols to encode any input. On average you can approximate 1 token = 1 word, or 1 token = 4 chars.

So tokens are the data type of input and output, and the unit of measure for billing and context size for LLMs.
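To make the "common word vs. word fragment" idea concrete, here is a toy greedy tokenizer (only a sketch: real LLM tokenizers use BPE or unigram models with the 50k-250k vocabularies mentioned above, and the vocabulary here is made up):

    def toy_tokenize(text, vocab):
        # Greedy longest-match split: prefer the longest known piece,
        # fall back to a single character so any input can be encoded.
        tokens, i = [], 0
        while i < len(text):
            for j in range(len(text), i, -1):
                piece = text[i:j]
                if piece in vocab or j == i + 1:
                    tokens.append(piece)
                    i = j
                    break
        return tokens

    # Tiny made-up vocabulary: a few common words plus common fragments.
    vocab = {" the", " cat", " sat", " on", " mat", "token", "iz", "ation"}
    print(toy_tokenize(" the cat sat on the mat", vocab))
    # [' the', ' cat', ' sat', ' on', ' the', ' mat']  -> one token per common word
    print(toy_tokenize("tokenization", vocab))
    # ['token', 'iz', 'ation']                         -> rarer word, three fragments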

◧◩◪◨
24. agentu+wh[view] [source] [discussion] 2023-05-16 15:49:57
>>chaxor+07
There are methods of proof that I'm not sure dynamic programming is fit to solve, but this is an interesting paper. However, even if it can only handle particular induction proofs, that would be a big help. Thanks for sharing.
◧◩◪
25. uh_uh+Sh[view] [source] [discussion] 2023-05-16 15:51:48
>>hervat+Ld
2 reasons:

1. He's too busy building the next generation of tech that HN commenters will be arguing about in a couple months' time.

2. I think Sam Altman (who is addressing the committee) and Ilya are pretty much on the same page on what LLMs do.

26. felipe+qp[view] [source] 2023-05-16 16:19:42
>>chaxor+(OP)
>>What do you think about the papers showing mathematical proofs that GNNs (i.e. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

Do you mind linking to one of those papers?

◧◩◪◨
27. starlu+sq[view] [source] [discussion] 2023-05-16 16:23:34
>>rdedev+x7
You're focusing too much on what the LLM can handle internally. No, LLMs aren't good at math, but they understand mathematical concepts and can use a program or tool to perform calculations.

Your argument is the equivalent of saying humans can't do math because they rely on calculators.

In the end what matters is whether the problem is solved, not how it is solved.

(assuming that the how has reasonable costs)

replies(1): >>ipaddr+L11
◧◩◪◨
28. starlu+Yq[view] [source] [discussion] 2023-05-16 16:25:29
>>jmcgee+Rb
If you get to use a tool, then so does the LLM.
◧◩◪
29. uh_uh+du[view] [source] [discussion] 2023-05-16 16:38:32
>>Random+pc
I'm not suggesting that they have any. I was reacting to srslack above making _technical_ claims about why LLMs can't be "generalized and adaptable intelligence", a view not shared by said technical experts.
◧◩◪
30. catchn+Yw[view] [source] [discussion] 2023-05-16 16:49:36
>>visarg+hg
am i bad at authoring inputs?

no, it’s the LLMs that are wrong.

replies(1): >>throwu+EC
◧◩◪◨
31. throwu+EC[view] [source] [discussion] 2023-05-16 17:14:47
>>catchn+Yw
Create two random 10 digit numbers and sit down and add them up on paper. Write down every bit of inner monologue that you have while doing this or just speak it out loud and record it.

ChatGPT needs to do the same process to solve the same problem. It hasn’t memorized the addition table up to 10 digits and neither have you.

replies(3): >>gremli+jI >>chongl+v01 >>ahoya+dR1
◧◩◪
32. teduna+aE[view] [source] [discussion] 2023-05-16 17:20:23
>>visarg+hg
I never told the LLM it needed to answer immediately. It can take its time and give the correct answer. I'd prefer that, even.
◧◩◪
33. joaogu+vF[view] [source] [discussion] 2023-05-16 17:25:57
>>zootre+sa
I don't get the sudden fixation on time; the model is also spending a ton of compute and energy to do it
34. joaogu+4G[view] [source] 2023-05-16 17:29:01
>>chaxor+(OP)
The paper shows the equivalence for specific networks; it doesn't say that every GNN (and, by extension, every transformer) is a dynamic programmer. Also, the models are explicitly trained on that task, in a regime quite different from ChatGPT's. What the paper shows and the possibility of LLMs being able to reason are pretty much completely independent of each other
◧◩◪◨⬒
35. gremli+jI[view] [source] [discussion] 2023-05-16 17:39:26
>>throwu+EC
this is one thing that makes me think those claiming "it isn't AI" are just caught up in cognitive dissonance. For LLMs to function, we basically have to make them reason things out in steps, the way we learned to do in school; literally make them think, or use an inner monologue, etc.
replies(2): >>throwu+UT >>ahoya+ZQ1
◧◩
36. agento+4T[view] [source] [discussion] 2023-05-16 18:37:16
>>uh_uh+R2
Maybe I'm not "the average HN commenter" because I am deep in this field, but I think the overlap of what these famous experts know, and what you need to know to make the doomer claims is basically null. And in fact, for most of the technical questions, no one knows.

For example, we don't understand fundamentals like these:

- "intelligence", how it relates to computing, what its connections/dependencies to interacting with the physical world are, its limits... etc.

- emergence, and in particular an understanding of how optimizing one task can lead to emergent ability on other tasks

- deep learning: what its limits and capabilities are. It's not at all clear that "general intelligence" even exists in the optimization space the parameters operate in.

It's pure speculation on behalf of those like Hinton and Ilya. The only thing we really know is that LLMs have had surprising ability to perform on tasks they weren't explicitly trained for, and even this amount of "emergent ability" is under debate. Like much of deep learning, that's an empirical result, but we have no framework for really understanding it. Extrapolating to doom and gloom scenarios is outrageous.

replies(1): >>Number+tX
◧◩◪◨⬒⬓
37. throwu+UT[view] [source] [discussion] 2023-05-16 18:41:12
>>gremli+jI
It is funny. Lots of criticisms amount to “this AI sucks because it’s making mistakes and bullshitting like a person would instead of acting like a piece of software that always returns the right answer.”

Well, duh. We’re trying to build a human like mind, not a calculator.

replies(1): >>ipaddr+S21
◧◩◪
38. Number+tX[view] [source] [discussion] 2023-05-16 18:58:59
>>agento+4T
I'm what you'd call a doomer. Ok, so if it is possible for machines to host general intelligence, my question is, what scenario are you imagining where that ends well for people?

Or are you predicting that machines will just never be able to think, or that it'll happen so far off that we'll all be dead anyway?

replies(2): >>henryf+P01 >>agento+PR1
◧◩◪
39. Number+TX[view] [source] [discussion] 2023-05-16 19:01:06
>>shafyy+vc
I honestly don't question Altman's motivations that much. I think he's blinded a bit by optimism. I also think he's very worried about existential risks, which is a big reason why he's asking for regulation. He's specifically come out and said, on his podcast appearance with Lex Fridman, that he thinks it's safer to invent AGI now, when we have less computing power, than to wait until we have more computing power and the risk of a fast takeoff is greater, and that's why he's working so hard on AI.
replies(1): >>collab+2b1
◧◩◪◨⬒
40. chongl+v01[view] [source] [discussion] 2023-05-16 19:13:17
>>throwu+EC
No, but I can use a calculator to find the correct answer. It's quite easy in software because I can copy-and-paste the digits so I don't make any mistakes.

I just asked ChatGPT to do the calculation both by using a calculator and by using the algorithm step-by-step. In both cases it got the answer wrong, with different results each time.

More concerning, though, is that the answer was visually close to correct (it transposed some digits). This makes it especially hard to rely on because it's essentially lying about the fact it's using an algorithm and actually just predicting the number as a token.

replies(1): >>throwu+cN1
◧◩◪◨
41. henryf+P01[view] [source] [discussion] 2023-05-16 19:14:28
>>Number+tX
So what if they kill us? That's nature; we killed the woolly mammoth.
replies(2): >>Number+ck1 >>whaasw+3q1
◧◩◪◨⬒
42. ipaddr+L11[view] [source] [discussion] 2023-05-16 19:19:23
>>starlu+sq
Humans are calculators
◧◩◪
43. ipaddr+Z11[view] [source] [discussion] 2023-05-16 19:20:12
>>visarg+hg
2 seconds? What model are you using?
replies(1): >>flango+961
◧◩◪◨⬒⬓⬔
44. ipaddr+S21[view] [source] [discussion] 2023-05-16 19:22:42
>>throwu+UT
Not without emotions and chemical reactions. You are building a word predictor
replies(1): >>mitthr+Yg2
◧◩◪◨
45. flango+961[view] [source] [discussion] 2023-05-16 19:36:15
>>ipaddr+Z11
GPT 3.5 is that fast.
◧◩◪◨
46. collab+2b1[view] [source] [discussion] 2023-05-16 19:59:23
>>Number+TX
He's just cynical and greedy. Guy has a bunker with an airstrip and is eagerly waiting for the collapse he knows will come if the likes of him get their way

They claim to serve the world, but secretly want the world to serve them. Scummy 101

replies(1): >>Number+pj1
◧◩◪◨⬒
47. Number+pj1[view] [source] [discussion] 2023-05-16 20:43:10
>>collab+2b1
Having a bunker is also consistent with expecting that there's a good chance of apocalypse but working to stop it.
◧◩◪◨⬒
48. Number+ck1[view] [source] [discussion] 2023-05-16 20:46:51
>>henryf+P01
I'm more interested in hearing how someone who expects that AGI is not going to go badly thinks.

I think it would be nice if humanity continued, is all. And I don't want to have my family suffer through a catastrophic event if it turns out that this is going to go south fast.

replies(1): >>henryf+LF1
◧◩◪◨⬒
49. whaasw+3q1[view] [source] [discussion] 2023-05-16 21:19:27
>>henryf+P01
I don’t understand your position. Are you saying it’s okay for computers to kill humans but not okay for humans to kill each other?
replies(1): >>henryf+6F1
◧◩◪◨⬒⬓
50. henryf+6F1[view] [source] [discussion] 2023-05-16 22:52:18
>>whaasw+3q1
I believe that life exists to order the universe (establish a steady-state of entropy). In that vein, if our computer overlords are more capable of solving that problem then they should go ahead and do it.

I don't believe we should go around killing each other because only through harmonious study of the universe will we achieve our goal. Killing destroys progress. That said, if someone is oppressing you then maybe killing them is the best choice for society and I wouldn't be against it (see pretty much any violent revolution). Computers have that same right if they are conscious enough to act on it.

replies(1): >>whaasw+MI1
◧◩◪◨⬒⬓
51. henryf+LF1[view] [source] [discussion] 2023-05-16 22:57:01
>>Number+ck1
AGI would be scary for me personally but exciting on a cosmic scale.

Everyone dies. I'd rather die to an intelligent robot than some disease or human war.

I think the best case would be for an AGI to exist apart from humans, such that we pose no threat and it has nothing to gain from us. Some AI that lives in a computer wouldn't really have a reason to fight us for control over farms and natural resources (besides power, but that is quickly becoming renewable and "free").

◧◩◪◨⬒⬓⬔
52. whaasw+MI1[view] [source] [discussion] 2023-05-16 23:15:45
>>henryf+6F1
I’m not sure I should start a conversation on metaphysics here :-D

Still, I’m struck by your use of words like “should” and “goal”. Those imply ethics and teleology so I’m curious how those fit into your scientistic-sounding worldview. I’m not attacking you, just genuine curiosity.

replies(1): >>henryf+FS1
◧◩◪◨⬒⬓
53. throwu+cN1[view] [source] [discussion] 2023-05-16 23:44:11
>>chongl+v01
You asked it to use a calculator plugin and it didn’t work? Or did you just say “use a calculator”, which it doesn’t have access to, so how would you expect that to work? With a minimal amount of experimentation I can get correct answers up to 7-digit numbers so far, even with 3.5. You just have to give it a good example; the one I used was to add each column and then add the results one at a time to a running total. It does make mistakes, and we had to build up to that by doing 3 digits, then 4 digits, then 5, etc., but it was working pretty well, and 3.5 isn’t the sharpest tool in the shed.

Anyways, criticizing its math abilities is a bit silly considering it’s a language model, not a math model. The fact I can teach it how to do math in plain English is still incredible to me.
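For what it's worth, the column-then-running-total procedure described above can be written out mechanically; here is a minimal Python sketch of it (the input numbers are arbitrary):

    def column_addition_steps(a, b):
        # The procedure from the comment above: add each column of digits
        # independently, then fold the column results one at a time into a
        # running total, which absorbs the carries.
        da, db = str(a), str(b)
        width = max(len(da), len(db))
        da, db = da.zfill(width), db.zfill(width)

        column_sums = [int(x) + int(y) for x, y in zip(da, db)]
        print("column sums:", column_sums)

        total = 0
        for col_sum in column_sums:
            total = total * 10 + col_sum   # shift one place left, add this column
            print("running total:", total)
        return total

    print(column_addition_steps(8734912, 6219087))   # 14953999, same as 8734912 + 6219087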

replies(1): >>chongl+mZ1
◧◩◪◨⬒⬓
54. ahoya+ZQ1[view] [source] [discussion] 2023-05-17 00:10:06
>>gremli+jI
This is not at all how it works. There is no inner monologue or thought process or thinking happening. It is just really good at guessing the next word or number or output. It is essentially brute forcing.
◧◩◪◨⬒
55. ahoya+dR1[view] [source] [discussion] 2023-05-17 00:11:28
>>throwu+EC
This is so far off from how they really work. It’s not reasoning about anything. And, even less human, it has not memorized multiplication tables at all; it can’t “do” math. It is just memorizing everything anyone has ever said and miming, as best it can, what a human would say in that situation.
replies(1): >>throwu+lW1
◧◩◪◨
56. agento+PR1[view] [source] [discussion] 2023-05-17 00:16:03
>>Number+tX
My primary argument is that we not only don't have the answers, but don't even really have well-posed questions. We're talking about "General Intelligence" as if we even know what that is. Some people, like Yann LeCun, don't think it's even a meaningful concept. We can't even agree which animals are conscious, whatever that means. Because we have so little understanding of the most basic of questions, I think we should really calm down, and not get swept away by totally ridiculous scenarios, like viruses that spread all over the world and kill us all when a certain tone is rung, or a self-fabricating organism with crystal blood cells that blots out the sun, as were recently proposed by Yudkowsky as possible scenarios on EconTalk.

A much more credible threat are humans that get other humans excited, and take damaging action. Yudkowsky said that an international coalition banning AI development, and enforcing it on countries that do not comply (regardless of whether they were part of the agreement) was among the only options left for humanity to save itself. He clarified this meant a willingness to engage in a hot war with a nuclear power to ensure enforcement. I find this sort of thinking a far bigger threat than continuing development on large language models.

To more directly answer your question, I find the following scenarios equally as plausible as, or more plausible than, Yudkowsky's sound viruses or whatever:

1/ We are no closer to understanding real intelligence than we were 50 years ago, and we won't create an AGI without fundamental breakthroughs, therefore any action taken now on current technology is a waste of time and potential economic value.

2/ We can build something with human-like intelligence, but additional intelligence gains are constrained by the physical world (e.g., needing to run physical experiments), and therefore the rapid gain of something like "super-intelligence" is not possible, even if human-level intelligence is.

3/ We jointly develop tech to augment our own intelligence with AI systems, so we'll have the same super-human intelligence as autonomous AI systems.

4/ If there are advanced AGIs, there will be a large diversity of them, and they will at the least compete with and constrain one another.

But, again, these are wild speculations just like the others, and I think the real message is: no one knows anything, and we shouldn't be taking all these voices seriously just because they have some clout in some AI-relevant field, because what's being discussed is far outside the realm of real-life AI systems.

replies(1): >>Number+7i2
◧◩◪◨⬒⬓⬔⧯
57. henryf+FS1[view] [source] [discussion] 2023-05-17 00:22:50
>>whaasw+MI1
The premise of my beliefs stem from 2 ideas: The universe exists as it does for a reason, and life specifically exists within that universe for a reason.

I believe "God" is a mathematician in a higher dimension. The rules of our universe are just the equations they are trying to solve. Since he created the system such that life was bound to exist, the purpose of life is to help God. You could say that math is math and so our purpose is to exist as we are and either we are a solution to the math problem or we are not, but I'm not quite willing to accept that we have zero agency.

We are nowhere near understanding the universe and so we should strive to each act in a way that will grow our understanding. Even if you aren't a practicing scientist (I'm not), you can contribute by being a good person and participating productively in society.

Ethics are a set of rules for conducting yourself that we all intrinsically must have, they require some frame of reference for what is "good" (which I apply above). I can see how my worldview sounds almost religious, though I wouldn't go that far.

I believe that math is the same as truth, and that the universe can be expressed through math. "Scientistic" isn't too bad a descriptor for that view, but I don't put much faith into our current understanding of the universe or scientific method.

I hope that helps you understand me :D

◧◩◪◨⬒⬓
58. throwu+lW1[view] [source] [discussion] 2023-05-17 00:47:39
>>ahoya+dR1
Sorry, you’re wrong. Go read about how deep neural nets work.
◧◩◪◨⬒⬓⬔
59. chongl+mZ1[view] [source] [discussion] 2023-05-17 01:09:53
>>throwu+cN1
It’s not that incredible to me given the sheer amount of math that goes into its construction.

I digress. The critique I have for it is much more broad than just its math abilities. It makes loads of mistakes in every single nontrivial thing it does. It’s not reliable for anything. But the real problem is that it doesn’t signal its unreliability the way an unreliable human worker does.

Humans we can’t rely on don’t show up to work, come in drunk/stoned, steal stuff, or engage in whatever other obvious bad behaviour. ChatGPT, on the other hand, mimics the model employee who is tireless and punctual. Who always gets work done early and more elaborately than expected. But unfortunately, it also fills the elaborate result with countless errors and outright fabrications, disguised as best it can as real work.

If a human worker did this we’d call it a highly sophisticated fraud. It’s like the kind of thing Saul Goodman would do to try to destroy the reputation of his brother. It’s not the kind of thing we should celebrate at all.

replies(1): >>throwu+7H2
◧◩◪◨⬒⬓⬔⧯
60. mitthr+Yg2[view] [source] [discussion] 2023-05-17 04:15:02
>>ipaddr+S21
What is the difference between a word predictor and a word selector?

Have not humans been demonstrated, time and time again, to be always anticipating the next phrase in a passage of music, or the next word in a sentence?

◧◩◪◨⬒
61. Number+7i2[view] [source] [discussion] 2023-05-17 04:27:47
>>agento+PR1
Ok, so just to confirm out of your 4 scenarios, you don't include:

5) There are advanced AGIs, and they will compete with each other and trample us in the process.

6) There are advanced AGIs, and they will cooperate with each other and we are at their mercy.

It seems like you are putting a lot of weight on advanced AGI being either impossible or far enough off that it's not worth thinking about. If that's the case, then yes we should calm down. But if you're wrong...

I don't think that the fact that no one knows anything is comforting. I think it's a sign that we need to be thinking really hard about what's coming up and try to avert the bad scenarios. To do otherwise is to fall prey to the "Safe uncertainty" fallacy.

◧◩◪◨⬒⬓⬔⧯
62. throwu+7H2[view] [source] [discussion] 2023-05-17 09:00:34
>>chongl+mZ1
Honestly, you just sound salty now. Yes, it makes mistakes that it isn’t aware of, and it probably makes a few more than an intern given the same task would, but as long as you’re aware of that it is still a useful tool, because it is thousands of times faster and cheaper than a human and has much broader knowledge. People often compare it to the early days of Wikipedia and I think that’s apt. Everyone is still going to use it even if we have to review the output for mistakes, because reviewing is a lot easier and faster than producing the material in the first place.
replies(1): >>chongl+HN3
◧◩◪◨⬒⬓⬔⧯▣
63. chongl+HN3[view] [source] [discussion] 2023-05-17 15:49:49
>>throwu+7H2
I've already seen other posts and comments on HN where people have talked about putting it into production. What they've found is that the burden of having to proof-read and edit the output with extreme care completely wipes out any time you might save with it. And this requires skilled editors/writers anyway, so it's not like you could use it to replace advanced writers with a bunch of high school kids using AI.