zlacker

Thousands of AI Authors on the Future of AI

submitted by treebr+(OP) on 2024-01-08 21:23:59 | 85 points 112 comments

◧◩
4. sdento+x8[view] [source] [discussion] 2024-01-08 22:06:52
>>idopms+O6
Yeah, it really comes down to the question of how we advance on just-bits vs constrained-environment robotics vs open-domain robotics...

Some interesting work here on using LLMs to improve on open-domain robotics: https://arstechnica.com/information-technology/2023/03/embod...

◧◩◪
22. jncrat+4g[view] [source] [discussion] 2024-01-08 22:40:17
>>tayo42+Cf
You might be interested in OpenWorm:

https://openworm.org/

This paper might be helpful for understanding the nervous system in particular:

https://royalsocietypublishing.org/doi/10.1098/rstb.2017.037...

◧◩◪◨⬒⬓
84. glial+Kr[view] [source] [discussion] 2024-01-08 23:36:27
>>throwu+Jl
The role of astrocytes in neural computation is an example. For a long time, the assumption was that astrocytes were just "maintenance" or structural cells (the name "glia" comes from "glue"). Thus, they were not included in computational models. More recently, there is growing recognition that they play an important role in neural computation, e.g. https://picower.mit.edu/discoveries/key-roles-astrocytes
◧◩
90. arp242+9v[view] [source] [discussion] 2024-01-08 23:55:39
>>mmaund+in
> I think history has shown us that we tend to underestimate the rate of technological progress and it's rate of acceleration.

It's also been overestimated tons of times. Look at some of the predictions from the past: it's been a complete crapshoot. Many things have changed significantly less than people predicted, or in significantly different ways, or significantly more.

Just because things are accelerating at a great pace right now doesn't really mean anything for the future. Look at the predictions people made during the "space age" of the 1950s and 60s. A well-known example is 2001 (the film and novel). Yes, it's "just" fiction, but it was also a serious attempt at predicting roughly what the future would look like, and Arthur C. Clarke wasn't some dumb yahoo either.

The year 2001 is more than 20 years in the past, and obviously we're nowhere near the world of 2001, for various reasons. Other examples include things like the Von Braun wheel, predictions from serious scientists that we'd have a moon colony by the 1990s, etc. etc. There were tons of predictions and almost none of them have come true.

They all assumed that the rate of progress would continue as it had, but it didn't, for technical, economic, and pragmatic reasons. What's the point of establishing an expensive moon colony when we've got a perfectly functional planet right here? Air is nice (in spite of what SpongeBob says). Plants are nice. Water is nice. Non-cramped space to live in is nice. A magnetosphere to protect us from radiation is nice. We kind of need these things to survive, and none of them are present on the moon.

Even when people are right, they're wrong. See "Arthur C. Clarke predicts the internet in 1964"[1]. He did accurately predict the internet; "a man could conduct his business just as well from Bali as London" pretty much predicts all the "digital nomads" in Bali today, right?

But he also predicts that the city will be obsolete and "ceases to make any sense". Clearly that part hasn't come true, and likely never will. You can't "remotely" get a haircut, or get a pint with friends, or do all sorts of other things. And where are all those remote workers in Bali? In the Denpasar/Kuta/Canggu area. That is: a city.

It's half right and half wrong.

The takeaway is that predicting the future is hard, and that anyone who claims to predict the future with great certainty is a bullshitter, an idiot, or both.

[1]: https://www.youtube.com/watch?v=wC3E2qTCIY8

◧◩◪◨⬒
95. gary_0+ly[view] [source] [discussion] 2024-01-09 00:13:35
>>dmd+4m
I'm referring to the various times biological neurons have been (and will likely continue to be) the inspiration for artificial neurons[0]. I acknowledge that the word "inspiration" is doing a lot of work here, but the research continues[1][2]. If you have a PhD in neuroscience, I understand your need to push back on the hand-wavy optimism of the technologists, but I think saying "almost no idea" is going a little far. Neuroscientists are not looking up from their microscopes and fMRIs, throwing up their hands, and giving up. Yes, there is a lot of work left to do, but it seems needlessly pessimistic to say we have made almost no progress, either in understanding biological neurons or in moving forward with their distantly related artificial counterparts.

Just off the top of my head, in my lifetime, I have seen discoveries regarding new neuropeptides/neurotransmitters such as orexin, starting to understand glial cells, new treatments for brain diseases such as epilepsy, new insight into neural metabolism, and better mapping of human neuroanatomy. I might only be a layman observing, but I have a hard time believing anyone can think we've made almost no progress.

[0] https://en.wikipedia.org/wiki/History_of_artificial_neural_n...

[1] https://ai.stackexchange.com/a/3936

[2] https://www.nature.com/articles/s41598-021-84813-6

◧◩◪
106. light_+QT[view] [source] [discussion] 2024-01-09 03:16:04
>>treebr+ZO
> Idle curiosity, but what NLP tools evaluate translation quality better than a person? I was under the (perhaps mistaken) impression that NLP tools would be designed to approximate human intuition on this.

This is a long story. But your take on this question reflects what the average person who responded to that survey knows, and that shows you how little the results mean. Here are some minutiae that really matter:

1. Even if you measure quality with people in the loop, what do you ask them? "Here's a passage in English and one in French: do you agree? Rate it out of 10"? It turns out people aren't calibrated at all to give reasonable ratings; you get basically junk results if you run this experiment.

2. You can run head-to-head experiments: do you like translation A more than translation B? But what criteria should raters use? Is accuracy what matters most? Is it how they would have translated it themselves? Is it how well A or B reads? Is it how well it represents the form of the source? Or the ideas of the source?

3. Are we measuring sentences? Paragraphs? Pages? Three-word sentences like "give me gruel" are pretty easy. Three-page translations get tricky: now you want to capture something about the style of the writer, or to realize that they're holding something back. For example, it can be really obvious in French that I'm withholding someone's gender, but not obvious at all in English. What about customs? Taboos? Do we even measure three pages' worth of translation in our NLP corpora? The respondents have no idea.

4. There are even domain-specific questions about translation. Do you know how to evaluate English-to-French translation of a contract? One that goes from common law to civil law? No way. You need to translate ideas now, not just words. How about medical translation? Most translation work is highly technical like this.

I could go on. Mostly we don't even measure these minutiae, or domain-specific translation, in our NLP benchmarks, because the tools aren't good enough for that. Nor do we measure 5-page translations for their fidelity.

We actually mostly don't measure translations using humans at all! We collect translations from humans and then compare machine translations to those human translations after the fact, using what are called parallel corpora (the historical example is the Hansard corpus, the proceedings of the Canadian parliament, which are manually translated into both English and French; the EU has also been a boon for translation research).
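To make the parallel-corpora idea concrete, here's a toy sketch (my own illustration, not the commenter's method or any production tool): a crude BLEU-style score that compares a machine translation against a single human reference via clipped n-gram overlap. Real evaluation metrics are considerably more careful and use multiple references.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Count the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu_like(candidate, reference, max_n=2):
    """Toy BLEU-style score: geometric mean of clipped n-gram
    precisions against one human reference, times a brevity penalty."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((c & r).values())      # clipped: can't reuse ref n-grams
        total = max(sum(c.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    # brevity penalty discourages ultra-short candidates
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0, and a garbled or unrelated candidate scores near 0, which is exactly the "compare after the fact" workflow: no human in the loop at scoring time.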

I'm scratching the surface here. Translation is a really complicated topic. My favourite book related to this is the Dictionary of Untranslatables https://press.princeton.edu/books/hardcover/9780691138701/di... Not something you'd read end-to-end but a really fun reference to dip into once in a while.

If someone who knows about these issues wants to say that there will be human-level translation AI in 10 years, ok, fine, I'm willing to buy that. But if someone who is ignorant of all of this tells me there will be human-level AI for translation in 10 years, eh, they just don't know what they're talking about. I am, by the way, only a visitor to translation research: I've published in the area, but I'm not an expert at all, and I don't even trust my own opinion on when it will be automated.

About biases: I saw appendices A and D.

Seniority doesn't mean >1000 citations. There are master's students with 1000 citations in junk journals who happened to get a paper into a better venue. Citation count is not an indication of anything.

The way they count academia vs. industry is meaningless. There are plenty of people who have a university affiliation but are primarily at a startup. There are plenty of people who are minor coauthors on a paper, or even faculty who are mostly interested in making money off the AI hype. And there are plenty of people who graduated 3 years ago, for whom this paper is a wrap-up of their academic work: they counted as academic in the survey, but now they're in industry. Etc.

◧◩
112. walkho+9H2[view] [source] [discussion] 2024-01-09 17:15:10
>>sveme+Ed
This argument gives a 35% chance of AI "taking over" (granted, this does not mean extinction) this century: https://www.foxy-scout.com/wwotf-review/#underestimating-ris.... The argument breaks the outcome into 6 steps, assigns a probability to each step, and multiplies the probabilities together.
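For what it's worth, the arithmetic behind that style of argument is just a product of conditional probabilities: with six steps, even fairly high per-step odds compound to something much smaller. The step values below are made-up placeholders for illustration, not the numbers from the linked article:

```python
import math

# Hypothetical per-step probabilities (illustrative only, NOT the
# article's actual numbers); each is P(step | all previous steps held).
step_probs = [0.9, 0.8, 0.85, 0.8, 0.9, 0.8]

overall = math.prod(step_probs)
print(f"{overall:.0%}")  # → 35%: individually likely steps still compound down
```

The flip side, of course, is that the final number is extremely sensitive to each per-step estimate, which is the usual critique of these multiply-the-steps forecasts.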