zlacker

[return to "Thousands of AI Authors on the Future of AI"]
1. light_+Bd 2024-01-08 22:29:52
>>treebr+(OP)
I got this survey; for the record I didn't respond.

I don't think their results are meaningful at all.

Asking random AI researchers about automating a field they know nothing about means nothing. What do I know about the job of a surgeon? My opinion on whether current models can automate a job I don't understand is worthless.

Asking random AI researchers about automation outside of their area of expertise is also worthless. A computer vision expert has no idea what the state of the art in grasping is. So what does their opinion on installing wiring in a house count for? Nothing.

Take even an abstract task like translation. If you aren't an NLP researcher who has dealt with translation, you have no idea how you'd even measure how good a translated document is, so why are you being asked when translation will be "fluent"? You're asking a clueless person a question they literally cannot understand.
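To see why "measuring quality" is itself a research problem: the standard automatic metrics (BLEU and its relatives) reduce quality to n-gram overlap with reference translations. Here is a minimal sketch of that core idea only; real BLEU additionally takes a weighted geometric mean over n-gram orders and applies a brevity penalty:

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Fraction of candidate n-grams that also appear in the reference,
    with clipped counts (the 'modified precision' at the heart of BLEU)."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not cand_ngrams:
        return 0.0
    # Clip each candidate n-gram's count by how often it occurs in the reference.
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())

ref = "the cat sat on the mat"
print(ngram_precision("the cat sat on the mat", ref, 2))  # 1.0
# A scrambled sentence a human would reject still gets substantial credit:
print(ngram_precision("on the mat sat the cat", ref, 2))  # 0.6
```

The second call is exactly the failure mode: surface overlap rewards word salad, which is one reason non-specialists can't meaningfully answer when machine translation will be "fluent".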

This is a survey of AI hype, not any indication of what the future holds.

Their results are also highly biased. Most senior researchers aren't going to waste their time filling this out (90% of people did not fill it out). They almost certainly got very junior people and those with an axe to grind. Many of the respondents also have a conflict of interest: they run AI startups. Of course they want as much hype as possible.

This is not a survey of what the average AI researcher thinks.

2. treebr+ZO 2024-01-09 02:32:26
>>light_+Bd
Thank you for this comment. It is great to hear an inside take.

Idle curiosity, but what NLP tools evaluate translation quality better than a person? I was under the (perhaps mistaken) impression that NLP tools would be designed to approximate human intuition on this.

> Their results are also highly biased. Most senior researchers aren't going to waste their time filling this out (90% of people did not fill it out). They almost certainly got very junior people and those with an axe to grind. Many of the respondents also have a conflict of interest, they run AI startups.

The survey does address the points above a bit. Per Section 5.2.2 and Appendix D, the survey had a response rate of 15% overall and of ~10% among people with over 1000 citations. The shares of respondents who had given "when HLMI [more or less AGI] will be developed" or the "impacts of smarter-than-human machines" a "great deal" of thought prior to the survey were 7.6% and 10.3%, respectively. Appendix D indicates that they saw no large differences between industry and academic respondents besides response rate, which was much lower for people in industry.
