zlacker

[parent] [thread] 20 comments
1. Workac+(OP)[view] [source] 2022-12-15 14:32:19
The thing about ChatGPT is that it's a warning shot. And all these people I see talking about it are laughing about how the shooter missed them.

Clearly ChatGPT is going to improve, and AI development is moving at a breakneck pace and accelerating. Dinging it for totally fumbling 5% or 10% of written code is completely missing the forest for the trees.

replies(6): >>jhbadg+va >>throwa+jl >>woeiru+uE >>tarran+5W >>idontp+jX >>allisd+mq3
2. jhbadg+va[view] [source] 2022-12-15 15:10:54
>>Workac+(OP)
Sure, it will improve, but I think a lot of people think "Hey, it almost looks human quality now! Just a bit more tweaking and it will be human quality or better!" But a more likely case is that the relatively simple statistical modeling tools ChatGPT uses (which are very different from how our brains work, not that we fully understand how our brains work) have a limit to how well they work, and they will hit a plateau (and are probably near it now). I'm not one of those people who believe strong AI is impossible, but I have a feeling that strong AI will take more than just manipulating a text corpus.
replies(1): >>ben_w+eC
3. throwa+jl[view] [source] 2022-12-15 15:48:56
>>Workac+(OP)
Anyone who has doubts only has to look at the price. It's free for now, and will be cheap enough when OpenAI starts monetizing. Price wins over quality; that's been demonstrated time and time again.
replies(1): >>ben_w+YC
4. ben_w+eC[view] [source] [discussion] 2022-12-15 16:59:21
>>jhbadg+va
I'd be surprised if it did only take text (or even language in general), but if it does only need that, then given how few parameters even big GPT-3 models have compared to humans, it will strongly imply that PETA was right all along.
5. ben_w+YC[view] [source] [discussion] 2022-12-15 17:02:03
>>throwa+jl
Depends on the details. Skip all the boring health and safety steps and you can make very cheap skyscrapers. They might fall down in a strong wind, but they'll be cheap.
replies(2): >>pixl97+7Q >>throwa+1h3
6. woeiru+uE[view] [source] 2022-12-15 17:08:15
>>Workac+(OP)
Yeah, but people were also saying this about self-driving cars, and guess what: that long tail is super long, and it's also far fatter than we expected. 10 years ago people were saying AI was coming for taxi drivers, and as far as I can tell we're still 10 years away.

I'm unimpressed by ChatGPT because the hype around it is largely the same as it was for GitHub Copilot, and Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful.)

replies(3): >>pleb_n+DT >>kerkes+jO1 >>edanm+dn3
7. pixl97+7Q[view] [source] [discussion] 2022-12-15 18:03:08
>>ben_w+YC
After watching lots of videos from third-world countries where skyscrapers are built and then torn down a few years later, I think I know exactly how this is going to go.
8. pleb_n+DT[view] [source] [discussion] 2022-12-15 18:19:22
>>woeiru+uE
I wonder if some of this is the 80/20 rule. We're seeing the easy 80% of the solutions, which has taken 20% of the time. We still have the hard 80% of the time (or most of it) to go for some of these new techs.
replies(2): >>rightb+gl1 >>lostms+cy1
9. tarran+5W[view] [source] 2022-12-15 18:31:24
>>Workac+(OP)
The thing is, though, it's trained on human text. And most humans are, by definition, very fallible. Unless someone made it so that it can never be trained on subtly wrong code, how will it ever improve? IMHO AI can be great for suggestions as to which method to use (Visual Studio has this, and I think there is an extension for Visual Studio Code for a couple of languages). I think fine-grained things like this are very useful, but I think code snippets are just too coarse to actually be helpful.
replies(1): >>tintor+Ds1
10. idontp+jX[view] [source] 2022-12-15 18:38:02
>>Workac+(OP)
This is magical thinking, no different from a cult.

The fundamental design of transformer architecture isn't capable of what you think it is.

There are still radical, fundamental breakthroughs needed. It's not a matter of incremental improvement over time.

11. rightb+gl1[view] [source] [discussion] 2022-12-15 20:24:37
>>pleb_n+DT
Replacing 80% of a truck driver's skill would suck but replacing 80% of our skill would be an OK programmer.
12. tintor+Ds1[view] [source] [discussion] 2022-12-15 20:56:51
>>tarran+5W
It can improve itself through experimentation with reinforcement learning. This is how humans improve too. AlphaZero does it.
replies(1): >>lostms+Ey1
13. lostms+cy1[view] [source] [discussion] 2022-12-15 21:25:16
>>pleb_n+DT
Considering that the deep conv nets that ended the last AI winter arrived in 2012, you are basically giving it 40 years till 100%.
14. lostms+Ey1[view] [source] [discussion] 2022-12-15 21:27:42
>>tintor+Ds1
The amount of work in that area of research is substantial. You will see world-shattering results in a few years.

Current SOTA: https://openai.com/blog/vpt/

15. kerkes+jO1[view] [source] [discussion] 2022-12-15 22:56:13
>>woeiru+uE
Tesla makes self-driving cars that drive better than humans. The reason you have to touch the steering wheel periodically is political/social, not technical. An acquaintance of mine reads books while he commutes 90 minutes from Chattanooga to work in Atlanta once or twice a week. He's sitting in the driver's seat, but he's certainly not driving.

The political/social factors which apply to the life-and-death decisions made driving a car, don't apply to whether one of the websites I work on works perfectly.

I'm 35, and I've been paid to write code for about 15 years. To be honest, ChatGPT probably writes better code than I did at my first paid internship. It's got a ways to go to catch up with even a junior developer, in my opinion, but it's only a matter of time.

And how much time? The expectation in the US is that my career will last until I'm 65ish. That's 30 years from now. Tesla has only been around 19 years and now makes self-driving cars.

So yeah, I'm not immediately worried that I'm going to lose my job to ChatGPT in the next year, but I am quite confident that my role will either cease existing or drastically change because of AI before the end of my career. The idea that we won't see AI replacing professional coders in the next 30 years strains credulity.

Luckily for me, I already have considered some career changes I'd want to do even if I weren't forced to by AI. But if folks my age were planning to finish out their careers in this field, they should come up with an alternative plan. And people starting this field are already in direct competition to stay ahead of AI.

replies(2): >>Panzer+eX1 >>prioms+2l2
16. Panzer+eX1[view] [source] [discussion] 2022-12-15 23:51:05
>>kerkes+jO1
I'm doubtful. There's a pretty big difference between writing a basic function and even a small program, and that's all I've seen out of these kinds of AIs thus far. They still get those wrong regularly because they don't really understand what they're doing; they're just mixing and matching their training set.

Roads are extremely regular, as things go, and as soon as you are off the beaten path, those AIs start having trouble too.

It seems that, in general, the long tail will be problematic for a while yet.

17. prioms+2l2[view] [source] [discussion] 2022-12-16 02:39:16
>>kerkes+jO1
I was under the impression that Tesla's self-driving is still not fully reliable. For example, a recent video shows a famous YouTuber having to take manual control 3 times in a 20-minute drive to work [0]. He mentioned how stressful it was compared to normal driving as well.

[0] https://www.youtube.com/watch?v=9nF0K2nJ7N8

replies(1): >>kerkes+95I
18. throwa+1h3[view] [source] [discussion] 2022-12-16 09:32:45
>>ben_w+YC
It does depend on the details. In special fields, like medical software, regulation might alter the market—although code even there is often revealed to be of poor quality.

But of all the examples of cheap and convenient beating quality (photography, film, music, and the many other industries that digital technology has disrupted), newspapers are more analogous than builders. Software companies are publishers, like newspapers. And newspapers had entire building floors occupied by highly skilled mechanical typesetters, who have long since been replaced: a handful of employees on a couple of computers could do the job faster, more easily, and at good enough quality.

Software has already disrupted everything else; it was only a matter of time before it disrupted the process of making software too.

19. edanm+dn3[view] [source] [discussion] 2022-12-16 10:41:24
>>woeiru+uE
> [...] Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful).

In what sense did Copilot fizzle badly? It's a tool that you incorporated into your workflow and that you pay money for.

Does it solve all of programming? No, of course not, and it's far from there. I think even if it improves a lot, it will not come close to replacing a programmer.

But a tool that lets you write code 10x or 100x faster is a big deal. I don't think we're far from a world in which every programmer has to use AI to be proficient at their job.

20. allisd+mq3[view] [source] 2022-12-16 11:08:39
>>Workac+(OP)
Excellent summation. The majority of software developers work on CRUD-based frontend or backend development. When this thing's attention goes beyond the 4k tokens it's limited to, far fewer developers will be needed in general. In the same way, fewer artists or illustrators will be needed for making run-of-the-mill marketing brochures.

I think the majority won't know what hit them when the time comes. My experience with ChatGPT has been highly positive, changing me from a skeptic to a believer. It takes a bit of skill to tune the prompts, but I got it to write frontend code, backend code, unit tests, and automation tests, and to generate test data flawlessly. I have seen and worked with much worse developers than this current iteration.

21. kerkes+95I[view] [source] [discussion] 2022-12-29 04:14:33
>>prioms+2l2
If you watch the video you linked, he admits he's not taking manual control because the car is driving unsafely, but because he's embarrassed. It's hard to tell from the video, but it seems like the choices he makes out of embarrassment are actually riskier than what the Tesla was going to do.

It makes sense. My own experience, nearly always driving a non-Tesla car at the speed limit, is that other drivers will try to pressure you into doing dangerous stuff so they can get where they're going a few seconds faster. I sometimes give in to that pressure, but the AI doesn't feel that pressure at all. So if you're paying attention and see the AI not giving in to that pressure, the tendency is to take manual control so you can. But that's not safer; quite the opposite. That's an example of the AI driving better than the human.

On the opposite end of the social anxiety spectrum, there's a genre of pornography where people are having sex in the driver's seats of Teslas while the AI is driving. They certainly aren't intervening 3 times in 20 minutes, and so far I don't know of any of these people getting in car accidents.
