zlacker

[return to "GitHub Copilot available for JetBrains and Neovim"]
1. wara23+UC[view] [source] 2021-10-27 20:49:39
>>orph+(OP)
I have a few questions about copilot. I haven’t gotten a chance to use it yet.

Is it irrational that this makes me a little anxious about job security over the long term? Idk why, but this was my initial reaction when learning about this.

In a scenario where Copilot and its likes become widely used, can it be argued that this might improve productivity but stifle innovation?

I'm pretty early in my career, and the rate at which things seem capable of changing doesn't sit too well with me.

◧◩
2. abeced+GL[view] [source] 2021-10-27 21:39:16
>>wara23+UC
It's reasonable to worry.

- Copilot is qualitatively different from the kinds of automation of programming we've seen before.

- It's barely version 1.0 of this kind of thing. Deep learning has been advancing incredibly for a decade and doesn't seem to be slowing down. Researchers also work on things like mathematical problem-solving, which would tie in to "real work" and not just the boilerplate.

- In past examples of AI going from subhuman to superhuman, e.g. chess and go, practitioners did not expect to be overtaken just a few years after AI hadn't even felt like real competition. You'd read quotes from them about the ineffability of human thought.

What to do personally, I don't know. Stay flexible?

◧◩◪
3. anigbr+q91[view] [source] 2021-10-28 00:39:23
>>abeced+GL
I could see it being big with domain specialists who can tell very specific 'user stories' but don't want to stop what they're doing in order to learn programming.

I could see it being huge for the GUI scraping market. Or imagine a browser plugin that watches what you're doing and then offers to 'rewire' the web page to match your habits.

Imagine some sort of Clippy assistant watching where you hover attention as you read HN for example. After a while it says 'say, you seem to be more interested in the time series than the tree structure, would you rather just look at the activity on this thread than the semantics?' Or perhaps you import/include a sentiment analysis library, and it just asks you whether you want the big picture for the whole discussion, or whether you'd rather drill down to the thread/post/sentence level.

I notice all the examples I'm thinking of revolve around a simple pattern of someone asking 'Computer, here is a Thing; what can you tell me about it?' and the computer offering simplified schematic representations that allow the person to home in on features of interest and either do things with them or posit relationships to other features. This will probably set off all sorts of arms races, e.g. security people will want to misdirect AI-assisted intruders, and marketers will probably want to start rendering pages as flat graphics to maintain brand differentiation and engagement, vs a 'clean web' movement that wants to get rid of visual cruft and emphasize the basic underlying similarities.

It will lead to quite bitter arguments about how things should be: you'll have self-appointed champions of aesthetics saying that AI is decomposing the rich variety of human creativity into some sort of borg-like formalism that reflects its autistic creators, and information liberators accusing the first group of being obscurantist tyrants trying to profit off making everything more difficult and confusing than it needs to be.

[go to top]