zlacker

[parent] [thread] 59 comments
1. ipnon+(OP)[view] [source] 2021-10-27 18:11:22
Can any users give their opinion on how it's helping their productivity? What problems are they finding, if any?
replies(16): >>ed_ell+71 >>capabl+P2 >>etaioi+J3 >>dreyfa+j4 >>dgunay+06 >>abidla+96 >>speedg+q7 >>ridicu+E7 >>the42t+d8 >>JoshCo+0b >>nemo16+wb >>kall+Sb >>rawtxa+Gc >>baby+Xm >>trotha+1n >>karmas+fp
2. ed_ell+71[view] [source] 2021-10-27 18:16:09
>>ipnon+(OP)
It is massively improving my productivity; things I couldn't be bothered to write, it does for me.

The thing I find really good is that it can predict what I will do next. Say I have a list of columns in some text somewhere in the project: when I write one line like df = df.withColumn("OneItemInList"), Copilot will then add the same for all the other items, which is really nice.
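
A minimal sketch of the kind of completion being described, assuming a PySpark DataFrame and made-up column names (only the first withColumn line is typed by hand):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1,)], ["id"])

    # Type the first line; Copilot suggests the rest, pulling the remaining
    # names from a list it finds elsewhere in the project.
    df = df.withColumn("OneItemInList", F.lit(None))
    df = df.withColumn("SecondItemInList", F.lit(None))
    df = df.withColumn("ThirdItemInList", F.lit(None))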

3. capabl+P2[view] [source] 2021-10-27 18:23:34
>>ipnon+(OP)
Tried it out for a while, and it's clear that it's trying to get people to be faster at writing boilerplate, not get people to write better code.

I'm a bit scared of what this means, as I don't think being able to write boilerplate faster is something worthwhile. The example ed_elliott_asc gave is one of those cases where, instead of fixing things so you don't have to repeat yourself, Copilot makes it easy to just live with the boilerplate instead.

replies(4): >>airstr+17 >>pc86+l7 >>csee+E8 >>aranch+g9
4. etaioi+J3[view] [source] 2021-10-27 18:26:36
>>ipnon+(OP)
I only played around with it on OpenAI but it's the same model as far as I know. It's pretty good at regurgitating algorithms it's seen before. It's not good at all at coming up with new algorithms.

It's very good at translating between programming languages, including pseudocode.

It can write a lot more valid code much quicker than any human, and in a whole slew of languages.

I haven't had the urge to use it much after playing around with it constantly for a few days, but it was pretty mind-blowing.

replies(1): >>yeptha+O4
5. dreyfa+j4[view] [source] 2021-10-27 18:29:15
>>ipnon+(OP)
It’s a dream come true for the script kiddies.
6. yeptha+O4[view] [source] [discussion] 2021-10-27 18:31:19
>>etaioi+J3
Your response makes me wonder whether poisoning the well is possible by submitting code to GitHub that mixes multiple languages and coding styles: a single file with a function signature written in JavaScript and the body written in Python + Ruby. Enough code like that would surely break the AI model behind it, unless Copilot has some sort of ingestion validation, which wouldn't surprise me.
replies(3): >>Grimm1+i6 >>stu2b5+H6 >>ctoth+JA
7. dgunay+06[view] [source] 2021-10-27 18:36:29
>>ipnon+(OP)
It can generate the body of test cases well, especially in BDD frameworks where you write the high-level scenario first to prime it with context. Less tedium encourages me to write more tests.

More verbose languages like C++ become less obnoxious to write in. I know RSI has been mentioned and any tool which cuts down on excessive typing will help with that.

It sometimes reveals bits of the standard library I wasn't aware of in unfamiliar languages. I can write my intent as a comment and then it may pull out a one-liner to replace what I would have normally done using a for loop.

The main downside I've observed is that if I'm not trying to rein it in, it can result in a lot of WET code, since it can pattern-match other areas of the surrounding code but can't actually rewrite anything that has already been written. It is important to go back and refactor the stuff it produces to avoid this.
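
To make the "intent as a comment" point concrete, a hypothetical sketch (the comment and both versions are illustrative, not actual Copilot output):

    words = ["pear", "fig", "apple"]

    # What I would normally have written by hand:
    longest = words[0]
    for w in words:
        if len(w) > len(longest):
            longest = w

    # Find the longest word in the list
    longest = max(words, key=len)   # the kind of stdlib one-liner it suggests instead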

8. abidla+96[view] [source] 2021-10-27 18:37:14
>>ipnon+(OP)
What I like most about Copilot is seeing different programming styles suggested to me.

For example, I didn't know about self.fail() in unittest and had never used it, but Copilot suggested it, and it produced the most readable version of the unit test.
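
For anyone who hasn't used it, self.fail() ends the test with an explicit message instead of contorting the logic into an assert; a minimal sketch (the function under test is a stand-in):

    import unittest

    def parse(text):
        # stand-in for the real function under test
        raise ValueError("invalid input")

    class ParserTest(unittest.TestCase):
        def test_rejects_bad_input(self):
            try:
                parse("not valid")
            except ValueError:
                return                                    # expected path
            self.fail("parse() accepted invalid input")   # explicit, readable failure

    if __name__ == "__main__":
        unittest.main()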

9. Grimm1+i6[view] [source] [discussion] 2021-10-27 18:38:19
>>yeptha+O4
In any training on code I've done, we've written a parser that validates against tree-sitter grammars to make sure the input is at least syntactically valid in some known subset of the languages we're training on.
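
For a rough idea of what such an ingestion filter looks like, a much-simplified single-language sketch using Python's own ast module in place of tree-sitter grammars (the pipeline described above is language-agnostic; this is only an analogue):

    import ast

    def is_valid_python(source: str) -> bool:
        """Keep only files that parse; drop Franken-files that mix languages."""
        try:
            ast.parse(source)
            return True
        except SyntaxError:
            return False

    print(is_valid_python("def f(x):\n    return x + 1"))       # True
    print(is_valid_python("function f(x) { return x + 1; }"))   # False: JS syntax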
replies(1): >>yeptha+M8
10. stu2b5+H6[view] [source] [discussion] 2021-10-27 18:39:40
>>yeptha+O4
Probably, but you would have to submit an absurdly large amount of code to make a dent. Practically unreasonable, considering their training corpus is also growing with every line of public code submitted on GitHub.

So not only would you have to submit an insanely large amount of code, you're also racing against literally millions of users writing legitimate code at any given time.

replies(2): >>yeptha+p8 >>josefx+xb
11. airstr+17[view] [source] [discussion] 2021-10-27 18:40:59
>>capabl+P2
> I don't think being able to faster write boilerplate is something worthwhile

But do you believe people being slower at writing boilerplate is undesirable?

replies(2): >>kyleee+D7 >>chrsig+19
12. pc86+l7[view] [source] [discussion] 2021-10-27 18:43:12
>>capabl+P2
Writing a slightly abstracted library to handle populating a list isn't necessarily "fixing" something. It might be, for sure, but that is going to be very use-case dependent, and there are a lot of instances where it's better to have 5, 10, or yes, even 15-20+ nearly identical lines and be done in a minute or two (or 5 seconds with Copilot, IME) than to spend half a day tweaking test coverage on your one-off library.
replies(1): >>capabl+nw
13. speedg+q7[view] [source] 2021-10-27 18:43:23
>>ipnon+(OP)
It does the boring code for me.

If I want to throw an exception when an object is null or undefined, Copilot will write the if and the exception throw using the right error class, with a more meaningful error message than what I usually come up with.

If I want to map some random technical data table to something useful in my programming language, I can copy-paste the data from the PDF or HTML documentation into a comment block, give one example, and Copilot will write everything in the format I want.

If I want to slightly refactor some code, I can do it once or twice and Copilot helps a lot with refactoring the rest.

If I want to have a quick http server in nodejs, I don't have to Google anything.

It's a lot of tiny things like this.
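
To illustrate the table-mapping trick, a hypothetical sketch of the prompt pattern (the codes and the single seeded example are made up; the tool continues the dict in the same format):

    # Pasted from the vendor documentation:
    #   Code  Meaning
    #   100   OK
    #   200   RETRY_LATER
    #   300   FATAL_ERROR
    STATUS_CODES = {
        100: "OK",            # write the first entry as the example...
        200: "RETRY_LATER",   # ...and the rest gets completed in the same style
        300: "FATAL_ERROR",
    }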

replies(1): >>TaupeR+6c
14. kyleee+D7[view] [source] [discussion] 2021-10-27 18:44:24
>>airstr+17
It may be desirable for boilerplate to be maximally painful if it forces our collective hands to cut down on boilerplate and innovate it away
replies(3): >>keving+T7 >>closep+Y9 >>verve_+6b
15. ridicu+E7[view] [source] 2021-10-27 18:44:24
>>ipnon+(OP)
There's a risk of it turning into a worse StackOverflow, by suggesting plausible-looking but subtly incorrect code. Here are two examples I found:

https://twitter.com/ridiculous_fish/status/14527512360594513...

16. keving+T7[view] [source] [discussion] 2021-10-27 18:45:20
>>kyleee+D7
Who is realistically going to innovate the boilerplate out of Java if they're stuck using it at work?
replies(3): >>gridsp+Ba >>stelco+zh >>amused+Pl1
17. the42t+d8[view] [source] 2021-10-27 18:47:12
>>ipnon+(OP)
I was working on some internationalization stuff, translating some phrases from English to Portuguese, and Copilot just did it for me. It does not seem like much, but to me that is amazing.

I was able to write {"Settings": ...} and Copilot completed it with {"Settings": "Configurações"}. That tool is simply amazing.

18. yeptha+p8[view] [source] [discussion] 2021-10-27 18:48:00
>>stu2b5+H6
Why not just use AI to generate the code, and automate submission via APIs?
19. csee+E8[view] [source] [discussion] 2021-10-27 18:48:54
>>capabl+P2
It's going to be great for exploratory data science. You don't really need stellar, maintainable, or extensible code for that; the early stage is largely about iteration speed.
replies(1): >>manque+ua
20. yeptha+M8[view] [source] [discussion] 2021-10-27 18:49:20
>>Grimm1+i6
In which case, shift strategies toward code that looks correct but isn't, using syntax shared between languages as well as language-specific gotchas.
replies(1): >>Grimm1+4c
21. chrsig+19[view] [source] [discussion] 2021-10-27 18:50:05
>>airstr+17
Possibly yes, if you're only contrasting it with being able to be faster.

I mean, it's sort of a false dichotomy -- it's omitting a "default speed" for writing boilerplate that is neither enhanced nor impeded.

The potential issue with an enhanced speed for writing boilerplate is that there'll just be more and more boilerplate to maintain over time, and it's not clear what that cost will be.

How much more effort will be expended to replace things in multiple places? It exacerbates existing issues of "these two things look almost the same, but someone manually modified one copy...should that change be propagated?"

Meaning, it's essentially an ad-hoc code generator. Code generation can be a very useful technique (see protobufs), but without the ability to re-generate from a source?

Perhaps a possible enhancement might be for copilot to keep track of all blurbs it generated and propose refactoring/modifying all copies?

replies(1): >>airstr+fo
22. aranch+g9[view] [source] [discussion] 2021-10-27 18:51:04
>>capabl+P2
Boilerplate is exclusively what I use AI-powered code completion for (currently Tabnine).

In a perfect world we’d all have excellent comprehensive metaprogramming facilities in our programming languages and no incidence of RSI (e.g. carpal tunnel syndrome). Code completion is a good tool to deal with these realities.

23. closep+Y9[view] [source] [discussion] 2021-10-27 18:54:39
>>kyleee+D7
That’s a good plan if your language isn’t Go. For us I think tools to wrangle boilerplate are a lot more feasible than actually eliminating it.
replies(1): >>zomgli+hl
24. manque+ua[view] [source] [discussion] 2021-10-27 18:56:44
>>csee+E8
Iteration speed also depends on the code being well written and performant; you need to get results faster to iterate faster.

Also, if you don't fully understand your code (when it's generated or copied from SO), which is not uncommon with junior developers and data science practitioners, you'll struggle to make even small changes for the next iteration, because you don't fully understand what your code is doing and how.

When your code is easily composable or modifiable, iterations become faster because you understand what you have written. That's one of the reasons analysts prefer Excel when the data size is within its limits.

25. gridsp+Ba[view] [source] [discussion] 2021-10-27 18:57:36
>>keving+T7
That, or any job where you're not permitted to make the sweeping changes required to resolve boilerplate. Many of my jobs had such restrictions.
26. JoshCo+0b[view] [source] 2021-10-27 18:59:21
>>ipnon+(OP)
I've actually found it helpful as an API autocomplete, but... also not helpful at the same time.

So, for example, I was processing an image to extract features, and a few variants of docstrings for the method got me to a pretty-close-to-working function which converted the image to grayscale, detected edges, and computed the result I wanted.

The helpful thing here was that there were certain APIs useful for doing this that it knew but which I would otherwise have had to look up. I had to go through and modify the proposed solution: it got the conditional in the right place, but I wanted a broader classification, so I switched from a (255, 255, 255) check to a nearBlack(pixel) function, which it then autocompleted successfully. I also had to modify the cropping.

When doing a similar task in the past I spent a lot more time on it, because I went down a route in which I was doing color classification based on the k nearest neighbors. Later I found that the AI I was working on was learning to exploit the opacity of the section of the screen I was extracting a feature from in order to maximize its reward, because it kept finding edge cases in the color classifier. I ended up switching to a different color space to make color difference distance functions more meaningful, but it wasn't good enough to beat the RL agent that was trying to exploit mistakes in the classifier.

Anyway, what I'm getting at here is that it is pretty easy to spend a lot of time doing similar things to what I'm doing and not get a great solution at the end. In this case though it only took a few minutes to get a working solution. CoPilot didn't code the solution for me, but it helped me get the coding done faster because it knew the APIs and the basic structure of what I needed to do. To be clear, its solutions were all broken in a ton of ways, but it didn't matter it still saved me time.

To give another example, let's say you have a keyboard key-press event and you aren't sure how to translate it into the key that was pressed. key.char? key.key? str(key)? key.symbol? The old way of figuring out which one is right is to look it up, but with Copilot you type '# Get the key associated with the key press', hit tab, and it gives you code that is broken but looks perfect, and you gain a false sense of confidence that you actually know the API. Later, after being amazed that it knew the API so well that you didn't have to look it up, you realize that the key-press event actually handles symbols differently, and so it errors on anything used as a modifier key.
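
For anyone curious what that failure mode looks like, a minimal sketch assuming a pynput-style listener, which is what the key.char guesses above resemble (the library choice here is a guess, not a given):

    from pynput import keyboard

    def on_press(key):
        # The "looks perfect" completion just uses key.char, which works for
        # letters and digits but raises AttributeError for modifiers.
        try:
            print("character key:", key.char)
        except AttributeError:
            print("special key:", key)   # e.g. Key.shift, Key.ctrl

    with keyboard.Listener(on_press=on_press) as listener:
        listener.join()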

My general impression is something like: Wow, this is amazing. It understood exactly what I wanted and it knew the APIs and coded up the solution... Wait, no. Hold on a second. This and that are wrong.

replies(1): >>karmas+Oq
27. verve_+6b[view] [source] [discussion] 2021-10-27 18:59:49
>>kyleee+D7
In my experience whenever someone tries to "innovate" away boilerplate they end up creating shitty abstractions that are inflexible, poorly documented, and unmaintained.

Boilerplate generally exists for a reason, and it's not because the creator likes typing.

28. nemo16+wb[view] [source] 2021-10-27 19:02:10
>>ipnon+(OP)
I wrote a test recently for a simple "echo"-style server: the client writes a name to a socket, and the server replies with "Hello, " + name. Nothing crazy.

In the test body, I wrote "foo" to the socket. Copilot immediately filled in the rest of the test: read from the socket, check that the result is "Hello, foo", and fail the test with a descriptive error otherwise.

wtf? How did it figure out that the correct result was "Hello, foo"? I changed "Hello" to "Flarbnax", and sure enough, Copilot suggested "Flarbnax, foo". I added a "!" to the end, and it suggested that too. After pushing a bit further, the suggestions started to lose accuracy: it would check for "???" instead of "??" for example, or it would check for a newline where there was none. But overall I came away very impressed.
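
For a rough idea of the shape of that test, a sketch in Python (the original language isn't stated, and the port and names here are made up):

    import socket
    import unittest

    class GreetingServerTest(unittest.TestCase):
        def test_greets_by_name(self):
            # Assumes the server under test is listening locally on port 9000.
            with socket.create_connection(("localhost", 9000)) as sock:
                sock.sendall(b"foo")
                reply = sock.recv(1024).decode()
                # This is the part Copilot filled in, expected string and all.
                self.assertEqual(reply, "Hello, foo",
                                 f"unexpected greeting: {reply!r}")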

29. josefx+xb[view] [source] [discussion] 2021-10-27 19:02:13
>>stu2b5+H6
> Probably but you would have to submit an absurdly large amount of code to make a dent.

So how about an already poisoned well. How up to date is the average Github project on encryption standards?

30. kall+Sb[view] [source] 2021-10-27 19:04:00
>>ipnon+(OP)
If I have an if else case, a switch statement or something similar, it can often predict the next branch exactly how I would write it. That‘s probably 80% of the suggestions I accept, the rest is single line autocompletes. I have never accepted a whole function implementation, and they are actually rather annoying because they make the document jump.

It‘s useful enough for me, as a magic autocomplete.

31. Grimm1+4c[view] [source] [discussion] 2021-10-27 19:04:53
>>yeptha+M8
Yeah, but if malicious intent is a concern you can just spin up a sandboxed instance to run the code in and check first.

Really, the thing is there's no way to ascribe correctness to a piece of code, right? Even humans fail at this. The only "correct" code is rote algorithmic code that has a well-defined method of operation, and there are likely far more correct examples of that than you'd ever be able to poison.

You might be able to mislead it by using names that say one thing while the code does another, but again, you'd be fighting against the tide of correctly named things.

32. TaupeR+6c[view] [source] [discussion] 2021-10-27 19:05:07
>>speedg+q7
This is the kind of thing I would need to see in real time, because I simply can't believe that it does any of these things reliably, without having to comb through the output and make sure it hasn't made any mistakes, which would take just as much time as doing it by hand.
replies(3): >>gfosco+6h >>Polycr+4j >>baby+2n
33. rawtxa+Gc[view] [source] 2021-10-27 19:08:41
>>ipnon+(OP)
It's truly amazing; it almost felt magical the first few autocomplete results I got.

There are the benefits that a lot of people mentioned, but to me the biggest one is that I can avoid procrastination. Usually when I'm blocked on something I'll run a search in the browser, but very quickly I go off the trail and end up just browsing the web and losing a lot of time. Now when I'm blocked I simply type a comment describing what I'm trying to do, and the autocomplete suggestion is pretty damn good and unblocks me very quickly. Most surprising of all, it somehow picks up my code style from the context.

34. gfosco+6h[view] [source] [discussion] 2021-10-27 19:28:57
>>TaupeR+6c
Likewise skeptical, but I have been super impressed with it. I just got into the technical preview and worked through a specific task I needed to do (involving mongoose, a mongo aggregate query, a few loops, some date functions), starting by adding a comment above each line. It helped a lot, actually; it felt like a collaboration.

I'll reproduce a generic example I sent a friend.

Prompt:

    const redis = require('redis');

    // Create a redis client
Copilot suggested:

    const client = redis.createClient();
I accepted it and moved 2 lines down, prompt:

    // Get the current count from redis
Copilot suggested a start to the operation:

    client.get('count', (err, count) => {
I accepted it, and Copilot continued:

      if (err) {
        console.log(err);
        return;
      }

      // If there is no count, set it to 0
      if (!count) {
        count = 0;
      }

      // Increment the count
      count++;

      // Set the new count in redis
      client.set('count', count);
  
      // Log the new count
      console.log(count);
    });
replies(3): >>aaaaaa+ij >>foepys+4k >>matsem+ht
35. stelco+zh[view] [source] [discussion] 2021-10-27 19:30:33
>>keving+T7
I mean, Clojure kinda does that
replies(1): >>keving+bC
36. Polycr+4j[view] [source] [discussion] 2021-10-27 19:37:23
>>TaupeR+6c
It's _very_ good at "learn by example", with some twists. It _does_ make mistakes, and I do double-check it, but it still definitely saves time. I used it yesterday to write the bulk of a new audio backend for a game engine; it filled out a lot of the "boilerplate" integration work (e.g. generating all the functions like "set volume/pan/3D audio position" that map 1:1 onto functions in the other library).

I will say, though, that it's also good at making up code that looks very believably real but doesn't actually work.

The ethics involved in Copilot are a bit strange, and I'm not sure I'll keep using it for those reasons, but it does a good job.

37. aaaaaa+ij[view] [source] [discussion] 2021-10-27 19:38:09
>>gfosco+6h
Why does it increment the count?
replies(1): >>gfosco+Jk
38. foepys+4k[view] [source] [discussion] 2021-10-27 19:41:29
>>gfosco+6h
Redis has the INCR command, which does this on the server without the additional round-trips (and without the race condition). It also treats a missing key as 0 before incrementing.

So I actually consider this to be exactly the bad behavior that people accuse Copilot of.
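
For comparison, the single-round-trip version; shown here as a sketch with redis-py in Python rather than the node client used above:

    import redis

    client = redis.Redis()

    # INCR treats a missing key as 0 and increments it server-side in one
    # atomic operation, so there is no get/set round-trip or race condition.
    new_count = client.incr("count")
    print(new_count)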

replies(2): >>gfosco+on >>speedg+Co
39. gfosco+Jk[view] [source] [discussion] 2021-10-27 19:44:02
>>aaaaaa+ij
I assume that in its training, incrementing a counter in redis is common.
40. zomgli+hl[view] [source] [discussion] 2021-10-27 19:46:25
>>closep+Y9

    if commentErr != nil {
        hn.Upvote("https://news.ycombinator.com/item?id=29017491")
    }
41. baby+Xm[view] [source] 2021-10-27 19:54:42
>>ipnon+(OP)
As others pointed out, it makes boring or repetitive tasks a breeze.

Also, it's like a more clever autocomplete most of the time; even when it's wrong about which function to call, you can use it as foundation code to go faster.

And you don’t need to think too much about it, it really keeps you in the flow.

42. trotha+1n[view] [source] 2021-10-27 19:55:20
>>ipnon+(OP)
I find it works well when my intent is clear. For example, I might want to log a value I just computed for debugging purposes. I type LOG, wait a second, and it completes a zephyr logging macro, complete with a sensible message and the value I just computed.

It sort of feels like pair programming with an undergraduate, except copilot never learns. That isn't to say it's bad, more that it is just a tool you can hand simple stuff off to, except the handoff is zero-effort.

EDIT: I will say that there are times when it makes up fantasy constants or variable names, that seem plausible but don't exist. An eventual version of Copilot that includes more normal autocompletion information, so it only suggests symbols that exist, will be a stronger tool.

43. baby+2n[view] [source] [discussion] 2021-10-27 19:55:23
>>TaupeR+6c
I was really skeptical at first, but after using it for a while omg it is just insane.
44. gfosco+on[view] [source] [discussion] 2021-10-27 19:56:41
>>foepys+4k
This was just one of its suggestions, but you're right of course... it's all based on the training data and the idioms used there. If it doesn't weight more modern code higher, and isn't aware of new versions and methods, it isn't going to be super intelligent... but it can still give you some ideas.
45. airstr+fo[view] [source] [discussion] 2021-10-27 20:01:15
>>chrsig+19
> I mean, it's sort of a false dichotomy -- it's omitting a "default speed" for writing boilerplate that is neither enhanced nor impeded.

I'm not sure. I think that framing omits a "default amount" of boilerplate that has to be written regardless of one's individual preference, and that is really a function of the language/framework of choice, the existing codebase, and the problem at hand.

Removing that boilerplate would be ideal, but it is not always possible given limited resources, time constraints, and the usual inability to make sweeping changes to the codebase.

So we settle for the second-best solution, which is to automate away that tedious process (or, short of that, give developers tools to get it out of the way faster) so we can all focus on "real work".

replies(1): >>chrsig+451
46. speedg+Co[view] [source] [discussion] 2021-10-27 20:03:32
>>foepys+4k
Yes, it doesn't exactly shine there. I would also throw the error instead of printing it to the console and returning undefined.
replies(1): >>ponyou+fK
47. karmas+fp[view] [source] 2021-10-27 20:06:28
>>ipnon+(OP)
Useful for, say, writing a Python script that does some mundane things: generating all the argparse lines for you, reading the files, etc.

In a way, it does the dirty plumbing surprisingly well. But when it comes to implementing the core of the algorithm, it is not there yet; the potential is huge, though.
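
As an illustration of the kind of argparse boilerplate that gets filled in from a one-line comment (the flags here are made up):

    import argparse

    # Parse the input file, output directory and verbosity from the command line
    parser = argparse.ArgumentParser(description="Process some files.")
    parser.add_argument("input_file", help="path to the input file")
    parser.add_argument("-o", "--output-dir", default=".", help="where to write results")
    parser.add_argument("-v", "--verbose", action="store_true", help="chatty logging")
    args = parser.parse_args()

    print(args.input_file, args.output_dir, args.verbose)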

48. karmas+Oq[view] [source] [discussion] 2021-10-27 20:14:07
>>JoshCo+0b
Right

I am in the same boat as you. I am simultaneously wowed and underwhelmed to some degree.

Yes, when it gets it right it is amazing; it feels like cheating. But at the same time, it often does... too much? Reading a huge chunk of code and figuring out where it goes wrong is not for me. Also, Copilot doesn't really know the API, so the mental tax of making sure your program really behaves isn't any less.

But again, I see the idea of Copilot as already a huge win. I hate writing those manual scripts that just offer people an entrance to some utility behind them. Copilot does those things surprisingly well and accurately.

Let it improve in the future, and we will see changes that are quite fundamental to the idea of programming itself.

49. matsem+ht[view] [source] [discussion] 2021-10-27 20:28:19
>>gfosco+6h

  // Create a redis client
Writing that probably takes longer than just doing it with the IDE help in JetBrains, though?

Press "r", press "." to autocomplete so it now says "redis.", then type "cC" and it suggests createClient (or type "create" or "client" if you're not sure what you're looking for). Now it says "redis.createClient()". Press ctrl+alt+v to extract a variable, or type ".const" and press tab to apply the const live template. You end up with the same result in two seconds.

replies(1): >>anigbr+8w1
50. capabl+nw[view] [source] [discussion] 2021-10-27 20:45:44
>>pc86+l7
> Writing a slightly abstracted library to handle populating a list

> than spend half a day tweaking test coverage on your one-off library

If you need to write a library and spend half a day to populate a list, you have bigger problems than boilerplate.

Nothing wrong with having duplicate lines. The problem comes when writing those lines becomes automated, so you start spewing them all over the place.

51. ctoth+JA[view] [source] [discussion] 2021-10-27 21:07:26
>>yeptha+O4
I don't know if this is true, but I would assume that the tokenizers they used for Codex use actual language parsers which would drop invalid files like this and make this attack infeasible.

When I was playing around a couple years ago with the Fastai courses in language modeling I used the Python tokenize module to feed my model, and with excellent parser libraries like Lark[0] out there it wouldn't take that long to build real quality parsers.

Of course, I could be totally wrong and they might just be dumping pure text in. Shudder.

[0]: https://github.com/lark-parser/lark

52. keving+bC[view] [source] [discussion] 2021-10-27 21:15:24
>>stelco+zh
So the solution to the problems that Copilot tries to solve is "migrate your workplace to Clojure"? Ordinary devs can't do that.
replies(1): >>stelco+Fx3
53. ponyou+fK[view] [source] [discussion] 2021-10-27 22:12:08
>>speedg+Co
You are in a callback there, so say goodbye to your error!
replies(1): >>speedg+wH1
54. chrsig+451[view] [source] [discussion] 2021-10-28 00:54:40
>>airstr+fo
I agree that there's some default amount of boilerplate that needs to be written -- but one isn't impeded by that -- it's just built into the task.

An impediment would be something that adjusts the status quo in a negative direction, e.g. a hardware failure.

55. amused+Pl1[view] [source] [discussion] 2021-10-28 03:31:49
>>keving+T7
Lombok and most IntelliJ features make Java boilerplate pretty obsolete.
56. anigbr+8w1[view] [source] [discussion] 2021-10-28 05:31:27
>>matsem+ht
The power of something like Copilot is in building out stuff you're not familiar with or don't have templates set up. It's probably not as helpful when you already have a clear idea of what you want to do and just need it rather than think about it.

+1 for the variable extraction thing, I've been using their IDE for ages and it never occurred to me to look for such a thing.

replies(1): >>matsem+JG1
57. matsem+JG1[view] [source] [discussion] 2021-10-28 07:25:03
>>anigbr+8w1
I use it a lot, especially when writing old-school Java. Instead of writing "MyClass myclass = new MyClass()", I just write "new MyClass()" and get the typing for free. It's even better with longer expressions where you don't want to think about the type up front, like when working with streams.
replies(1): >>anigbr+IP1
58. speedg+wH1[view] [source] [discussion] 2021-10-28 07:34:25
>>ponyou+fK
Oh right. I once made a layer on top of the redis client to use promises because callbacks are a pain to deal with.
59. anigbr+IP1[view] [source] [discussion] 2021-10-28 09:02:57
>>matsem+JG1
Yeah, this would be quite helpful for me, as I tend to just experiment with things in the console (cleaning up messy datasets and the like) and then copy or rewrite it into something more structured later. I feel like I'm only using about 20% of what PyCharm can do.
60. stelco+Fx3[view] [source] [discussion] 2021-10-28 19:24:58
>>keving+bC
Oh, I was just chiming in really, not trying to say anything about Copilot.