zlacker

1. jiggaw+(OP) 2023-11-21 10:37:05
When experimenting with the early models that were set up for "text completion" instead of question-and-answer chat, I noticed that I could get them to generate vastly better code by having the LLM complete a high-quality "doc comment" style preamble instead of a one-line comment.
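
Roughly, the difference was between handing the model a bare one-liner and a fleshed-out docstring-style preamble. Something like this sketch (Python; complete() is just a hypothetical stand-in for whatever text-completion endpoint you're calling):

    # Weak prompt: a one-line comment gives the model almost nothing to latch onto.
    weak_prompt = "# parse a log line\ndef parse_log_line(line):"

    # Stronger prompt: a doc-comment style preamble pins down inputs, outputs,
    # and edge cases, so the completion tends to follow that spec.
    strong_prompt = '''def parse_log_line(line):
        """Parse a single access-log line into a dict.

        Expects the common log format, e.g.:
            127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326

        Returns a dict with keys: ip, user, timestamp, method, path, status, size.
        Raises ValueError if the line does not match the expected format.
        """
    '''

    def complete(prompt: str) -> str:
        """Hypothetical stand-in for a text-completion call; not a real API."""
        raise NotImplementedError

    # The model continues strong_prompt with a body that tends to match the documented
    # spec, whereas weak_prompt usually gets a much rougher guess.
    # code = complete(strong_prompt)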

I also noticed that if I wrote comments in "my style", then it would complete the code in my style also, which I found both hilarious and mildly disturbing.

replies(1): >>kromem+Nn3
2. kromem+Nn3 2023-11-22 04:54:59
>>jiggaw+(OP)
The fact that 90% of the people who are aware of and using LLMs have yet to experience one thinking their own thoughts before they do means we're in for a whole new slew of freak-outs as integration into real-world products expands.

It's a very weird feeling, for sure. I remember the first time Copilot took a comment I'd left at the end of the day as a starting point for the next morning and generated exactly what I would have come up with five minutes later, in my own personal style.

It doesn't always work, and the output often has compile errors, but when it does align just right, it's quite amazing and unsettling at the same time.
