zlacker

1. wahnfr+ (OP) 2025-08-22 04:39:00
Can't do that with state-of-the-art LLMs, and there's no sign of that changing (vendors like to retain control over model behavior). I would not want to use or contribute to a project that embraces LLMs yet disallows the leading models.

Besides, LLM output is not really any more trustworthy (even if reproducible) than a contribution from an anonymous actor; both require review. Reproducibility of output from a prompt doesn't mean the output followed traceable logic that would let you skip a full manual code review, as in your mass-renaming example. LLMs also produce antagonistic output from innocuous prompts from time to time.
