zlacker

[parent] [thread] 81 comments
1. hellop+(OP)[view] [source] 2026-02-04 05:08:36
And when programming with agentic tools, you need to actively push to keep the idea from regressing to the most obvious/average version. The amount of effort you need to expend on pushing an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.

There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.

replies(9): >>fallou+I3 >>jivetu+g4 >>GCUMst+i4 >>Der_Ei+86 >>dkdbej+Gr >>fflluu+QF >>rixed+6Q >>lo_zam+qw1 >>seg_lo+2l2
2. fallou+I3[view] [source] 2026-02-04 05:51:31
>>hellop+(OP)
You just described the burden of outsourcing programming.
replies(5): >>tomrod+79 >>darkwa+me >>agumon+If >>onion2+zv >>bitwiz+Il2
3. jivetu+g4[view] [source] 2026-02-04 05:57:52
>>hellop+(OP)
> need to make it crystal clear

That's not an upside unique to LLM vs. human-written code. When writing it yourself, you also need to make it crystal clear. You do that in the language of implementation.

replies(1): >>balama+xw
4. GCUMst+i4[view] [source] 2026-02-04 05:58:36
>>hellop+(OP)
I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see, the other is frustrating, leaves you with a lot of scratches and ultimately both of you "agreeing" on a marginal compromise.
replies(2): >>lambda+yB >>KptMar+LN
5. Der_Ei+86[view] [source] 2026-02-04 06:12:33
>>hellop+(OP)
Yet another example of "comments that are only sort of true because high temperature sampling isn't allowed".

If you use LLMs at very high temperature with samplers that correctly keep your writing coherent (i.e. min_p, or better ones like top-h, P-less decoding, etc.), then "regression to the mean" literally DOES NOT HAPPEN!!!!

replies(2): >>adevil+X6 >>hnlmor+Sj
◧◩
6. adevil+X6[view] [source] [discussion] 2026-02-04 06:18:20
>>Der_Ei+86
How do you configure LLM temperature in coding agents, e.g. opencode?
replies(2): >>Der_Ei+l7 >>kabr+h9
◧◩◪
7. Der_Ei+l7[view] [source] [discussion] 2026-02-04 06:22:21
>>adevil+X6
You can't without hacking it! That's my point! The only places where you can easily do so are via the API directly, or "coomer" frontends like SillyTavern, Oobabooga, etc.

Same problem with image generation (lack of support for different SDE solvers, the image-generation analogue of LLM sampling), but that world has its own "coomer" tools, i.e. ComfyUI or Automatic1111.
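For what it's worth, hitting an API directly really is the easy route; here's a minimal sketch against a llama.cpp-style local server (the `/completion` endpoint and field names follow llama.cpp's server; other servers differ, so treat the names as assumptions):

```python
# Sketch of setting sampler params via a direct API call; endpoint and
# field names follow llama.cpp's HTTP server, other backends vary.
import json
import urllib.request

def build_request(prompt, temperature=2.0, min_p=0.1):
    # Sampler fields are only honored by servers that support them.
    return {
        "prompt": prompt,
        "temperature": temperature,  # high values flatten the distribution
        "min_p": min_p,              # prunes tokens far below the top one
        "n_predict": 128,
    }

payload = build_request("Write a haiku about samplers.")
req = urllib.request.Request(
    "http://localhost:8080/completion",  # assumed local llama.cpp server
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# resp = urllib.request.urlopen(req)  # uncomment with a server running
```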

replies(1): >>yoyohe+q8
◧◩◪◨
8. yoyohe+q8[view] [source] [discussion] 2026-02-04 06:32:46
>>Der_Ei+l7
Once again, porn is where the innovation is…
replies(1): >>dizhn+cn
◧◩
9. tomrod+79[view] [source] [discussion] 2026-02-04 06:38:54
>>fallou+I3
100%! There is significant analogy between the two!
replies(1): >>salawa+Fa
◧◩◪
10. kabr+h9[view] [source] [discussion] 2026-02-04 06:39:25
>>adevil+X6
https://opencode.ai/docs/agents/#temperature

set it in your opencode.json
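Per that docs page it's a per-agent setting; roughly this shape in opencode.json (agent name hypothetical, schema assumed from the linked docs):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "build": {
      "temperature": 0.8
    }
  }
}
```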

replies(1): >>Der_Ei+AJ1
◧◩◪
11. salawa+Fa[view] [source] [discussion] 2026-02-04 06:51:42
>>tomrod+79
There is a reason management types are drawn to it like flies to shit.
replies(1): >>theshr+wn
◧◩
12. darkwa+me[view] [source] [discussion] 2026-02-04 07:28:13
>>fallou+I3
With the basic and enormous difference that the feedback loop is 100 or even 1000x faster. Which changes the type of game completely, although other issues will probably arise as we try this new path.
replies(1): >>Terr_+Zr
◧◩
13. agumon+If[view] [source] [discussion] 2026-02-04 07:40:01
>>fallou+I3
We need a new word for on-premise offshoring.

On-shoring ;

replies(8): >>aleph_+ki >>intend+fs >>pferde+xB >>helium+pN >>tmtvl+fV >>bregma+4w1 >>AgentO+DR1 >>fallou+Os3
◧◩◪
14. aleph_+ki[view] [source] [discussion] 2026-02-04 08:00:40
>>agumon+If
> On-shoring

I thought "on-shoring" was already commonly used for the process that undoes off-shoring.

replies(3): >>saghm+Bm >>boring+471 >>agumon+2n4
◧◩
15. hnlmor+Sj[view] [source] [discussion] 2026-02-04 08:12:17
>>Der_Ei+86
Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.

LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature will more likely increase the likelihood of unexecutable pseudocode than it would create a valid but more esoteric implementation of a problem.

replies(2): >>Terr_+es >>bob102+Ns
◧◩◪◨
16. saghm+Bm[view] [source] [discussion] 2026-02-04 08:32:58
>>aleph_+ki
How about "in-shoring"? We already have "insuring" and "ensuring", so we might as well add another confusingly similar sounding term to our vocabulary.
replies(1): >>weebul+sQ
◧◩◪◨⬒
17. dizhn+cn[view] [source] [discussion] 2026-02-04 08:37:24
>>yoyohe+q8
Please.. "Creative Writing"
◧◩◪◨
18. theshr+wn[view] [source] [discussion] 2026-02-04 08:40:34
>>salawa+Fa
Working with and communicating with offshored teams is a specific skill too.

There are tips and tricks on how to manage them and not knowing them will bite you later on. Like the basic thing of never asking yes or no questions, because in some cultures saying "no" isn't a thing. They'll rather just default to yes and effectively lie than admit failure.

19. dkdbej+Gr[view] [source] 2026-02-04 09:12:42
>>hellop+(OP)
Fair enough but I am a programmer because I like programming. If I wanted to be a product manager I could have made that transition with or without LLMs.
replies(3): >>raw_an+qZ >>sgarla+561 >>Pantal+Qy1
◧◩◪
20. Terr_+Zr[view] [source] [discussion] 2026-02-04 09:15:10
>>darkwa+me
That embeds an assumption that the outsourced human workers are incapable of thought, and experience/create zero feedback loops of their own.

Frustrated rants about deliverables aside, I don't think that's the case.

replies(2): >>darkwa+LB >>ambica+f61
◧◩◪
21. Terr_+es[view] [source] [discussion] 2026-02-04 09:17:15
>>hnlmor+Sj
To put it another way, a high-temperature mad-libs machine will write a very unusual story, but that isn't necessarily the same as a clever story.
replies(1): >>balama+XA
◧◩◪
22. intend+fs[view] [source] [discussion] 2026-02-04 09:17:44
>>agumon+If
Ai-shoring.

Tech-shoring.

replies(2): >>johnis+hu >>dzdt+0C
◧◩◪
23. bob102+Ns[view] [source] [discussion] 2026-02-04 09:22:30
>>hnlmor+Sj
High temperature seems fine for my coding uses on GPT5.2.

Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model after it proposes something each time.

I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.
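That loop is simple to sketch. Here `ask_model` is a placeholder for whatever LLM call you use (a hypothetical name), and Python's built-in `compile()` stands in for the project's real compiler or test run:

```python
# Sketch of the "feed errors back into the model" loop described above.
# ask_model is a stand-in for a real LLM call; compile() stands in for
# the project's actual compiler, linter, or test suite.
def repair_loop(source, ask_model, max_rounds=5):
    for _ in range(max_rounds):
        try:
            compile(source, "<generated>", "exec")
            return source  # compiles cleanly, accept it
        except SyntaxError as err:
            # Hand the error message back and ask for a revision.
            source = ask_model(source, f"{err.msg} (line {err.lineno})")
    raise RuntimeError("no compilable code within the round limit")
```

The round limit is the part that avoids the infinite tool-calling loops mentioned above.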

◧◩◪◨
24. johnis+hu[view] [source] [discussion] 2026-02-04 09:33:34
>>intend+fs
Would work, but with "snoring". :D
◧◩
25. onion2+zv[view] [source] [discussion] 2026-02-04 09:42:32
>>fallou+I3
Outsourcing development and vibe coding are incredibly similar processes.

If you just chuck ideas at the external coding team/tool you often get rubbish back.

If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.

◧◩
26. balama+xw[view] [source] [discussion] 2026-02-04 09:49:21
>>jivetu+g4
And programming languages are designed for clarifying the implementation details of abstract processes; while human language is this undocumented, half grandfathered in, half adversarially designed instrument for making apes get along (as in, move in the same general direction) without excessive stench.

The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).

But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization with responsibly wielding the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.

What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.

◧◩◪◨
27. balama+XA[view] [source] [discussion] 2026-02-04 10:24:58
>>Terr_+es
So why is this "temperature" not on, like, a rotary encoder?

So you can just, like, tweak it when it's working against your intent in either direction?

replies(1): >>Terr_+qM2
◧◩◪
28. pferde+xB[view] [source] [discussion] 2026-02-04 10:30:24
>>agumon+If
Corporate has been using the term "best-shoring" for a couple of years now. As best I can guess, it means "off-shoring or on-shoring, whichever of the two is cheaper".
◧◩
29. lambda+yB[view] [source] [discussion] 2026-02-04 10:30:30
>>GCUMst+i4
Right now vibe coding is more like training cats. You are constantly pushing against the model's tendency to produce its default outputs regardless of your directions. When those default outputs are what you want - which they are in many simple cases of effectively English-to-code translation with memorized lookup - it's great. When they are not, you might as well write the code yourself and at least be able to understand the code you've generated.
replies(1): >>kimixa+XL
◧◩◪◨
30. darkwa+LB[view] [source] [discussion] 2026-02-04 10:32:00
>>Terr_+Zr
No. It just reflects the harsh reality: what's really soul-crushing in outsourced work is having endless meetings to pass down / get back information, having to wait days/weeks/months to get some "deliverable" back to iterate on, etc. Yes, outsourced human workers are totally capable of creative thinking that makes sense, but their incentive will always be throughput over quality, since their bosses usually quote fixed prices (at least in my personal experience).

If you are outsourcing to an LLM in this case YOU are still in charge of the creative thought. You can just judge the output and tune the prompts or go deep in more technical details and tradeoffs. You are "just" not writing the actual code anymore, because another layer of abstraction has been added.

replies(2): >>Jagerb+yZ >>dimitr+f41
◧◩◪◨
31. dzdt+0C[view] [source] [discussion] 2026-02-04 10:34:00
>>intend+fs
vibe-shoring
32. fflluu+QF[view] [source] 2026-02-04 11:02:42
>>hellop+(OP)
This is why people think less of artists like Damien Hirst and Jeff Koons: their hands have never once touched the art. They have no connection to the effort. To the process. To the trial and error. To the suffering. They've outsourced it, monetized it, and made it as efficient as possible. It's also soulless.
◧◩◪
33. kimixa+XL[view] [source] [discussion] 2026-02-04 11:48:25
>>lambda+yB
Yup - I've likened it to working with juniors: often smart, with good understanding and "book knowledge" of many of the languages and tools involved, but you regularly have to step back and correct things, normally around local details and project specifics. But then the "junior" you work with every day changes, so you have to start again from scratch.

I think there needs to be a sea change in current LLM tech to make that no longer the case: either massively increased context sizes, so they can contain nearly a career's worth of learning (without the tendency to start ignoring that context that the larger end of today's still-way-too-small-for-this context windows already shows), or continuous training passes that integrate these "learnings" directly into the weights, which might be theoretically possible today but requires many orders of magnitude more compute than is available, even ignoring cost.

replies(1): >>throwt+J01
◧◩◪
34. helium+pN[view] [source] [discussion] 2026-02-04 11:56:37
>>agumon+If
We already have a perfect one

Slop;

◧◩
35. KptMar+LN[view] [source] [discussion] 2026-02-04 11:59:32
>>GCUMst+i4
I've never seen a horse that scratches you.
36. rixed+6Q[view] [source] 2026-02-04 12:18:26
>>hellop+(OP)
To me it feels a bit like literate programming: it forces you to form a much more accurate idea of your project before you start. Not a bad thing, but it can also be wasteful when you eventually realise, after the fact, that the idea was actually not that good :)
replies(1): >>wtetzn+go2
◧◩◪◨⬒
37. weebul+sQ[view] [source] [discussion] 2026-02-04 12:21:10
>>saghm+Bm
How about we leave "...shoring" alone?
◧◩◪
38. tmtvl+fV[view] [source] [discussion] 2026-02-04 12:56:32
>>agumon+If
Rubber-duckying... although a rubber ducky can't write code... infinite-monkeying?
replies(1): >>biofox+8e1
◧◩
39. raw_an+qZ[view] [source] [discussion] 2026-02-04 13:22:36
>>dkdbej+Gr
I’m a programmer (well half my job) because I was a short (still short) fat (I got better) kid with a computer in the 80s.

Now, the only reason I code, and have coded since the week I graduated from college, is to support my insatiable addictions to food and shelter.

While I like seeing my ideas come to fruition, over the last decade my ideas were a lot larger than I could reasonably do over 40 hours without having other people working on projects I lead. Until the last year and a half where I could do it myself using LLMs.

Seeing my carefully designed spec, including all of the cloud architecture, get done in a couple of days with my hands on the wheel, when it would have taken at least a week with me doing some of the work while juggling a couple of other people, is life-changing.

replies(1): >>docmar+5F1
◧◩◪◨⬒
40. Jagerb+yZ[view] [source] [discussion] 2026-02-04 13:23:52
>>darkwa+LB
Also, with an LLM you can tell it to throw away everything and start over whenever you want.

When you do this with an outsourced team, it can happen at most once per sprint, and with significant pushback, because there's a desire for them to get paid for their deliverable even if it's not what you wanted or suffers some other fundamental flaw.

replies(1): >>raw_an+M21
◧◩◪◨
41. throwt+J01[view] [source] [discussion] 2026-02-04 13:32:02
>>kimixa+XL
Try writing more documentation. If your project is bigger than a one man team then you need it anyways and with LLM coding you effectively have an infinite man team.
replies(1): >>kimixa+fA4
◧◩◪◨⬒⬓
42. raw_an+M21[view] [source] [discussion] 2026-02-04 13:46:21
>>Jagerb+yZ
Yep, just these past two weeks. I tried to reuse an implementation I had used for another project; it took me a day to modify it (with Codex), and when I tried it out it worked fine with a few hundred documents.

Then I tried to push 50,000 documents through it, and it crashed and burned like I suspected. It took one day to go from my second spec, more complicated but more scalable and not dependent on an AWS managed service, to working, scalable code.

It would have taken me at least a week to do it myself.

◧◩◪◨⬒
43. dimitr+f41[view] [source] [discussion] 2026-02-04 13:56:56
>>darkwa+LB
It doesn't have to be soul crushing.

Just like people more, and have better meetings.

Life is what you make it.

Enjoy yourself while you can.

replies(3): >>darkwa+Ni1 >>docmar+VA1 >>tayo42+JF1
◧◩
44. sgarla+561[view] [source] [discussion] 2026-02-04 14:06:37
>>dkdbej+Gr
Agreed. The higher-ups at my company are, like most places, breathlessly talking about how AI has changed the profession - how we no longer need to code, but merely describe the desired outcome. They say this as though it’s a good thing.

They’re destroying the only thing I like about my job - figuring problems out. I have a fundamental impedance mismatch with my company’s desires, because if someone hands me a weird problem, I will happily spend all day or longer on that problem. Think, hypothesize, test, iterate. When I’m done, I write it up in great detail so others can learn. Generally, this is well-received by the engineer who handed the problem to me, but I suspect it’s mostly because I solved their problem, not because they enjoyed reading the accompanying document.

replies(3): >>dahart+Df1 >>WorldM+Ov1 >>Camper+UN1
◧◩◪◨
45. ambica+f61[view] [source] [discussion] 2026-02-04 14:07:19
>>Terr_+Zr
Not really; it's just obviously true that the communication cycle with your terminal/LLM is faster than with a human over Slack/email.
◧◩◪◨
46. boring+471[view] [source] [discussion] 2026-02-04 14:10:59
>>aleph_+ki
En-shoring?
◧◩◪◨
47. biofox+8e1[view] [source] [discussion] 2026-02-04 14:47:25
>>tmtvl+fV
In silico duckying
◧◩◪
48. dahart+Df1[view] [source] [discussion] 2026-02-04 14:55:08
>>sgarla+561
FWIW, when a problem truly is weird, AI and vibe coding tend to not be able to solve it. Maybe you can use AI to help you spend more time working on the weird problems.

When I play sudoku with an app, I like to turn on auto-fill numbers, and auto-erase numbers, and highlighting of the current number. This is so that I can go directly to the crux of the puzzle and work on that. It helps me practice working on the hard part without having to slog through the stuff I know how to do, and generally speaking it helps me do harder puzzles than I was doing before. BTW, I’ve only found one good app so far that does this really well.

With AI it’s easier to see there are a lot of problems that I don’t know how to solve, but others do. The question is whether it’s wasteful to spend time independently solving that problem. Personally I think it’s good for me to do it, and bad for my employer (at least in the short term). But I can completely understand the desire for higher-ups to get rid of 90% of wheel re-invention, and I do think many programmers spend a lot of time doing exactly that; independently solving problems that have already been solved.

replies(1): >>docmar+UC1
◧◩◪◨⬒⬓
49. darkwa+Ni1[view] [source] [discussion] 2026-02-04 15:08:08
>>dimitr+f41
It's not strictly soul-crushing for me, but I definitely don't like to waste time in non-productive meetings where everyone bullshits everyone else. Do you like that? Do you find it a good use of your time and brain attention capacity?
◧◩◪
50. WorldM+Ov1[view] [source] [discussion] 2026-02-04 16:06:43
>>sgarla+561
Though it is not like management roles have ever appreciated the creative aspects of the job, including problem solving. Management has always wished to just describe the desired outcome and get magic back. They don't like acknowledging that problems and complications exist in the first place.

Management likes to think that they are the true creatives for company vision, and they don't like software developers finding solutions bottom-up. They prefer a single "architect" and maybe a single "designer" on the creative side, someone they like who is a "rising" political force (in either the Peter Principle or Gervais Principle sense), rather than dealing with a committee of creative people.

It's easier for them to pretend software developers are blue-collar cogs in the system rather than white-collar problem solvers with complex creative specialties. LLMs are only accelerating those mechanics and beliefs.
replies(1): >>docmar+8E1
◧◩◪
51. bregma+4w1[view] [source] [discussion] 2026-02-04 16:08:30
>>agumon+If
eshoring
52. lo_zam+qw1[view] [source] 2026-02-04 16:10:08
>>hellop+(OP)
Uniqueness is not the aim. Who cares if something is uniquely bad? But in any case, yes, if you use LLMs uncritically, as a substitute for reasoning, then you obviously aren't doing any reasoning and your brain will atrophy.

But it is also true that most programming is tedious and hardly enriching for the mind. In those cases, LLMs can be a benefit. When you have identified the pattern or principle behind a tedious change, an LLM can work like a junior assistant, allowing you to focus on the essentials. You still need to issue detailed and clear instructions, and you still need to verify the work.

Of course, the utility of LLMs is a signal that either the industry is bad at abstracting, or that there's some practical limit.

◧◩
53. Pantal+Qy1[view] [source] [discussion] 2026-02-04 16:20:10
>>dkdbej+Gr
I became an auto mechanic because I love machining heads, and dropping oil pans to inspect, and fitting crankshafts in just right, and checking fuel filters, and adjusting alternators.

If I wanted to work on electric power systems I would have become an electrician.

(The transition is happening.)

◧◩◪◨⬒⬓
54. docmar+VA1[view] [source] [discussion] 2026-02-04 16:28:05
>>dimitr+f41
I think there's a certain kind of irony in being asked to enjoy the rubbish I've been given to eat. It's still rubbish.
replies(1): >>dimitr+ih3
◧◩◪◨
55. docmar+UC1[view] [source] [discussion] 2026-02-04 16:37:09
>>dahart+Df1
You touch on an aspect of AI-driven development that I don't think enough people realize: choosing to use AI isn't all or nothing.

The hard problems should be solved with our own brains, and it behooves us to take that route so we can not only benefit from the learnings, but assemble something novel so the business can differentiate itself better in the market.

For all the other tedium, AI seems perfectly acceptable to use.

Where the sticking point comes in is when CEOs, product teams, or engineering leadership put too much pressure on using AI for "everything", in that all solutions to a problem should be AI-first, even if it isn't appropriate—because velocity is too often prioritized over innovation.

replies(1): >>hirvi7+Wy2
◧◩◪◨
56. docmar+8E1[view] [source] [discussion] 2026-02-04 16:42:04
>>WorldM+Ov1
Agreed. I hate to say it, but if anyone thought this train of thought in management was bad now, it's going to get much worse, and unfortunately burnout is going to sweep the industry as tech workers feel evermore underappreciated and invisible to their leaders.

And worse: with few opportunities to grow their skills from rigorous thinking as this blog post describes. Tech workers will be relegated to cleaning up after sloppy AI codebases.

replies(1): >>WorldM+gk2
◧◩◪
57. docmar+5F1[view] [source] [discussion] 2026-02-04 16:46:18
>>raw_an+qZ
Not sure why this is getting downvoted, but you're right — being able to crank out ideas on our own is the "killer app" of AI so to speak.

Granted, you would learn a lot more if you had pieced your ideas together manually, but it all depends on your own priorities. The difference is, you're not stuck cleaning up after someone else's bad AI code. That's the side to the AI coin that I think a lot of tech workers are struggling with, eventually leading to rampant burnout.

replies(1): >>raw_an+KI1
◧◩◪◨⬒⬓
58. tayo42+JF1[view] [source] [discussion] 2026-02-04 16:48:07
>>dimitr+f41
Just have better meetings

If we could I think we would be doing that...

replies(1): >>wheeli+Eb2
◧◩◪◨
59. raw_an+KI1[view] [source] [discussion] 2026-02-04 17:01:53
>>docmar+5F1
What would I learn that I don't already know? The exact syntax and properties of Terraform and boto3 for every single one of the 150+ services that AWS offers? How to modify a React-based front end written by another developer, even though I haven't done front-end development, and have actively stayed away from it, for well over a decade?

Will a company pay me more for knowing those details? Will I be more effectively able to architect and design solutions that customers pay my employer to contract me for? They pay me decently not because I "codez real gud". They pay me because I can go from an empty AWS account, an empty repo, and ambiguous customer requirements (after spending time talking to the customer) to a working solution: a full, well-thought-out architecture plus code, on time, on budget, and meeting requirements.

I am not bragging; I'm old, and those are table stakes for staying in this game for three decades.

◧◩◪◨
60. Der_Ei+AJ1[view] [source] [discussion] 2026-02-04 17:06:17
>>kabr+h9
Note that when I said "you have to hack it in", I meant you'll need to hack in support for modern LLM samplers like min_p, which enable setting temperature up to infinity (as min_p approaches 1) while maintaining coherence.
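A toy pure-Python sketch of that interaction (function and variable names are my own): filter by min_p against the top token's probability first, then apply temperature only to the survivors, so even extreme temperatures merely flatten the distribution within the kept set:

```python
import math
import random

def sample_min_p(logits, temperature=1.0, min_p=0.1, rng=random):
    # Probabilities at temperature 1, used only to pick the min_p cutoff.
    m = max(logits)
    weights = [math.exp(x - m) for x in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    cutoff = min_p * max(probs)
    keep = [i for i, p in enumerate(probs) if p >= cutoff]
    # Temperature applies only to the surviving tokens, so even a huge
    # temperature just makes the kept tokens equally likely.
    scaled = [math.exp((logits[i] - m) / temperature) for i in keep]
    z = sum(scaled)
    r = rng.random()
    acc = 0.0
    for i, w in zip(keep, scaled):
        acc += w / z
        if r <= acc:
            return i
    return keep[-1]
```

With min_p at 1.0 only the top token survives the cutoff, which is why temperature can go arbitrarily high without losing coherence. Real samplers differ in the exact ordering of the filter chain.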
◧◩◪
61. Camper+UN1[view] [source] [discussion] 2026-02-04 17:26:04
>>sgarla+561
> They’re destroying the only thing I like about my job - figuring problems out.

So, tackle other problems. You can now do things you couldn't even have contemplated before. You've been handed a near-godlike power, and all you can do is complain about it?

replies(1): >>wtetzn+8n2
◧◩◪
62. AgentO+DR1[view] [source] [discussion] 2026-02-04 17:42:08
>>agumon+If
NIH-shoring?
◧◩◪◨⬒⬓⬔
63. wheeli+Eb2[view] [source] [discussion] 2026-02-04 19:03:07
>>tayo42+JF1
It's going to come across as very naive and dumb, but I believe we can; people just aren't aware of the basics, or they simply aren't implementing them.

Harvard Business Review and probably hundreds of other online content providers provide some simple rules for meetings yet people don't even do these.

1. Have a purpose/objective for the meeting. I consider meetings to fall into one of three broad categories: information distribution, problem solving, or decision making. Knowing which one will allow the meeting to go a lot smoother, or even let it be moved to something like an email and be done with it.

2. Have an agenda for the meeting. Put the agenda in the meeting invite.

3. If there are any pieces of pre-reading or related material to be reviewed, attach it and call it out in the invite. (But it's very difficult to get people to spend the time preparing for a meeting.)

4. Take notes during the meeting and identify any action items and who will do them (preferably with an initial estimate). Review these action items and people responsible in the last couple of minutes of the meeting.

5. Send out the notes and action items.

Why aren't we doing these things? I don't know, but I think if everyone followed these for meetings of 3+ people, we'd probably see better meetings.

replies(1): >>tayo42+LS2
◧◩◪◨⬒
64. WorldM+gk2[view] [source] [discussion] 2026-02-04 19:47:10
>>docmar+8E1
I greatly agree with that deep cynicism and I too am a cynic. I've spent a lot of my career in the legacy code mines. I've spent a lot of my career trying to climb my way out of them, or at least find nicer, more lucrative mines. LLMs are the "gift" of legacy-code-as-a-service. They only magnify and amplify the worst parts of my career. The way the "activist shareholder" class likes to over-hype and believe in Generative AI magic today only implies things have more room to keep getting worse before they get better (if they ever get better again).

I'm trying my best to adapt to being a "centaur" in this world. (In Chess it has become statistically evident that Human and Bot players of Chess are generally "worse" than the hybrid "Centaur" players.) But even "centaurs" are going to be increasingly taken for granted by companies, and at least for me the sense is growing that as WOPR declared about tic-tac-toe (and thermo-nuclear warfare) "a curious game, the only way to win is not to play". I don't know how I'd bootstrap an entirely new career at this point in my life, but I keep feeling like I need to try to figure that out. I don't want to just be a janitor of other people's messes for the rest of my life.

65. seg_lo+2l2[view] [source] 2026-02-04 19:51:32
>>hellop+(OP)
I think harder while using agents, just not about the same things. Just because we all got superpowers doesn't make the problems go away; they just move, and we still have our full brains to solve them.

It isn't all great. Skills that feel important have already started atrophying, but other skills have been strengthened. The hardest part is pacing oneself, as well as figuring out how to start cracking certain problems.

◧◩
66. bitwiz+Il2[view] [source] [discussion] 2026-02-04 19:55:40
>>fallou+I3
YES!

AI assistance in programming is a service, not a tool. You are commissioning Anthropic, OpenAI, etc. to write the program for you.

replies(1): >>fallou+1t3
◧◩◪◨
67. wtetzn+8n2[view] [source] [discussion] 2026-02-04 20:02:09
>>Camper+UN1
> You can now do things you couldn't even have contemplated before. You've been handed a near-godlike power, and all you can do is complain about it?

This seems to be a common narrative, but TBH I don't really see it. Where is all the amazing output from this godlike power? It certainly doesn't seem like tech is suddenly improving at a faster pace. If anything, it seems to be regressing in a lot of cases.

replies(1): >>sgarla+of5
◧◩
68. wtetzn+go2[view] [source] [discussion] 2026-02-04 20:07:04
>>rixed+6Q
Yeah, it's why I don't like trying to write up a comprehensive design before coding in the first place. You don't know what you've gotten wrong until the rubber meets the road. I try to get a prototype/v1 of whatever I'm working on going as soon as possible, so I can root out those problems as early as possible. And of course, that's on top of the "you don't really know what you're building until you start building it" problem.
◧◩◪◨⬒
69. hirvi7+Wy2[view] [source] [discussion] 2026-02-04 20:52:13
>>docmar+UC1
> choosing to use AI isn't all or nothing.

That's how I have been using AI the entire time. I do not use Claude Code or Codex. I just use AI to ask questions instead of parsing the increasingly poor Google search results.

I just use the chat options in the web applications with manual copy/pasting back and forth if/when necessary. It's been wonderful because I feel quite productive, and I do not really have much of an AI dependency. I am still doing all of my work, but I can get a quicker answer to simple questions than parsing through a handful of outdated blogs and StackOverflow answers.

If I have learned one thing about programming computers in my career, it is that not all documentation (even official documentation) was created equally.

replies(2): >>sgarla+Cf5 >>docmar+3S6
◧◩◪◨⬒
70. Terr_+qM2[view] [source] [discussion] 2026-02-04 21:59:15
>>balama+XA
AFAIK there's no algorithmic reason against it, but services might not expose the controls in a convenient way, or at all.
◧◩◪◨⬒⬓⬔⧯
71. tayo42+LS2[view] [source] [discussion] 2026-02-04 22:31:11
>>wheeli+Eb2
Probably, like most business issues, it's a people problem. They have to care in the first place, and I don't know if you can make people who don't care start caring.

I agree the info is out there about how to run effective meetings.

replies(1): >>dimitr+Ah3
◧◩◪◨⬒⬓⬔
72. dimitr+ih3[view] [source] [discussion] 2026-02-05 01:14:10
>>docmar+VA1
You sit at a desk.

You get paid in the top 1% globally

You have benefits

Some hope or dreams for what to do with your future, life after work, retirement.

You get to work with other people, overseas.

Talk to those contractors sometimes. They are under tremendous pressure. They are mistreated. One wrong move, they're gone. They undergo tremendous prejudices, and soft racism everyday especially by us FTEs.

You find out that they struggle with the drudgery as well, looking for solutions, better understanding, etc.

We all feel disposable by our corporate masters, but they feel it even more so.

Be the change you want to see in the world.

replies(1): >>docmar+kS6
◧◩◪◨⬒⬓⬔⧯▣
73. dimitr+Ah3[view] [source] [discussion] 2026-02-05 01:17:29
>>tayo42+LS2
Bingo -- 95% of work is people problems.

The coding is the easy part.

With LLMs and advanced models, even more so.

◧◩◪
74. fallou+Os3[view] [source] [discussion] 2026-02-05 02:49:41
>>agumon+If
If the on-premise offshoring centers around the use of LLMs then I suggest the term "off-braining." :)
◧◩◪
75. fallou+1t3[view] [source] [discussion] 2026-02-05 02:51:50
>>bitwiz+Il2
Yes, but as with outsourcing those who are making such decisions often lack the awareness, or even skills, to properly specify the requirements and be able to evaluate the results.
◧◩◪◨
76. agumon+2n4[view] [source] [discussion] 2026-02-05 11:30:57
>>aleph_+ki
Ha, my inexperience is showing :)
◧◩◪◨⬒
77. kimixa+fA4[view] [source] [discussion] 2026-02-05 13:21:16
>>throwt+J01
But that doesn't actually work for my use cases. Plenty of other people have already told me "I'm Holding It Wrong" without suggestions that actually work, so I've started ignoring them. At this stage I just assume many people work in very different sectors, and some see the "great benefits" often proselytized on the internet while other areas don't. Systems programming, where I work, seems to be a poor fit: possibly due to a relative lack of content in the training corpus, perhaps because company-internal styles and APIs mean that simply describing them takes up a huge amount of the context, leaving little room for further corrections or details, or some other failure mode.

We have lots of documentation. Arguably too much: the relevant documentation alone quickly fills much of the Claude Opus context window, and even then it repeatedly outputs things directly counter to the documentation it just ingested.

◧◩◪◨⬒
78. sgarla+of5[view] [source] [discussion] 2026-02-05 17:11:34
>>wtetzn+8n2
Agreed. Mostly, what has occurred is I now have to field a lot more PRs that weren't well thought out (let's be honest, there was no thought), and - my personal favorite - argue with an AI via a human, where they post an AI's opinion on what I said as a reply. I've started just asking them to post links to documentation validating their claims.
◧◩◪◨⬒⬓
79. sgarla+Cf5[view] [source] [discussion] 2026-02-05 17:13:04
>>hirvi7+Wy2
Same! I don't mind copy/pasting a code snippet or asking a question, and I also always ask it to show its sources for anything non-obvious. That alone cuts down on a lot of bullshit.
◧◩◪◨⬒⬓
80. docmar+3S6[view] [source] [discussion] 2026-02-06 02:14:19
>>hirvi7+Wy2
It's funny you say this, because this is now considered the "old" way to use LLMs since agents can write code so well, but I don't think enough people realize how much more efficient your preferred approach is compared to the era before LLMs were widely available at all.

Gone are the days of hopeless Googling where 20 minutes of research becomes 3 hours with the possibility of having zero solutions. The sheer efficiency of having reliable, immediate answers is a tremendous improvement, even if you're choosing to write everything by hand using it as a reference.

◧◩◪◨⬒⬓⬔⧯
81. docmar+kS6[view] [source] [discussion] 2026-02-06 02:16:39
>>dimitr+ih3
> Be the change you want to see in the world.

Gladly! I think what I would choose is building on-shore teams exclusively. That's the change I'd like to see more of, while overseas teams build their own economies instead of ripping away jobs from domestic citizens in an already difficult job market.

replies(1): >>bdangu+xU6
◧◩◪◨⬒⬓⬔⧯▣
82. bdangu+xU6[view] [source] [discussion] 2026-02-06 02:32:41
>>docmar+kS6
almost feels like this could be a good political slogan for a campaign… like “america first” or something like that… oh wait… :)