zlacker

[return to "OpenAI reaches agreement to buy Windsurf for $3B"]
1. librar+av2[view] [source] 2025-05-06 22:54:17
>>swyx+(OP)
But is there a secret sauce in any of the coding agents (Copilot Agent, Windsurf, Claude Code, Cursor, Cline, Aider, etc.)? Sure, some have better user experience than others, but what, if anything, makes one "better at coding" than another?

As this great blog post lays bare ("The Emperor Has No Clothes", https://ampcode.com/how-to-build-an-agent), the core tech of a coding agent isn't anything magic: it's a set of prompts plus a main loop that calls the LLM and executes whatever tool calls the LLM asks for. The tools are pretty standard: search, read a file, edit a file, execute a bash command, and so on. Really, all the power, complexity, and "coding ability" is in the LLM itself. Sure, it's a lot of work to make something polished that devs want to use, but is there any more to it than that?
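
To make that concrete, here's roughly what that loop looks like. This is a minimal sketch using the Anthropic Python SDK (any tool-calling LLM API works the same way); the tool set, schemas, and model string are illustrative, not what any of these products actually ship:

    # Minimal coding-agent loop: call the LLM, run the tools it asks for, repeat.
    import subprocess
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    TOOLS = [
        {"name": "read_file",
         "description": "Return the contents of a file at a relative path.",
         "input_schema": {"type": "object",
                          "properties": {"path": {"type": "string"}},
                          "required": ["path"]}},
        {"name": "run_bash",
         "description": "Run a shell command and return its output.",
         "input_schema": {"type": "object",
                          "properties": {"command": {"type": "string"}},
                          "required": ["command"]}},
    ]

    def execute_tool(name: str, args: dict) -> str:
        # The "tools" are ordinary local functions; the LLM only chooses which to call.
        if name == "read_file":
            with open(args["path"]) as f:
                return f.read()
        if name == "run_bash":
            out = subprocess.run(args["command"], shell=True,
                                 capture_output=True, text=True)
            return out.stdout + out.stderr
        return f"unknown tool: {name}"

    def agent(task: str) -> str:
        messages = [{"role": "user", "content": task}]
        while True:  # the main loop
            resp = client.messages.create(
                model="claude-3-5-sonnet-latest",  # illustrative model alias
                max_tokens=4096,
                tools=TOOLS,
                messages=messages,
            )
            messages.append({"role": "assistant", "content": resp.content})
            if resp.stop_reason != "tool_use":
                # No more tool calls: the model's text answer is the result.
                return "".join(b.text for b in resp.content if b.type == "text")
            results = []
            for block in resp.content:
                if block.type == "tool_use":
                    results.append({"type": "tool_result",
                                    "tool_use_id": block.id,
                                    "content": execute_tool(block.name, block.input)})
            messages.append({"role": "user", "content": results})

    if __name__ == "__main__":
        print(agent("List the Python files here and summarize what each does."))

That's basically the whole trick; everything else is prompt wording, a bigger tool set, and UI polish.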

So what is the differentiator here, other than user experience (for which I prefer the CLI tools, but to each their own)? $3B is a lot for something that sure doesn't seem to have any secret sauce tech or moat that I can see.

2. hello_+aK2[view] [source] 2025-05-07 01:40:24
>>librar+av2
The moat is Windsurf’s custom LLM and the ops around it (training pipelines, fine-tuning, infra).

Codeium (Windsurf’s parent) started as a GPU optimization company, so they have deep expertise there. Unlike most agents, which just wrap OpenAI/Claude/etc., Windsurf powers its code edits with its own model rather than external API calls.

That’s where the defensibility is: better in-house models + efficient infra = a stronger long-term moat.

3. rhubar+LP4[view] [source] 2025-05-07 19:03:51
>>hello_+aK2
I suspect it’s also about handling large code bases: building out a prompt that is maximally useful via more conventional processing before it’s passed to the LLM.
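
Something along these lines, I mean. This is a toy sketch of that kind of pre-processing (plain lexical scoring and a made-up character budget; real products presumably use embeddings, AST parsing, and smarter chunking):

    # Rank repo files against the user's request and pack the best ones
    # into a fixed prompt budget before the LLM ever sees anything.
    import os
    import re

    def score(query: str, text: str) -> int:
        # Crude relevance: how many distinct query terms appear in the file.
        terms = set(re.findall(r"\w+", query.lower()))
        words = set(re.findall(r"\w+", text.lower()))
        return len(terms & words)

    def build_prompt(query: str, repo_root: str, budget_chars: int = 24_000) -> str:
        candidates = []
        for dirpath, _, filenames in os.walk(repo_root):
            for name in filenames:
                if not name.endswith((".py", ".md", ".toml")):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    text = open(path, encoding="utf-8").read()
                except (UnicodeDecodeError, OSError):
                    continue
                candidates.append((score(query, text), path, text))

        parts = [f"User request: {query}\n\nRelevant files:\n"]
        used = len(parts[0])
        # Greedily pack the highest-scoring files until the budget runs out.
        for s, path, text in sorted(candidates, reverse=True):
            if s == 0:
                break
            snippet = f"\n--- {path} ---\n{text[:4000]}\n"
            if used + len(snippet) > budget_chars:
                break
            parts.append(snippet)
            used += len(snippet)
        return "".join(parts)

    if __name__ == "__main__":
        print(build_prompt("where is the retry logic for HTTP requests?", "."))

Doing that selection well on a multi-million-line repo is genuinely hard, and it happens entirely outside the LLM.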