zlacker

[return to "My AI skeptic friends are all nuts"]
1. TheRoq+fb 2025-06-02 22:17:50
>>tablet+(OP)
One of the biggest anti-LLM arguments for me at the moment is security. In case you don't know, if you open a file containing secrets with Copilot or Cursor active, it might be sent to a server and thus get leaked. The companies say that if a file is matched by a `.cursorignore` file it won't be indexed, but it's still a critical security issue IMO. We all know what happened with the "smart home assistants" like Alexa.

Sure, there might be a way to change your workflow and never open a secret file in those editors, but my point is that software that sends your data without your consent, and without giving you the tools to audit it, is a no-go for many companies, including mine.
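(For reference, `.cursorignore` uses gitignore-style patterns; a minimal sketch, assuming that syntax, and with the caveat above that exclusion is a promise you can't independently audit:)

```
# .cursorignore — gitignore-style patterns (sketch; check Cursor's docs for exact semantics)
.env
.env.*
*.pem
secrets/
```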

2. knallf+YL2 2025-06-03 19:31:09
>>TheRoq+fb
It's pretty unlikely someone at Cursor cares about accessing your Spring Boot project on GitHub through your personal access token — they already have all your code.
3. tjhorn+U43 2025-06-03 21:24:05
>>knallf+YL2
I don't think that's the threat model here. The concern is regarding potentially sensitive information being sent to a third-party system without being able to audit which information is actually sent or what is done with it.

So, for example, if your local `.env` is inadvertently sent to Cursor and persisted on their end (which you can't verify one way or the other), an attacker who compromises Cursor's infrastructure could walk away with your secrets.
