It's been a fun exercise in software architecture, because I actually care about this. But we keep pushing this annual survey back another year, since we never seem to be ready to actually implement it (due to other priorities).
The thing is, as soon as you allow free-text entry, the anonymity exercise becomes moot, assuming you have a solid corpus of the respondents' emails to train a model on - basically the same stylometry approach Wikipedia activists used two decades ago to identify "sockpuppet" accounts.
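To make that concrete, here's a toy version of the kind of stylometry attack I mean (scikit-learn; the corpus, names, and comment are all invented for illustration):

```python
# Toy stylometry sketch: attribute "anonymous" text to a known author.
# The training corpus and author names here are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled corpus: (text, author) pairs, e.g. scraped from sent mail.
emails = [
    "Per my last email, the deadline was Friday.",
    "lol yeah that build is busted again",
    "Kindly revert at your earliest convenience.",
]
authors = ["alice", "bob", "carol"]

# Character n-grams capture punctuation habits, contractions, and typos --
# features that survive even when people try to write "neutrally".
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(emails, authors)

# An "anonymous" free-text survey comment:
comment = ["Kindly note the helpdesk never reverts to tickets."]
print(model.predict(comment))        # best-guess author
print(model.predict_proba(comment))  # and how confident it is
```

With a real corpus you'd need far more text per author, but the point stands: free text leaks identity.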
Over the course of 4 years I think it was only used 3 times. Most people assumed it was some kind of trap. It wasn't; I genuinely wanted honest feedback and thought some people were too shy to speak up in a group setting, so I wanted to give them options.
Management can 'drill down' to get information on how specific teams responded.
One of the things they mentioned doing is using a statistical (differential privacy?) model to limit the depth of the drill-down, to prevent any specific person's responses from being revealed unless they were aggregated with a substantial number of other responses.
Surprisingly difficult when you consider, e.g., a team lead reading a statement like "of the 10 people on your team, one is highly dissatisfied with management" - they have personal knowledge of the situation and are going to know exactly which person it is.
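Roughly what I understood the approach to be, sketched in Python (the threshold and epsilon are invented numbers, not anything from the real system):

```python
import math
import random

K_MIN = 20      # invented: never drill down into cohorts smaller than this
EPSILON = 1.0   # invented DP budget: smaller = noisier = more private

def laplace_noise(scale):
    # Inverse-CDF sample from Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def report(responses):
    # Suppress small cohorts outright, then add Laplace noise to any
    # count that is released (a counting query has sensitivity 1, so
    # the noise scale is 1/epsilon).
    if len(responses) < K_MIN:
        return None  # the 10-person team above never gets a number
    dissatisfied = sum(1 for r in responses if r == "dissatisfied")
    return max(0, round(dissatisfied + laplace_noise(1.0 / EPSILON)))

team = ["satisfied"] * 9 + ["dissatisfied"]
print(report(team))  # -> None: cohort too small to report on
```

The noise matters because suppression alone can be defeated by differencing two overlapping cohorts that are each above the threshold.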
They later decided to adopt it for an annual IT satisfaction survey that they sent out to users. In an ideal world we wouldn't have participated, since the respondents were grading my team's performance, but we got invites because we were on the Exchange distro the message was sent to. I quickly discovered that the dev team had left a bunch of default routes enabled, so we were able to view a list of all responses and see who submitted which. We knew our customers well enough that we could reliably attribute most of the negative responses via the free-text comments field anyhow, but the fact that anybody could explicitly see everybody else's response wasn't great.
I suppose the NTLM-authenticated username in the server logs would convey the same info but at least that'd require CIFS/RDP access to the web server...
In most of the places I've worked, I would have assumed the same.
The thing is that there is no real technological solution that would instill trust in someone who doesn't already have it. In the end, all such privacy solutions necessarily boil down to "trust us", because it's not practical or reasonable to perform the kind of deep analysis that would be required to confirm the privacy claims.
You may have provided the source, for instance, but that doesn't give any reassurance that the binary actually executing was compiled from that source.
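The check itself would be trivial (paths here are hypothetical); the catch is that it proves nothing unless the build is bit-for-bit reproducible, which most toolchains don't give you for free:

```python
import hashlib

def sha256_of(path):
    # Stream the file so large binaries don't blow up memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: a binary you rebuilt from the published source
# vs. the one actually deployed. Equal hashes mean something only if
# the build is deterministic -- timestamps, embedded paths, and build
# parallelism usually break that unless the toolchain controls for it.
local = sha256_of("./build/survey-server")
deployed = sha256_of("/srv/app/survey-server")
print("match" if local == deployed else "MISMATCH")
```

And even then you're trusting whoever handed you the "deployed" binary, so it still bottoms out at "trust us".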