The government being this sloppy about getting accents right is surprising; I would expect them to value accuracy and a clean paper trail when handling names.
That tells me you're German; I didn't even need to see the ä and ß.
Even in the UK I encounter websites that won't accept my Norwegian address because it begins with Å. English-speaking countries are generally pretty bad at this sort of thing.
Ü isn't even a special character or UTF-8 - ü is part of ASCII. How does this even fail? Is their database a 7-bit database?
My understanding is that they are still phonetically entirely equivalent. How does it feel to have to substitute them into your name? (Or do you have a different recourse?)
That is not true. Type “man ascii” on macOS or Linux to see everything that is part of ASCII.
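If you'd rather check than take my word for it, here's a quick Python sketch (just my own illustration, assuming a Python 3 interpreter is handy) showing that ü falls outside the 7-bit ASCII range:

    c = "ü"
    print(ord(c))             # 252 -- above 127, so outside 7-bit ASCII
    print(c.encode("utf-8"))  # b'\xc3\xbc' -- two bytes in UTF-8
    try:
        c.encode("ascii")
    except UnicodeEncodeError as err:
        print(err)            # 'ascii' codec can't encode character '\xfc' ...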
My Spanish girlfriend has an ñ in her last name, as does our son. To the people here in Norway, I just tell them to put a plain n when typing the last name. It’s easier to just go with that than to try to get people to understand how to type ñ on the keyboard (even though our computers can do it), and it avoids extra back and forth with people whose systems don’t handle it.
Likewise, when I’m in Spain I don’t bother to say that my last name has ø in it. I don’t even bother to rewrite the o in my last name as oe. I just put it as o.
The only situation where I put it as oe is indirectly, when an airline converts ø to oe on my ticket, or when the airline system doesn’t handle ø and I enter it as oe myself when making the booking. To me, my name looks worse with oe in it, and it seems harder for people to pronounce when written with oe than with a plain o.
But I wouldn’t bother memorising that and every other possible way that the other person has to press the keys depending on their keyboard layout and operating system. I’d just tell people to put u instead.
ASCII is 7 bits. What people think of as 8-bit ASCII is actually code page 437, the alternate character set added to the PC BIOS in the original IBM PC. Like UTF-8, it uses the most significant bit of each byte: if it's 0, the byte is a plain ASCII character; if it's 1, it's one of the extended 437 characters, which include ü. https://en.wikipedia.org/wiki/Code_page_437
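To make that concrete, here's a rough Python illustration (my own sketch, nothing authoritative) of how the same ü is encoded in CP437, Latin-1, and UTF-8; in all three, the high bit is what marks the byte(s) as non-ASCII:

    for codec in ("cp437", "latin-1", "utf-8"):
        data = "ü".encode(codec)
        bits = " ".join(f"{b:08b}" for b in data)
        print(f"{codec:8} {data!r:12} {bits}")
    # cp437    b'\x81'       10000001           -> high bit set: extended CP437 char
    # latin-1  b'\xfc'       11111100           -> high bit set: not 7-bit ASCII
    # utf-8    b'\xc3\xbc'   11000011 10111100  -> multi-byte UTF-8 sequence
    # Plain ASCII like 'u' (01110101) always has the high bit clear.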
If you try spelling your name over the phone to an American government employee, the vast majority would have no idea what an eszett is or how to enter it. Even if you wrote ß on a form, most wouldn't be able to enter it. Nor would most know how to pronounce it.
Even for accented letters like ä, which at least have a base form someone might recognize, the sheer number of accent marks used across languages, the difficulty of reading someone's handwriting, and general unfamiliarity with foreign names are just asking for some clerk to enter them wrong.
And that's just names with Latin letters. It becomes infinitely worse once you start including all the other characters in the world's languages.
Instead, US government databases usually store first and last names transliterated into uppercase non-accented letters and match against the transliterated name. Middle names are often only for display purposes. If you're lucky, there will be display versions of the first and last names as well, where you might sometimes be able to stick in an accented character.
This isn't really limited to the US either. If you look at any passport, you'll notice the machine-readable section does the exact same thing, so on German passports ß becomes SS and Ä becomes AE.
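For the curious, here's roughly what that fold-to-uppercase-ASCII step could look like in Python; the specific mappings below are assumptions pulled from the examples in this thread (ß -> SS, Ä -> AE, airline-style ø -> OE), not any agency's actual rules:

    import unicodedata

    # Assumed transliteration table, based on the conventions discussed above.
    SPECIAL = {"ß": "SS", "ä": "AE", "ö": "OE", "ü": "UE",
               "Ä": "AE", "Ö": "OE", "Ü": "UE", "ø": "OE", "Ø": "OE"}

    def fold_name(name: str) -> str:
        out = []
        for ch in name:
            if ch in SPECIAL:
                out.append(SPECIAL[ch])
                continue
            # Decompose (e.g. é -> e + combining accent) and drop the marks.
            for part in unicodedata.normalize("NFD", ch):
                if not unicodedata.combining(part):
                    out.append(part)
        return "".join(out).upper()

    print(fold_name("Müßig"))     # MUESSIG
    print(fold_name("Sørensen"))  # SOERENSEN
    print(fold_name("Muñoz"))     # MUNOZ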
It does not directly bother me, but it can lead to downstream inconveniences. Public services (in Germany), in my experience, don't like mismatches in identifiers, especially inconsistent ones. If the substitution is required, it can sometimes take more than one application (with a short explanation of why the mismatch is there).
As another example, if ae is substituted for ä in a shipping address, then automatic package tracking by DHL via my customer account breaks (since the address is no longer identical).
Imagine you are an American designing a system. What about non-Latin alphabets? Yeah, those should probably be converted; nobody's going to bother with them. What about Hungarians - should we care about their O / Ó / Ö / Ő and U / Ú / Ü / Ű? And Icelanders - should we allow their Ð / Þ?
I understand that seeing your name misspelled hurts, but pretending ASCII is enough for everyone is an understandable simplification.
There was a case of some German bank treating ü as "ue", its typical ASCII transliteration. A customer complained under GDPR and won.
By the way, the accents can often be used to force the right pronunciation of a foreign name on native speakers (at least in US, where Spanish names are so widespread). So e.g. use "á" if you want it to be pronounced [a] etc.
And yes absolutely we should bring Ð / Þ back for English use and drop those ridiculous digraphs.
But that’s not really the point. No matter how many keyboard shortcuts the clerk at the DMV memorizes, there is always going to be some text that they just cannot reproduce accurately. Whether it’s an accented character from the exotic land of Spain or some real Zalgo, something is going to get lost. No individual human can correctly deal with all possible textual forms.
My point was that someone who can type it will often have it rejected by a website. I was using a hotel booking site, and when I booked a room it asked for my address, so I typed Å... The web page rendered it correctly, but when I hit the button to complete the transaction, it told me that my address contained an illegal character (or some similar wording). And this site handles bookings for hotels that themselves have names with umlauts, tildes, cedillas, etc.
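If I had to guess, the culprit is something like an ASCII-only whitelist on the server. Here's a hypothetical Python version of that kind of check (not the site's actual code, obviously):

    import re

    # Hypothetical address validator: only ASCII letters, digits and basic punctuation.
    ADDRESS_OK = re.compile(r"^[A-Za-z0-9 ,.\-'/]+$")

    for addr in ("12 High Street", "Åsveien 3"):
        print(addr, "->", "ok" if ADDRESS_OK.match(addr) else "illegal character")
    # 12 High Street -> ok
    # Åsveien 3 -> illegal character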