zlacker

[return to "Brother have gotten to where they are now by not innovating"]
1. cookie+b3[view] [source] 2023-11-27 08:14:30
>>anothe+(OP)
One could argue that Brother printers adhere to the UNIX philosophy: solve one problem only, and solve it well.

In the end it somewhat boils down to pure greed. Instead of stabilizing production costs and/or reusing generic components to ease manufacturing and repair, HP, Epson, Canon, Dell, Samsung, Kyocera and others try to hype their products with whatever tech stack is currently in trend. "Growth hacking" is literally their job description.

There eventually will be a ChatGPT printer on the market. It's inevitable due to what kind of people manage a printer business: It's not the type of people that know how to build printers anymore.

◧◩
2. vidarh+q5[view] [source] 2023-11-27 08:28:58
>>cookie+b3
"As a language model I don't think this is the tone you should take in a letter to your printer manufacturer. Instead of the long string of expletives, here is a suggested letter of praise for your printers reliability, and an order for more toner instead:

..."

◧◩◪
3. layer8+Da[view] [source] 2023-11-27 09:05:33
>>vidarh+q5
It will just silently "auto-correct" the letter instead.
◧◩◪◨
4. jll29+nb[view] [source] 2023-11-27 09:10:55
>>layer8+Da
You don't need deep learning for that... https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
◧◩◪◨⬒
5. vidarh+Oj[view] [source] 2023-11-27 10:11:32
>>jll29+nb
But that was at least a bug, and a somewhat understandable one (though still stupid). I did my MSc on statistical methods to reduce error rates in OCR, and one of the methods that actually worked very well was various nearest-neighbour variations over small windows of the pixel data. As part of that I did a literature review, of course, and there has been quite a lot of work on algorithms for cleaning up images by replacing patches of pixels with presumed "clean" samples (sometimes from a known font, but more often by applying various clustering methods to patches from the image itself). Get that wrong and you'd very easily end up with something like this.

My own methods would also have easily produced this kind of error if you set the threshold for what to consider identical when clustering high enough. For OCR the risk is somewhat mitigated by people not trusting the output to be error-free, so it can be an acceptable tradeoff if it reduces the overall error rate. But if you're outputting the raw pixel data and letting people think it's an unmanipulated image, you're begging for trouble.
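The failure mode described above can be sketched in a few lines. This is a hypothetical toy, not the actual thesis code or the JBIG2 algorithm: it greedily clusters binary pixel patches by Hamming distance and replaces each patch with its cluster representative, so a too-loose threshold silently swaps one glyph for another.

```python
def hamming(a, b):
    """Number of differing pixels between two equal-size binary patches."""
    return sum(x != y for x, y in zip(a, b))

def cluster_and_substitute(patches, threshold):
    """Greedy clustering: each patch is replaced by the representative of
    the first existing cluster within `threshold` differing pixels, or it
    starts a new cluster. Returns the 'cleaned' patch list."""
    reps = []       # one representative patch per cluster
    cleaned = []
    for p in patches:
        for r in reps:
            if hamming(p, r) <= threshold:
                cleaned.append(r)   # substitute the representative
                break
        else:
            reps.append(p)
            cleaned.append(p)
    return cleaned

# Two 3x3 glyphs that differ by a single pixel, flattened row-major
# (stand-ins for visually similar characters like '6' and '8'):
glyph_six   = (1,1,1, 1,0,0, 1,1,1)
glyph_eight = (1,1,1, 1,0,1, 1,1,1)

page = [glyph_six, glyph_eight, glyph_six]

safe  = cluster_and_substitute(page, threshold=0)
risky = cluster_and_substitute(page, threshold=2)
# With threshold=0 both glyphs survive; with threshold=2 the second patch
# is merged into the first cluster and every patch comes back as the
# "six" glyph -- a silent character substitution, as in the Xerox bug.
```

The point of the toy: the substitution is invisible in the output, because every emitted patch is a perfectly crisp, plausible glyph.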

[go to top]