https://github.com/Svalorzen/AI-Toolbox
Each algorithm is extensively commented, self-contained (aside from general utilities), and the interfaces are as similar as I could make them. One of my goals is specifically to help people start with simple algorithms they can inspect and understand, before moving on to more powerful but less transparent ones.
I'd be happy to receive feedback on accessibility, presentation, or docs, suggestions for more algorithms you'd like to see implemented, or even general questions on how things work.
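To make "simple enough to inspect" concrete, here is a rough sketch of tabular value iteration in plain Python, one of the classic algorithms the library covers. To be clear, this is not the library's C++ interface, and the tiny two-state MDP and its numbers are invented purely for illustration:

    # Tabular value iteration on a toy 2-state, 2-action MDP (all numbers invented).
    import numpy as np

    gamma = 0.9
    # P[s, a, s'] = transition probability, R[s, a] = expected immediate reward.
    P = np.array([[[0.8, 0.2], [0.1, 0.9]],
                  [[0.5, 0.5], [0.0, 1.0]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])

    V = np.zeros(P.shape[0])
    for _ in range(1000):
        # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    print("V* ~=", V_new, " greedy policy:", Q.argmax(axis=1))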
That comment indeed looks a lot like it was generated. It correlated a bunch of words, but it did not understand that the link between UI and AI is tenuous. It is probably one of the few comments where this is so glaringly obvious; there are likely many more generated comments around that have gone unnoticed.
This comment is not generated, as the links below are dated after the GPT-3 dataset was scraped.
[0] https://news.ycombinator.com/submitted?id=adolos
[1] https://adolos.substack.com/p/what-i-would-do-with-gpt-3-if-...
For those unaware, the University of Alberta is Rich Sutton's home institution, and he approves of and promotes the course.
I recommend the fastai course on deep learning. Several of their lectures cover work their students have done in biotech and medicine. The main lecturer, Jeremy Howard, has worked for years at the crossroads of medical technology and AI, and routinely discusses this.
The full fastai course is free and available here[1]. As an example of fastai incorporating biotech into their work, here is a blog post and associated video[2] in which they use AI to upsample the resolution and quality of microscope images.
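The course is very hands-on; the workflow it teaches looks roughly like the quick-start sketch below. This is from memory, names follow recent fastai versions, and it uses an illustrative classification dataset rather than the microscopy super-resolution example from the blog post:

    # Rough fastai-style workflow: load data, grab a pretrained model, fine-tune it.
    from fastai.vision.all import *

    path = untar_data(URLs.PETS) / "images"

    def is_cat(fname):
        # In this dataset, filenames starting with an uppercase letter are cats.
        return fname[0].isupper()

    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224))

    learn = vision_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)  # transfer learning from an ImageNet-pretrained backbone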
http://rail.eecs.berkeley.edu/deeprlcourse/
For a more mathematical treatment, there's a beautiful book by Puterman:
https://www.amazon.com/Markov-Decision-Processes-Stochastic-...
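For a one-line preview of what the book formalizes: the central object is the discounted Bellman optimality equation (standard MDP notation, stated here from memory rather than quoted from the book):

    V^*(s) = \max_{a \in A} \Big[ R(s,a) + \gamma \sum_{s' \in S} P(s' \mid s,a) \, V^*(s') \Big]

Puterman works out when this equation has a unique solution and why policies greedy with respect to V^* are optimal.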
In a way, the real story is that people are so eager to believe it that it didn't matter that it was untrue. Like Voltaire's God, if it didn't exist it would have been necessary to invent it.
Obviously there's an infatuation right now with GPT-3. That's normal. If people keep posting these without disclosing them, I imagine there will be two consequences. One (good for HN) is readers scrutinizing comments more closely and raising the bar for what counts as a high-quality comment. The other (bad for HN) is readers accusing each other of posting generated comments.
Accusing other commenters of being bots is not a new phenomenon ("this sounds like it was written by a Markov chain" has long been an internet swipe), but if it gets bigger, we might have to figure something out. But first we should wait for the initial wave of novelty to die down.
https://www.reddit.com/r/datascience/comments/iav3lv/how_oft...
Plug/source: I did a lit review on this topic: https://doi.org/10.3233/DS-200028