Who is responsible for OpenAI's UI/UX design? It is immaculate and should be the standard for the community. I'm always dazzled by OpenAI's impeccable standards with regard to tone, presentation, and accessibility.
The documentation manages to be both familiar and distinct, an impressive achievement!
I have my own personal qualms about OpenAI's ethics and virtues, but am nevertheless impressed by their aesthetics and their regard for their public image. It's always delightful to look at their work.
OpenAI has, in my opinion, the most appropriate presentation of its ideas through marketing and branding. It feels exquisitely simple to grasp what goes on here.
I feel comfortable saying that the biggest obstacle to progress in AI is UI, but projects such as this give me hope.
Whether odomojuli's post was written by a human, robot or dog is rather immaterial. It's the content of the post that makes it good or bad. It can be evaluated without knowing the author.
https://github.com/Svalorzen/AI-Toolbox
Each algorithm is extensively commented, self-contained (aside from general utilities), and the interfaces are as similar as I could make them. One of my goals is specifically to help people try out simple algorithms so they can inspect and understand what is happening, before trying out more powerful but less transparent algorithms.
I'd be happy to receive feedback on accessibility, presentation, docs or even more algorithms that you'd like to see implemented (or even general questions on how things work).
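To give a flavor of the kind of simple, inspectable algorithm this aims at, here is a minimal value iteration sketch. Note this is illustrative Python, not AI-Toolbox's C++ interface, and the toy MDP at the bottom is made up:

    import numpy as np

    def value_iteration(T, R, gamma=0.9, tol=1e-6):
        """T[s, a, s'] are transition probabilities, R[s, a] expected rewards."""
        S, A = R.shape
        V = np.zeros(S)
        while True:
            # Q[s, a] = R[s, a] + gamma * sum_{s'} T[s, a, s'] * V[s']
            Q = R + gamma * (T @ V)
            V_new = Q.max(axis=1)
            if np.abs(V_new - V).max() < tol:
                return V_new, Q.argmax(axis=1)  # optimal values, greedy policy
            V = V_new

    # A made-up two-state, two-action MDP, just to show the interface.
    T = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.1, 0.9]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    V, policy = value_iteration(T, R)

The whole algorithm fits in a dozen lines you can step through by hand, which is exactly the property that makes it a good starting point before moving to less transparent methods.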
That comment indeed looks a lot like it was generated. It has correlated a bunch of words, but it did not understand that the link between UI and AI is tenuous. It is probably one of the few comments where this is so glaringly obvious. There are likely many more generated comments around that have gone unnoticed.
This comment is not generated, as the links below are dated after the GPT-3 dataset was scraped.
[0] https://news.ycombinator.com/submitted?id=adolos
[1] https://adolos.substack.com/p/what-i-would-do-with-gpt-3-if-...
For those unaware, the University of Alberta is Rich Sutton's home institution, and he approves of and promotes the course.
Web of trust is inevitable either way. I just hope it won't be owned by any huge company. Most likely it will be practically shared by a few of them, like the Internet or web standards.
After playing with GPT-3 a bit, though, I noticed that I work a lot like it. Unrelated comments about web of trust are a great example.
Also, dude, I see you in almost every comment thread. How do you consume HN? Some clever scripts, or lots of tabs and lots of refreshing?
It might be confirmation bias (or maybe we just like the same things); if you look at my history, I only leave a few comments a day.
Would it be possible to use GPT-3 to "beautify" existing prose without changing its meaning? Now that would be useful!
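Something like this could plausibly be tried with the completion API that shipped alongside GPT-3. The prompt wording, engine choice, and sampling parameters below are my guesses, not anything OpenAI recommends:

    import os
    import openai  # pip install openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def beautify(text):
        # Instruction-style prompt; the wording is a guess, and the output
        # would still need human review to confirm the meaning is unchanged.
        prompt = ("Rewrite the following passage in clearer, more elegant "
                  "prose without changing its meaning.\n\nOriginal:\n" + text +
                  "\n\nRewritten:\n")
        resp = openai.Completion.create(
            engine="davinci",   # base GPT-3 engine
            prompt=prompt,
            max_tokens=256,
            temperature=0.3,    # conservative sampling so the meaning drifts less
        )
        return resp.choices[0].text.strip()

The hard part is the "without changing its meaning" guarantee: nothing in the model enforces it, so you would still want a human (or at least a second check) in the loop.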
I know that GPT-3 is impressive, but I'm not as convinced as some of the other commenters here that it's a generated comment. If a similar comment were posted on a non-AI-related post, nobody would bat an eye.
Is there an equivalent of Hanlon's razor for AI? "Never attribute to a text-generation AI that which can be adequately explained by slightly nonsensical speech."
Not me. Not without total attribution, so I know it's a bot and whose. There is no AI-generated text without an agenda: implicit or explicit, benign or sinister.
Advancing the conversation is one thing, but when you can't tell the difference between a Russian GRU bot let loose to promote trigger words and some 4chan teenager who had a bad day at school, then the online forum as a mode of expression is dead.
A 4channer is entitled to their opinion however wrong. A bot acting like it's human needs to be hunted, killed, and erased.
Definitely a recommendation!
I recommend the fastai course on deep learning. Several of their lectures relate to things their students have done in biotech and medicine. The main lecturer, Jeremy Howard, has worked for years at the crossroads of medical technology and AI, and routinely discusses this.
The full fastai course is here[1] and free. Here is a blog post and associated video[2] as an example of fastai incorporating biotech into their work. In that example they use AI to upsample the resolution and quality of microscope images.
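As a taste of how compact the fastai workflow is, here is a minimal transfer-learning sketch. The dataset path and labels are hypothetical; only the fastai calls themselves come from the library:

    from fastai.vision.all import *

    # Hypothetical dataset: images sorted into one subfolder per class,
    # e.g. data/cells/healthy and data/cells/infected.
    path = Path("data/cells")
    dls = ImageDataLoaders.from_folder(path, valid_pct=0.2,
                                       item_tfms=Resize(224))

    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(3)  # start from ImageNet weights, then adapt to the new data

Most of the course's medical and biotech examples follow this same pattern: a pretrained network fine-tuned on a small domain-specific dataset.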
Ever since the 2016 foreign-meddling-in-the-election news, I've seen people commenting that there must be tons of Russian bots/shills/astroturfers/etc. in comment threads where I see genuine disagreement. I'm sure there are both, but I suspect the dismissal of "'people/opinions' that disagree with me aren't real people/opinions" is more common than the actual act of fake commenting.
http://rail.eecs.berkeley.edu/deeprlcourse/
For a more mathematical treatment, there's a beautiful book by Puterman:
https://www.amazon.com/Markov-Decision-Processes-Stochastic-...
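For reference, the central object in that book is the Bellman optimality equation for a discounted MDP (standard notation: states S, actions A, transition kernel P, rewards R, discount gamma), whose unique fixed point is the optimal value function:

    V^*(s) = \max_{a \in A} \Big[ R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a)\, V^*(s') \Big]

Puterman develops essentially everything (value iteration, policy iteration, linear programming formulations) from this one equation.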
In a way the real story is that people are so eager to believe it that it didn't matter that it was untrue. Like Voltaire's God, if it didn't exist it was necessary to invent it.
Obviously there's an infatuation right now with GPT-3. That's normal. If people keep posting these without disclosing them, I imagine there will be two consequences. One (good for HN) is readers scrutinizing comments more closely and raising the bar for what counts as a high-quality comment. The other (bad for HN) is readers accusing each other of posting generated comments.
Accusing other commenters of being bots is not a new phenomenon ("this sounds like it was written by a Markov chain" has long been an internet swipe) but if it gets bigger, we might have to figure something out. But first we should wait for the original wave of novelty to die down.
Note: I know this is generated by Sphinx. I'm commenting more on the actual content and their overall work on presentation. Again, I should provide more concrete examples to support my points.
https://www.reddit.com/r/datascience/comments/iav3lv/how_oft...
Plug/Source: I did a lit. review on this topic https://doi.org/10.3233/DS-200028