zlacker

[parent] [thread] 2 comments
1. jkaptu+(OP)[view] [source] 2019-12-13 18:54:13
So to "change a lightbulb", so to speak, the system decides something like "turn off the lamp first". But then the evaluation above says the human would not dislike touching the bulb when, in reality, it's still too hot.

So you could incorporate some kind of cooling rate, then change the above to "When an incandescent bulb has been on x of the last y minutes it is too hot to touch".
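A rule like that could be sketched as follows. This is only an illustrative guess at a representation, not how the actual system encodes it; the names (`Bulb`, `too_hot_to_touch`) and the x=5-of-y=10-minutes threshold are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Bulb:
    # (on_time, off_time) pairs in minutes since some epoch;
    # an interval that is still open uses off_time = None.
    intervals: list = field(default_factory=list)

    def minutes_on_during(self, start: float, end: float) -> float:
        """Total minutes the bulb was on within [start, end]."""
        total = 0.0
        for on, off in self.intervals:
            off = end if off is None else off
            overlap = min(off, end) - max(on, start)
            if overlap > 0:
                total += overlap
        return total

def too_hot_to_touch(bulb: Bulb, now: float,
                     x: float = 5.0, y: float = 10.0) -> bool:
    """The rule above: too hot if on for at least x of the last y minutes."""
    return bulb.minutes_on_during(now - y, now) >= x

bulb = Bulb(intervals=[(0.0, 8.0)])   # on from t=0 to t=8
too_hot_to_touch(bulb, now=10.0)      # on 8 of the last 10 min -> True
too_hot_to_touch(bulb, now=20.0)      # off for 12 min -> False
```

Even this toy version shows where the complexity creeps in: the rule is no longer a static fact about bulbs but a query over the object's recent history.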

This all seems just impossibly complicated (not that I can think of something simpler!) - am I missing anything?

replies(1): >>_bxg1+J
2. _bxg1+J[view] [source] 2019-12-13 18:59:01
>>jkaptu+(OP)
It is very complicated, yes. The goal is to be AI's "left brain": the slower, more methodical (and explainable!) end of the spectrum. We see our place as complementary to ML's "right brain", fast-but-messy thinking.

I will say also that our focus on "common sense" means we make deliberate choices about where to "bottom out" the granularity with which we represent the world; otherwise we'd find ourselves reasoning about literal atoms in every case. We generally try to target the level at which a human might conceive of a concept; humans can treat a collection of atoms as roughly "a single object", but then still apply formal logic to that object and its properties (and "typical" properties). In one sense it isn't a perfect representation, but in another sense it strikes the right balance between perfect and meaningful.
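The "typical properties with exceptions" idea can be sketched roughly like this; every name here (`Concept`, `Obj`, `safe_to_touch`) is a hypothetical illustration, not the system's actual representation:

```python
class Concept:
    """A human-granularity concept with 'typical' default properties."""
    def __init__(self, name, typical=None):
        self.name = name
        self.typical = typical or {}

class Obj:
    """An individual object; specific knowledge overrides the typical."""
    def __init__(self, concept, **overrides):
        self.concept = concept
        self.overrides = overrides

    def prop(self, key):
        # What we know about this particular object wins over
        # what is merely typical of its concept.
        if key in self.overrides:
            return self.overrides[key]
        return self.concept.typical.get(key)

lightbulb = Concept("IncandescentBulb", typical={"safe_to_touch": True})
some_bulb = Obj(lightbulb)                        # typical: True
hot_bulb = Obj(lightbulb, safe_to_touch=False)    # exception: False
```

The point is the bottoming-out: the bulb is one object with defeasible properties, not a physical simulation of its atoms.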

replies(1): >>jkaptu+gb
3. jkaptu+gb[view] [source] [discussion] 2019-12-13 20:14:27
>>_bxg1+J
Thanks so much for the explanation!