zlacker

[parent] [thread] 12 comments
1. mindcr+(OP)[view] [source] 2023-09-01 18:08:59
If anybody wants to hear more about Doug's work and ideas, here is a (fairly long) interview with Doug by Lex Fridman, from last year.

https://www.youtube.com/watch?v=3wMKoSRbGVs&pp=ygUabGV4IGZya...

replies(4): >>lern_t+03 >>mistri+R9 >>replwo+9P >>chubot+S41
2. lern_t+03[view] [source] 2023-09-01 18:25:38
>>mindcr+(OP)
Just search for Doug Lenat on YouTube. I can guarantee that any one of the other videos will be better than a Fridman interview.
replies(2): >>mindcr+64 >>dang+og
3. mindcr+64[view] [source] [discussion] 2023-09-01 18:32:42
>>lern_t+03
Only about two of them will be more contemporary though, and both are academic talks, not interviews. I get that you don't like Lex Fridman, which is a perfectly fine position to hold. But there is something to be said for seeing two people just sit and talk, as opposed to seeing somebody monologue for an hour. The Fridman interview with Doug is, IMO, absolutely worth watching. And so are all of the other videos by / about Doug. shrug
replies(1): >>yarpen+ba
4. mistri+R9[view] [source] 2023-09-01 19:05:46
>>mindcr+(OP)
Reading the bio of Lex Fridman on Wikipedia: "Learning of Identity from Behavioral Biometrics for Active Authentication"... what?
replies(3): >>lionko+Oe >>modele+6g >>dang+Jg
5. yarpen+ba[view] [source] [discussion] 2023-09-01 19:07:33
>>mindcr+64
I don't know this particular interview, but it's not necessarily about not liking Lex. I listened to many episodes of his podcast, and while I appreciate the selection of guests from the CS domain, many of these interviews aren't very good. They are not completely terrible, but they should have been so much better: Lex had so many passionate, educated, experienced, and gifted guests, yet his ability to ask interesting and focused questions is not on the same level.
replies(1): >>pengar+1u
6. lionko+Oe[view] [source] [discussion] 2023-09-01 19:32:07
>>mistri+R9
Like anything reasonably complex, it means little to you if it's not your field - that said, I have no clue either.
7. modele+6g[view] [source] [discussion] 2023-09-01 19:39:01
>>mistri+R9
Makes sense to me. He basically made a system that detects when someone else is using your computer by, e.g., comparing patterns of mouse and keyboard input against your typical usage. It would be useful in a situation where you left your screen unlocked and a coworker sat down at your desk to prank you by sending an email from your account to your boss (or worse, obviously). The computer would lock itself as soon as it suspects someone else is using it instead of you.
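
Roughly, the idea can be sketched like this -- a toy illustration of the general approach, not his actual system, and the timing numbers are invented:

    # Toy sketch: profile the owner's typing rhythm, then flag sessions
    # whose rhythm deviates too far from that baseline.
    import statistics

    def baseline(inter_key_delays_ms):
        """Summarize the owner's typical delay between keystrokes."""
        return statistics.mean(inter_key_delays_ms), statistics.stdev(inter_key_delays_ms)

    def looks_like_owner(session_delays_ms, mean_ms, std_ms, z_threshold=3.0):
        """Accept the session only if its average rhythm is close to the baseline."""
        z = abs(statistics.mean(session_delays_ms) - mean_ms) / std_ms
        return z <= z_threshold

    # Enrollment data: delays (ms) between keystrokes from the legitimate user.
    mean_ms, std_ms = baseline([110, 95, 130, 105, 120, 100, 115])

    # A session with a very different rhythm gets rejected -> lock the screen.
    print(looks_like_owner([210, 260, 240, 230], mean_ms, std_ms))  # False

A real system would use many more signals (mouse dynamics and so on) and a proper classifier, but that's the gist.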
8. dang+og[view] [source] [discussion] 2023-09-01 19:40:42
>>lern_t+03
Hey you guys, please don't go offtopic like this. Whimsical offtopicness can be ok, but offtopicness in the intersection of:

(1) generic (e.g. swerves the thread toward larger/general topic rather than something more specific);

(2) flamey (e.g. provocative on a divisive issue); and

(3) predictable (e.g. has been hashed so many times already that comments will likely fall in a few already-tiresome hash buckets)

- is the bad kind of offtopicness: the kind that brings little new information and eventually lots of nastiness. We're trying for the opposite here—lots of information and little nastiness.

https://news.ycombinator.com/newsguidelines.html

9. dang+Jg[view] [source] [discussion] 2023-09-01 19:42:16
>>mistri+R9
Please don't go offtopic in predictable/nasty ways - more at >>37355320.
10. pengar+1u[view] [source] [discussion] 2023-09-01 21:05:04
>>yarpen+ba
He's a shitty interviewer. Often doesn't even engage with his guest's responses, as if he's not even listening to what they're saying, instead moving mechanically to his next bullet-point. Which is completely ridiculous for what's supposed to be a long-format conversational interview.

The best episodes are ones where the guest drives the interview and has a lot of interesting things to say. Fridman's just useful for attracting interesting domain experts somewhere we can hear them speak for hours on end.

The Jim Keller episodes are excellent IMO, despite Fridman. Guests like Keller and Carmack don't need a good interviewer for it to be a worthwhile listen.

11. replwo+9P[view] [source] 2023-09-02 00:19:32
>>mindcr+(OP)
Enjoyed watching that. Doug sounds very impressive. RIP.
12. chubot+S41[view] [source] 2023-09-02 03:53:20
>>mindcr+(OP)
Thanks for the link. I watched the first part, and an interesting story/claim is that before Cyc started, many "smart people", including Marvin Minsky, came up with "~1 million" as the number of things you would have to encode in a system for it to have "common sense".

He said they learned after ~5 years that this was an order of magnitude off -- it's more like 10M things.

Is there any literature about this? Did they publish?

To me, the obvious questions are -

- how do they know it's not 100M things?

- how do they know it's even bounded? Why isn't there a combinatorial explosion?

I mean I guess they were evaluating the system all along. You don't go for 38 years without having some clear metrics. But I am having some problems with the logic -- I'd be interested in links to references / criticism.

I'd be interested in any arguments for and against ~10M. Naively speaking, the argument seems a bit flawed to me.

FWIW I heard of Cyc back in the '90s, but I had no idea it was still alive. It is impressive that he kept it alive for so long.

---

Actually, the Wikipedia article is pretty good:

https://en.wikipedia.org/wiki/Cyc#Criticisms

Though I'm still interested in the ~1M or ~10M claim. It seems like a strong claim to hold onto for decades, unless they had really strong metrics backing it up.

replies(1): >>HarHar+6Ag
13. HarHar+6Ag[view] [source] [discussion] 2023-09-07 12:39:21
>>chubot+S41
> how do they know it's not 100M things?

> how do they know it's even bounded? Why isn't there a combinatorial explosion?

I don't know - I'm in the middle of watching the interview too, but he's moved on from that topic already. I'd guess the 10M vs 1M (or 100M) estimate comes from the curve of total "assertions" vs time leveling off towards some asymptotic limit.
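
If that guess is right, the extrapolation would amount to fitting a saturating curve to the cumulative assertion count and reading off the asymptote. A toy sketch with invented numbers (not Cyc's actual data):

    # Toy illustration: fit N(t) = K * (1 - exp(-t/tau)) to cumulative
    # assertion counts (in millions) and read off the asymptote K.
    # The data points below are made up for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def saturating(t, K, tau):
        return K * (1 - np.exp(-t / tau))

    years = np.array([1, 2, 3, 4, 5, 6, 8, 10], dtype=float)
    millions_of_assertions = np.array([1.2, 2.2, 3.1, 3.9, 4.7, 5.3, 6.3, 7.1])

    (K, tau), _ = curve_fit(saturating, years, millions_of_assertions, p0=[10.0, 5.0])
    print(f"estimated asymptote: ~{K:.1f}M assertions")  # ~10M on this fake data

Whether the real curve actually levels off like that is exactly the question, of course.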

I suppose the reason there's no combinatorial explosion is that they're entering these assertions in the most general form possible, so considering new objects doesn't necessarily mean new assertions, since it may all be covered by the superclasses the objects are part of (e.g. few assertions are specific to apples, since most will apply to all fruit).
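
A trivial illustration of that point (toy Python, nothing to do with CycL's actual representation):

    # Toy illustration: assertions attached to a general concept are inherited
    # by everything below it, so adding a new specific concept adds few new facts.
    parents = {"apple": "fruit", "banana": "fruit", "fruit": "food", "food": None}
    assertions = {
        "food": ["is edible", "spoils if left out"],
        "fruit": ["grows on plants", "contains seeds"],
        "apple": ["is typically red or green"],  # only the genuinely apple-specific bits
    }

    def facts_about(concept):
        """Collect assertions from the concept and all of its ancestors."""
        facts = []
        while concept is not None:
            facts.extend(assertions.get(concept, []))
            concept = parents.get(concept)
        return facts

    print(facts_about("apple"))
    # ['is typically red or green', 'grows on plants', 'contains seeds',
    #  'is edible', 'spoils if left out']

Adding "banana" to that hierarchy costs almost nothing new, which is presumably why the count can level off rather than explode.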
