zlacker

[parent] [thread] 6 comments
1. agumon+(OP)[view] [source] 2024-10-13 18:48:08
IIRC Markus Triska showed a trick (whose nickname I forgot) to constrain the search space by embedding an unbound length variable into the top-level goal.
replies(1): >>YeGobl+dl
2. YeGobl+dl[view] [source] 2024-10-13 21:31:34
>>agumon+(OP)
I think what you mean is that he adds an argument that counts how many times a goal is resolved, thus limiting the depth of resolution? That works, but you need to give a magic number as the resolution depth limit, and if the number is too small then your program fails to find a proof that it normally should be able to find. It's not a perfect solution.
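A minimal sketch of that "extra counter argument" idea, as a vanilla meta-interpreter that decrements a depth budget at every resolution step. `prove/2` and the toy predicate `p/1` are made-up illustrations, not Markus Triska's actual code:

```prolog
% Depth-bounded meta-interpreter: the depth budget D is decremented
% at each resolution step, so resolution cannot recurse forever.
:- dynamic p/1.

prove(true, _) :- !.
prove((A, B), D) :- !, prove(A, D), prove(B, D).
prove(G, D) :-
    D > 0,
    D1 is D - 1,
    clause(G, Body),
    prove(Body, D1).

% A left-recursive toy program: the plain query ?- p(X). loops forever.
p(X) :- p(X).
p(a).

% ?- prove(p(X), 3).   % X = a, found within the depth budget
% ?- prove(p(X), 0).   % fails: budget too small to find any proof
```

The second query illustrates the magic-number problem: with too small a budget, a provable goal fails.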
replies(1): >>agumon+nx
3. agumon+nx[view] [source] [discussion] 2024-10-13 23:09:04
>>YeGobl+dl
Yes, though not with a constant value. He added an unbound variable and that was enough to alter the search. Indeed it's still more of a trick, but it got me interested in whether there are other, more fundamental ideas beyond that.
replies(1): >>YeGobl+Wx1
4. YeGobl+Wx1[view] [source] [discussion] 2024-10-14 11:19:40
>>agumon+nx
That sounds like iterative deepening without an upper bound then. I guess that's possible. Maybe if you had a link to Markus' page I could have a look.

There are techniques to constrain the search space for _programs_ rather than proofs, which I know from Inductive Logic Programming (ILP), like Bottom Clause construction in Inverse Entailment, or the total ordering of the Herbrand Base in Meta-Interpretive Learning. It would be interesting to consider applying them to constrain the space of proofs in ordinary logic programming.

Refs for the above techniques are here but they're a bit difficult to read if you don't have a good background in ILP:

http://wp.doc.ic.ac.uk/arusso/wp-content/uploads/sites/47/20...

https://link.springer.com/content/pdf/10.1007/s10994-014-547...

replies(2): >>agumon+Lh2 >>jodrel+kA3
5. agumon+Lh2[view] [source] [discussion] 2024-10-14 16:48:02
>>YeGobl+Wx1
Thanks a lot, I'll add a comment with the video I had in mind soon.
6. jodrel+kA3[view] [source] [discussion] 2024-10-15 02:13:47
>>YeGobl+Wx1
> "Maybe if you had a link to Markus' page I could have a look."

e.g. here: https://www.metalevel.at/tist/ he solves the Water Jugs problem (search the page for "We use iterative deepening to find a shortest solution"), finding a list of moves that empty and fill jugs, and using `length(Ms, _)` to find shorter lists of moves first.

or here: https://www.metalevel.at/prolog/puzzles under "Wolf and Goat" he writes "You can use Prolog's built-in search strategy to search for a sequence of admissible state transitions that let you reach the desired target state. Use iterative deepening to find a shortest solution. In Prolog, you can easily obtain iterative deepening via length/2, which creates lists of increasing length on backtracking."
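The `length/2` trick in a minimal sketch; the `edge/2` graph here is a made-up example, not the jugs or wolf-and-goat puzzle from those pages:

```prolog
% Iterative deepening via length/2: length(Moves, _) enumerates
% candidate move counts 0, 1, 2, ... on backtracking, so the first
% solution found is a shortest one.
edge(a, b).
edge(b, c).
edge(b, a).          % cycle: exhaustive naive depth-first search loops
edge(a, c).

path(From, To, Moves) :-
    length(Moves, _),          % fair enumeration by move count
    walk(From, To, Moves).

walk(To, To, []).
walk(From, To, [From-Next|Ms]) :-
    edge(From, Next),
    walk(Next, To, Ms).

% ?- path(a, c, Ms).
% Ms = [a-c] ;              % shortest solution first
% Ms = [a-b, b-c] ;
% ...
```

On backtracking it keeps producing longer and longer paths through the cycle, ordered by length, for as long as you ask for more.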

replies(1): >>YeGobl+MW4
7. YeGobl+MW4[view] [source] [discussion] 2024-10-15 15:17:53
>>jodrel+kA3
Thanks! Yes, it's iterative deepening without an upper bound. The trick is that iterative deepening orders the space of proofs so that the shortest proof (path through an SLD-tree) is found first. I use that approach often. The cool thing about it is that it doesn't stop at the first solution, and will keep generating new solutions ordered by proof length as long as there are any. Sometimes you really want to know all solutions.

There is a bit of a problem, in that if there is no solution the lack of an upper bound will cause the search to go on forever, or until the search space is exhausted, and you don't want that. If you use an upper bound, on the other hand, you may be cutting the search off just short of finding the solution. It's another trade-off.
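That trade-off can be made concrete with `between/3` (standard in SWI-Prolog): cap the lengths that `length/2` may try, so a fruitless search fails finitely. `capped_length/2` is just an illustrative sketch:

```prolog
% Iterative deepening with an explicit upper bound: between/3
% enumerates candidate lengths 0..Max only, so when no solution
% exists the search fails finitely instead of generating
% ever-longer lists. The price: solutions needing more than Max
% steps are missed, as discussed above.
capped_length(Ls, Max) :-
    between(0, Max, N),
    length(Ls, N).

% ?- capped_length(Ls, 2).
% Ls = [] ;  Ls = [_] ;  Ls = [_, _] ;  false.    % terminates
```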

[go to top]