zlacker

[parent] [thread] 16 comments
1. usgrou+(OP)[view] [source] 2024-10-13 12:42:10
Depth-first search is not complete if branches can be infinitely deep. Therefore, if you're in the wrong infinite branch, the search will never finish.

Breadth-first search is complete even if the branches are infinitely deep, in the sense that if there is a solution it will find it eventually.

replies(3): >>desden+f1 >>xelxeb+e4 >>agumon+hK
2. desden+f1[view] [source] 2024-10-13 12:48:43
>>usgrou+(OP)
In practice, though, with BFS you'd run out of memory instead of never finding a solution.

Also, there shouldn't be many situations where you'd be able to produce infinite branches in a Prolog program. Recursions must have a base case, just like in any other language.

replies(1): >>YeGobl+6k
3. xelxeb+e4[view] [source] 2024-10-13 13:13:51
>>usgrou+(OP)
Hrm. I guess the converse applies if nodes can have infinitely many children. That said, even if your tree is infinitely wide and deep, we're only dealing with countably many nodes, right? Thus a complete traversal has to exist, right?

For example, each node has a unique path to the root, so write <n1, n2, ..., nk> where each ni is the sibling ordinal of the node at depth i on that path, i.e. it's the ni-th child of the node at depth i-1. Raising the ith prime to the power ni and taking the product gives each node a unique integer label, by unique factorization. Traverse nodes in label order and voilà?
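
A minimal sketch of that labelling in Prolog, in case it helps (path_label/2 and nth_prime/2 are my made-up names, nothing standard):

  % Gödel-style label: the product of p_i^ni over the path <n1, ..., nk>,
  % where p_i is the ith prime. Injective by unique factorization.
  path_label(Ns, Label) :- path_label(Ns, 1, 1, Label).

  path_label([], _I, Acc, Acc).
  path_label([N|Ns], I, Acc0, Label) :-
          nth_prime(I, P),
          Acc is Acc0 * P^N,
          I1 is I + 1,
          path_label(Ns, I1, Acc, Label).

  % nth_prime(+I, -P): P is the Ith prime, by naive trial division.
  nth_prime(I, P) :- nth_prime_(I, 1, P).

  nth_prime_(I, C0, P) :-
          C is C0 + 1,
          (   is_prime(C) ->
              (   I =:= 1 -> P = C
              ;   I1 is I - 1, nth_prime_(I1, C, P)
              )
          ;   nth_prime_(I, C, P)
          ).

  is_prime(2).
  is_prime(N) :- N > 2, \+ has_factor(N, 2).

  has_factor(N, F) :-
          F * F =< N,
          (   N mod F =:= 0 -> true
          ;   F1 is F + 1, has_factor(N, F1)
          ).

  ?- path_label([2,1], L).
  L = 12.     % 2^2 * 3^1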

However, that all assumes we know the tree beforehand, which doesn't make sense for generic call trees. Do we just smash headfirst into Rice's theorem when trying to traverse in complete generality?

replies(1): >>usgrou+Fm
4. YeGobl+6k[view] [source] [discussion] 2024-10-13 15:16:12
>>desden+f1
This has to do with the ordering of search: searching a proof tree (an SLD tree, in SLD-Resolution) with DFS, as in Prolog, can get stuck when there are infinite branches in the tree. That's especially the case with left-recursion. The article gives an example of a left-recursive program that loops if you execute it with Prolog, but note that it doesn't loop if you change the order of the clauses.

This version of the program, taken from the article, loops (I mean it enters an infinite recursion):

  last([_H|T],E) :- last(T,E).
  last([E],E).

  ?- last(Ls,3).
  % Loops
This one doesn't:

  last([E],E).
  last([_H|T],E) :- last(T,E).

  ?- last(Ls,3).
  Ls = [3] ;
  Ls = [_,3] ;
  Ls = [_,_,3] ;
  Ls = [_,_,_,3] ;
  Ls = [_,_,_,_,3] ;
  Ls = [_,_,_,_,_,3] .
  % And so on forever
To save you some squinting, that's the same program with the base case moved before the inductive case, so that execution "hits" the base case when it can terminate. That's half of what the article is kvetching about: that in Prolog, you have to take into account the execution strategy of logic programs and can't just reason about the logical consequences of a program; you also have to think of the imperative meaning of the program's structure. It's an old complaint about Prolog, as old as Prolog itself.
replies(1): >>agumon+hM
5. usgrou+Fm[view] [source] [discussion] 2024-10-13 15:36:25
>>xelxeb+e4
No, breadth-first search is still complete given an infinite branching factor (i.e. a node with infinitely many children). "Completeness" is not about finishing in finite time; it also applies to completing in infinite time.

Breadth-first search would visit every node level by level, so given infinite time, the solution would eventually be visited.

Meanwhile, if a branch had a cycle in it, then even given infinite time, a naive depth-first search would be trapped there, and the solution would never be found.

replies(2): >>Legion+1o >>thethi+SK
6. Legion+1o[view] [source] [discussion] 2024-10-13 15:47:17
>>usgrou+Fm
Suppose you have a node with two children A and B, each of which has infinitely many children. If you performed an ordinary BFS, you could get trapped in A's children forever, before ever reaching any of B's children.

Or, suppose that a node has infinitely many children, but the first child has its own child. A BFS would get stuck going through all the first-level children and never reach the second-level child.

A BFS-like approach could work for completeness, but you'd have to put lower-level children on the same footing as newly-discovered higher-level children. E.g., by breaking up each list of children into additional nodes so that the tree has branching factor 2 (and possibly infinite depth), as in the sketch below.
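
Roughly, as a sketch (nth_child/3 and step/2 are invented names; nth_child(N, I, C) is assumed to hold when C is the Ith child of N): pair each node with a sibling index, and allow exactly two moves, so the transformed tree is binary.

  % step(P0, P): the two binary moves in the transformed tree.
  step(N-I, C-0)  :- nth_child(N, I, C).   % descend into the Ith child
  step(N-I, N-I1) :- I1 is I + 1.          % advance to the next sibling

  % A node at finite depth with finite sibling indices along its path now
  % sits at finite depth in this binary tree, so plain BFS over step/2
  % reaches it after finitely many expansions.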

replies(1): >>usgrou+WH
7. usgrou+WH[view] [source] [discussion] 2024-10-13 18:18:09
>>Legion+1o
Countable infinity does not work like that: two countable infinities are not more than one countable infinity. I think it falls into the "not even wrong" category of statements.

The Wikipedia article is fairly useful: https://en.wikipedia.org/wiki/Countable_set

replies(1): >>Legion+tN
8. agumon+hK[view] [source] 2024-10-13 18:35:52
>>usgrou+(OP)
Reminds me that Warren gave a talk about Prolog term domains to study resolution over infinite branches.
9. thethi+SK[view] [source] [discussion] 2024-10-13 18:39:29
>>usgrou+Fm
> "Completeness" is not about finishing in finite time; it also applies to completing in infinite time.

Can you point to a book or article where the definition of completeness allows infinite time? Every time I have encountered it, it is defined as finding a solution in finite time, if one exists.

> No, breadth-first search is still complete given an infinite branching factor (i.e. a node with infinitely many children).

In my understanding, DFS is complete for finite-depth trees and BFS is complete for finite-branching trees, but neither is complete for infinitely branching, infinitely deep trees.

You would need an algorithm that iteratively deepens while exploring more children to be complete for the infinite-by-infinite trees. This is possible (see the sketch below), but it is a little tricky to explain.
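
Something like this sketch, for instance (SWI-Prolog flavoured; search/2, bounded/4 and child/2 are made-up names, with child/2 assumed to enumerate a node's children on backtracking). Round K runs a DFS cut off at depth K that looks at only the first K children of each node, so any node at finite depth with finite sibling indices is found in some finite round:

  :- use_module(library(solution_sequences)).   % for limit/2 (SWI-Prolog)

  % search(+Start, +Goal): iterative deepening and widening, dovetailed.
  search(Start, Goal) :-
          between(1, inf, K),                   % rounds K = 1, 2, 3, ...
          bounded(Start, Goal, K, K).

  bounded(Node, Node, _Depth, _Width).
  bounded(Node, Goal, Depth, Width) :-
          Depth > 0,
          D1 is Depth - 1,
          limit(Width, child(Node, Child)),     % first Width children only
          bounded(Child, Goal, D1, Width).

  % Caveats: nodes are revisited across rounds, and if no solution exists
  % the search runs forever -- the usual iterative-deepening trade-offs.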

For a proof that BFS is not complete if it must find any particular node in finite time: imagine there is a tree starting with node A that has children B_n for all n, and each B_n has a single child C_n. A BFS searching for C_1 would have to explore all of the B_n before it could find it, so it would take infinite time for BFS to find C_1.

10. agumon+hM[view] [source] [discussion] 2024-10-13 18:48:08
>>YeGobl+6k
IIRC Markus Triska showed a trick (with a nickname I forgot) to constrain the search space by embedding a variable length into the top-level goal.
replies(1): >>YeGobl+u71
11. Legion+tN[view] [source] [discussion] 2024-10-13 18:56:48
>>usgrou+WH
Yes, if you put two (or three, or countably many) countable sets together, you obtain a set that is also countable. The problem is, we want to explicitly describe a bijection between the combined set and the natural numbers, so that each element is visited at some time. Constructing such a bijection between the natural numbers and a countably-infinite tree is perfectly possible, but it's less trivial than just DFS or BFS.

If we're throwing around Wikipedia articles, I'd suggest a look at https://en.wikipedia.org/wiki/Order_type. Even if your set is countable, it's possible to iterate through its elements so that some are never reached, not after any length of time.

For instance, suppose I say, "I'm going to search through all positive odd numbers in order, then I'm going to search through all positive even numbers in order." (This has order type ω⋅2.) Then I'll never ever reach the number 2, since I'll be counting through odd numbers forever.

That's why it's important to order the elements in your search strategy so that each one is reached in a finite time. (This corresponds to having order type ω, the order type of the natural numbers.)
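
As a throwaway Prolog sketch of the same point (unfair/1 and fair/1 are invented names; between/3 with inf is SWI-Prolog):

  % Order type omega*2: all odds first, then all evens. Backtracking
  % never leaves the first clause, so 2 is never generated.
  unfair(N) :- between(1, inf, K), N is 2*K - 1.
  unfair(N) :- between(1, inf, K), N is 2*K.

  % Order type omega: every positive integer arrives after finite time.
  fair(N) :- between(1, inf, N).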

12. YeGobl+u71[view] [source] [discussion] 2024-10-13 21:31:34
>>agumon+hM
I think what you mean is that he adds an argument that counts the number of times a goal is resolved, thus limiting the depth of resolution? That works, but you need to give a magic number as a resolution depth limit, and if the number is too small then your program fails to find a proof that it normally should be able to find. It's not a perfect solution.
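
For reference, that counter trick can be written as a vanilla meta-interpreter with an extra depth argument; a minimal sketch, with solve/2 being my naming and D the magic number (some systems want the program declared dynamic for clause/2):

  % solve(+Goal, +D): prove Goal using at most D nested resolution steps.
  solve(true, _) :- !.
  solve((A, B), D) :- !, solve(A, D), solve(B, D).
  solve(G, D) :-
          D > 0,
          D1 is D - 1,
          clause(G, Body),   % resolve G against a program clause
          solve(Body, D1).

  ?- solve(last(Ls, 3), 5).
  % Enumerates the solutions reachable within depth 5, then fails
  % instead of looping.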
replies(1): >>agumon+Ej1
13. agumon+Ej1[view] [source] [discussion] 2024-10-13 23:09:04
>>YeGobl+u71
Yes, well, not so much a constant value. He added an unbound variable and it was enough to alter the search. Indeed it's still more of a trick, but it got me interested in whether there were other, more fundamental ideas beyond that.
replies(1): >>YeGobl+dk2
14. YeGobl+dk2[view] [source] [discussion] 2024-10-14 11:19:40
>>agumon+Ej1
That sounds like iterative deepening without an upper bound then. I guess that's possible. Maybe if you had a link to Markus' page I could have a look.

There are techniques to constrain the search space for _programs_ rather than proofs that I know from Inductive Logic Programming (ILP), like Bottom Clause construction in Inverse Entailment, or the total ordering of the Herbrand Base in Meta-Interpretive Learning. It would be interesting to consider applying them to constrain the space of proofs in ordinary logic programming.

Refs for the above techniques are here but they're a bit difficult to read if you don't have a good background in ILP:

http://wp.doc.ic.ac.uk/arusso/wp-content/uploads/sites/47/20...

https://link.springer.com/content/pdf/10.1007/s10994-014-547...

replies(2): >>agumon+243 >>jodrel+Bm4
15. agumon+243[view] [source] [discussion] 2024-10-14 16:48:02
>>YeGobl+dk2
Thanks a lot, I'll add a comment with the video I had in mind soon.
16. jodrel+Bm4[view] [source] [discussion] 2024-10-15 02:13:47
>>YeGobl+dk2
> "Maybe if you had a link to Markus' page I could have a look."

e.g. here: https://www.metalevel.at/tist/ solving the Water Jugs problem (search on the page for "We use iterative deepening to find a shortest solution"), finding a list of moves emptying and filling jugs, and using `length(Ms, _)` to find shorter lists of moves first.

or here: https://www.metalevel.at/prolog/puzzles under "Wolf and Goat" he writes "You can use Prolog's built-in search strategy to search for a sequence of admissible state transitions that let you reach the desired target state. Use iterative deepening to find a shortest solution. In Prolog, you can easily obtain iterative deepening via length/2, which creates lists of increasing length on backtracking."
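
For a concrete feel, here's roughly what that looks like (move/3, plan/3 and shortest_plan/3 are placeholder names, not from Markus' pages): since length/2 enumerates lists of length 0, 1, 2, ... on backtracking, putting it first turns Prolog's depth-first search into iterative deepening over the number of moves.

  % Shortest-first search for a move sequence from state S0 to state S.
  % move(S0, M, S1) is assumed to describe one admissible transition.
  shortest_plan(S0, S, Ms) :-
          length(Ms, _),      % on backtracking: length 0, then 1, 2, ...
          plan(S0, S, Ms).

  plan(S, S, []).
  plan(S0, S, [M|Ms]) :-
          move(S0, M, S1),
          plan(S1, S, Ms).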

replies(1): >>YeGobl+3J5
17. YeGobl+3J5[view] [source] [discussion] 2024-10-15 15:17:53
>>jodrel+Bm4
Thanks! Yes, it's iterative deepening without an upper bound. The trick is that iterative deepening is used to order the space of proofs so that the shortest proof (path through an SLD tree) is found first. I use that approach often. The cool thing with it is that it doesn't stop at the first solution and will keep generating new solutions, ordered by length of the proof, as long as there are any. Sometimes you really want to know all solutions.

There is a bit of a problem, in that if there is no solution, the lack of an upper bound will cause the search to go on forever, or until the search space is exhausted, and you don't want that. If you use an upper bound, on the other hand, you may be cutting the search just short of finding the solution. It's another trade-off.
