Breadth-first search is complete even if the branches are infinitely deep, in the sense that if there is a solution it will find it eventually.
Also, there shouldn't be many situations where you'd be able to produce infinite branches in a Prolog program. Recursions must have a base case, just like in any other language.
For example, each node has a unique path to the root, so write <n1, n2, ..., nk>, where ni is the sibling ordinal of the node at depth i on that path, i.e. the node at depth i is the ni-th child of the node at depth i-1. Raising the ith prime to the power ni and taking the product gives each node a unique integer label, by unique factorization. Traverse nodes in label order and voilà?
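A minimal sketch of that labeling in Prolog (all predicate names are mine; it assumes sibling ordinals start at 1, so that shorter paths can't collide with longer ones):

    % label(+Path, -Label): raise the Ith prime to the Ith sibling
    % ordinal and multiply; unique factorization makes labels unique.
    label(Path, Label) :-
        label_(Path, 2, 1, Label).

    label_([], _, Label, Label).
    label_([N|Ns], P, Acc0, Label) :-
        Acc is Acc0 * P^N,
        next_prime(P, Q),
        label_(Ns, Q, Acc, Label).

    next_prime(P, Q) :-
        P1 is P + 1,
        ( is_prime(P1) -> Q = P1 ; next_prime(P1, Q) ).

    is_prime(2).
    is_prime(N) :-
        N > 2,
        M is floor(sqrt(N)),
        \+ ( between(2, M, D), N mod D =:= 0 ).

    % ?- label([2,1,3], L).
    % L = 1500.    % 2^2 * 3^1 * 5^3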
However, that all assumes we know the tree beforehand, which doesn't make sense for generic call trees. Do we just smash headfirst into Rice's theorem when trying to traverse in complete generality?
This version of the program, taken from the article, loops (I mean it enters an infinite recursion):
    last_([_H|T],E) :- last_(T,E).
    last_([E],E).

    ?- last_(Ls,3).
    % Loops
This one doesn't:

    last_([E],E).
    last_([_H|T],E) :- last_(T,E).

    ?- last_(Ls,3).
    Ls = [3] ;
    Ls = [_,3] ;
    Ls = [_,_,3] ;
    Ls = [_,_,_,3] ;
    Ls = [_,_,_,_,3] ;
    Ls = [_,_,_,_,_,3] .
    % And so on forever
To save you some squinting, that's the same program with the base case moved before the inductive case, so that execution "hits" the base case when it can terminate. That's half of what the article is kvetching about: in Prolog you can't just reason about the logical consequences of a program, you also have to take the execution strategy into account and think about the imperative meaning of the program's structure. It's an old complaint about Prolog, as old as Prolog itself.

Breadth-first search would visit every node level by level, so given infinite time, the solution would eventually be visited.
Meanwhile, if a branch had a cycle in it, then even given infinite time a naive depth-first search would be trapped there, and the solution would never be found.
Or, suppose that a node has infinitely many children, but the first child has its own child. A BFS would get stuck going through all the first-level children and never reach the second-level child.
A BFS-like approach could work for completeness, but you'd have to put lower-level children on the same footing as newly-discovered higher-level children, e.g. by breaking each list of children up into additional nodes so that every node has branching factor 2 (at the cost of possibly infinite depth).
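For what it's worth, here's a rough Prolog sketch of that transformation (all names mine), using a toy tree in which every node has infinitely many children and is named by its path of sibling ordinals. Each agenda entry gets at most two successors, "first child" and "next sibling", so every node ends up at a finite depth in the transformed tree, and a FIFO agenda reaches it after finitely many expansions:

    % Toy tree: the Ith child of a node Path is [I|Path], for any I >= 1.
    nth_child(Path, I, [I|Path]).

    % node(N, Parent, NextI): N is scheduled, and N's next sibling
    % is the NextI-th child of Parent (none for the root).
    fair_bfs(Goal) :-
        bfs_([node([], none, 0)], Goal).

    bfs_([node(Goal, _, _)|_], Goal) :- !.
    bfs_([node(N, Parent, NextI)|Rest], Goal) :-
        nth_child(N, 1, First),              % successor 1: first child
        (   Parent == none                   % successor 2: next sibling
        ->  Sibs = []
        ;   nth_child(Parent, NextI, Sib),
            NextI1 is NextI + 1,
            Sibs = [node(Sib, Parent, NextI1)]
        ),
        append(Rest, [node(First, N, 2)|Sibs], Agenda),
        bfs_(Agenda, Goal).

    % ?- fair_bfs([3,1,2]).
    % true.

(The append/3 queue is quadratic; a difference list would be the idiomatic fix, but it obscures the point.)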
The Wikipedia article is fairly useful: https://en.wikipedia.org/wiki/Countable_set
Can you point to a book or article where the definition of completeness allows infinite time? Every time I have encountered it, it is defined as finding a solution if there is one in finite time.
> No, breadth first search is still complete given an infinite branching factor (i.e. a node with infinite children).
In my understanding, DFS is complete for finite-depth trees and BFS is complete for finitely branching trees, but neither is complete for infinitely branching, infinitely deep trees.
To be complete on infinitely branching, infinitely deep trees, you would need an algorithm that iteratively deepens while also exploring more children at each node. This is possible, but a little tricky to explain.
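Here's one way to make it concrete (a sketch, names mine, on a toy tree where the Ith child of a node Path is [I|Path]): run a depth-first search cut off at depth K that also looks at only the first K children of each node, for K = 1, 2, 3, ... A node whose depth and sibling ordinals are all at most K is found at iteration K, so every node is reached in finite time; if the goal isn't in the tree at all, though, the loop never stops:

    nth_child(Path, I, [I|Path]).    % toy tree with infinite branching

    id_widening(Goal) :-
        length(_, K0),               % K = 1, 2, 3, ... on backtracking
        K is K0 + 1,
        bounded_dfs([], Goal, K, K), !.

    % DFS limited to Depth levels and the first Width children per node.
    bounded_dfs(Goal, Goal, _, _).
    bounded_dfs(N, Goal, Depth, Width) :-
        Depth > 0,
        between(1, Width, I),
        nth_child(N, I, Child),
        Depth1 is Depth - 1,
        bounded_dfs(Child, Goal, Depth1, Width).

    % ?- id_widening([3,1,2]).       % succeeds at K = 3
    % true.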
For a proof that BFS is not complete if it must find any particular node in finite time: imagine a tree whose root A has children B_n for all n, and each B_n has a single child C_n. A BFS searching for C_1 would have to explore all of the B_n before reaching the second level, so it would take infinitely long to find C_1.
If we're throwing around Wikipedia articles, I'd suggest a look at https://en.wikipedia.org/wiki/Order_type. Even if your set is countable, it's possible to iterate through its elements in an order such that some of them are never reached, no matter how long you run.
For instance, suppose I say, "I'm going to search through all positive odd numbers in order, then I'm going to search through all positive even numbers in order." (This has order type ω⋅2.) Then I'll never ever reach the number 2, since I'll be counting through odd numbers forever.
That's why it's important to order the elements in your search strategy so that each one is reached in a finite time. (This corresponds to having order type ω, the order type of the natural numbers.)
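The same point as a Prolog toy (names mine): enumerate pairs of naturals diagonal by diagonal, i.e. in an order of type ω, so that any given pair appears after finitely many answers. Enumerating all pairs (0,Y) before any pair (1,Y) would never get to (1,0):

    % Fair enumeration of pairs of naturals, by diagonals X+Y = S.
    pair(X, Y) :-
        length(_, S),        % S = 0, 1, 2, ... on backtracking
        between(0, S, X),
        Y is S - X.

    % ?- pair(X, Y).
    % X = 0, Y = 0 ;
    % X = 0, Y = 1 ;
    % X = 1, Y = 0 ;       % (1,0) arrives at the third answer
    % X = 0, Y = 2 ;       % and so on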
There are techniques to constrain the search space for _programs_ rather than proofs, which I know from Inductive Logic Programming (ILP), like Bottom Clause construction in Inverse Entailment, or the total ordering of the Herbrand Base in Meta-Interpretive Learning. It would be interesting to consider applying them to constrain the space of proofs in ordinary logic programming.
Refs for the above techniques are here, but they're a bit difficult to read if you don't have a good background in ILP:
http://wp.doc.ic.ac.uk/arusso/wp-content/uploads/sites/47/20...
https://link.springer.com/content/pdf/10.1007/s10994-014-547...
e.g. here: https://www.metalevel.at/tist/ solving the Water Jugs problem (search the page for "We use iterative deepening to find a shortest solution"): it finds a list of moves emptying and filling jugs, using `length(Ms, _)` to try shorter lists of moves first.
or here: https://www.metalevel.at/prolog/puzzles under "Wolf and Goat" he writes "You can use Prolog's built-in search strategy to search for a sequence of admissible state transitions that let you reach the desired target state. Use iterative deepening to find a shortest solution. In Prolog, you can easily obtain iterative deepening via length/2, which creates lists of increasing length on backtracking."
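To see the length/2 trick in isolation, here's a toy problem of my own (not from those pages): reach a target integer from 0 using steps of +1 or +2. Putting length(Ms, _) in front makes Prolog's depth-first search try shorter move lists first, i.e. iterative deepening on solution length:

    plan(Target, Moves) :-
        length(Moves, _),        % lists of length 0, 1, 2, ... first
        steps(0, Target, Moves).

    steps(X, X, []).
    steps(X, Target, [Step|Steps]) :-
        member(Step, [1,2]),
        X1 is X + Step,
        steps(X1, Target, Steps).

    % ?- plan(5, Ms).
    % Ms = [1, 2, 2] ;          % a shortest (3-step) solution comes first

Without the length/2 goal, querying steps(0, 5, Ms) directly finds the five-step [1,1,1,1,1] first and then falls into an infinite failure branch on backtracking; and if there is no solution at all, the length/2 version itself keeps trying longer and longer lists forever.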
There is a bit of a problem, in that if there is no solution, the lack of an upper bound will cause the search to go on forever, or until the search space is exhausted, and you don't want that. If you use an upper bound, on the other hand, you may be cutting the search off just short of finding the solution. It's another trade-off.