The article is fudging things with its use of "backtracking" as a stand-in for backtracking Depth First Search (DFS); the latter is the search strategy that is "fixed" in Prolog. And it's not really fixed: Prolog programs can also be executed by "tabling", a.k.a. SLG-Resolution, which essentially replaces DFS with Breadth-First Search plus memoization. Tabling avoids an important source of "incompleteness" in Prolog: non-terminating left-recursions.
To clarify, that is what makes Prolog "incomplete": that executing Prolog programs by DFS makes Prolog loop infinitely when encountering some left-recursions. The article gives the example of a last/2 predicate:
last([_H|T],E) :- last(T,E).
last([E],E).
This does indeed loop. But this one, with the clauses reordered so the base case comes first, doesn't:

last_([E],E).
last_([_H|T],E) :- last_(T,E).
?- last_(Ls,3).
Ls = [3] ;
Ls = [_,3] ;
Ls = [_,_,3] ;
Ls = [_,_,_,3] ;
Ls = [_,_,_,_,3] ;
Ls = [_,_,_,_,_,3] .
And that's what the article is pointing out with the allusion to "a higher, declarative programming style which frees the programmer from thinking about low-level control details". With such a "higher declarative programming style" the programmer does not have to think about program structure or execution strategy and can write whatever, however, and still get results!

The problem with that argument, which is as old as Prolog and possibly older, is that it's an argument from taste, or, indeed, "style", to use the article's terminology. More specifically, for every non-terminating Prolog program that can be written, we can most likely write a (success-set) equivalent Prolog program that does terminate, by keeping in mind the structure of the search space for proofs traversed by ordinary Prolog with DFS, or by tabling. And possibly with a few cuts here and there (oh, the humanity!). The article is simply arguing that it is "more declarative" not to have to do that. But why is "more declarative" better? It's style all the way down.
As to the second axis of the argument - the horror of predicates that map poorly to the "many" real-world problems said to be better represented as functions - asking whether we can liberate logic programming from predicates is like asking whether we can liberate the First Order Predicate Calculus from predicates, or, equivalently, make ice cream without the ice or the cream. Sure we can. The question, again, is: why? I don't see a clear justification. In particular, setting implementation details aside (as the article does on this point), SLD-Resolution is already sound and refutation-complete, or complete with subsumption. That is the best that can ever be achieved, and the article doesn't seem to claim anything else (or the ghosts of Alonzo Church and Alan Turing would be very, very sad). So this, too, seems to be a matter of taste: functions are more stylish than predicates. M'kay.
In fact, if I may blaspheme a little, this is what Church did with his Lambda calculus: he turned everything into typed functions to extend First Order Logic (FOL) into a higher order. And in the process completely destroyed the simple elegance of FOL. Why?
That's the problem with breadth-first search though: how to make it online?
Conversely, that's the advantage of DFS: it's online because the only state it needs is the current search path, whose size is proportional to the depth of the search tree (O(log N) in the number of nodes for a balanced tree), which is small enough to consider online.
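To make that concrete, here's a rough illustration in Python (a toy of my own, not anything from Prolog internals): on a complete binary tree, DFS's peak state is one stack frame per level of depth, while BFS's peak state is a whole frontier, which at the bottom of the tree is half the nodes.

```python
from collections import deque

def children(node, depth_limit):
    # Complete binary tree encoded as integers: node n has children 2n and 2n+1;
    # nodes at the deepest level (>= 2**depth_limit) are leaves.
    return [2 * node, 2 * node + 1] if node < 2 ** depth_limit else []

def dfs_peak(depth_limit):
    # Peak stack size: grows with the DEPTH of the tree (one pending
    # sibling per level), i.e. the "search path" state of DFS.
    stack, peak = [1], 1
    while stack:
        peak = max(peak, len(stack))
        stack.extend(children(stack.pop(), depth_limit))
    return peak

def bfs_peak(depth_limit):
    # Peak frontier size: grows with the WIDTH of the tree, which for a
    # binary tree is exponential in the depth.
    queue, peak = deque([1]), 1
    while queue:
        peak = max(peak, len(queue))
        queue.extend(children(queue.popleft(), depth_limit))
    return peak
```

For a tree of depth 10, `dfs_peak` stays around a dozen entries while `bfs_peak` holds on the order of a thousand, which is the memory cost that makes breadth-first strategies hard to keep online.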
But there's no way to always ensure termination unless one restricts expressivity. Right? Halting problem and all that. Recursion, like iteration, must always risk going infinite, else it is not complete.
The tradeoff between DFS and BFS is as you say, and as the article above points out, that's the reason DFS was chosen as the original execution strategy for Prolog. Tabling, on the other hand, can be very memory-hungry (in my applications I often run out of tabling space on my 64GB RAM laptop). In the end, with languages like Prolog, it's the old joke: "sound, complete, efficient: choose two".
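For what it's worth, here is a minimal sketch of why a table of derived answers both guarantees termination on cyclic data and eats memory. It's in Python, not Prolog, and bottom-up rather than SLG's goal-directed resolution, so take it as an analogy only: the `table` keeps every answer ever derived, and saturation (no new answers) is the termination condition.

```python
def reachable(graph):
    """Tabled-style transitive closure: for every node, the set of nodes
    reachable from it. A naive depth-first recursion would loop forever on
    a cycle; the table of already-derived answers is what guarantees
    termination, at the cost of storing every answer ever derived."""
    table = {node: set() for node in graph}
    changed = True
    while changed:  # iterate until no call derives a new answer (saturation)
        changed = False
        for node, succs in graph.items():
            new = set(succs)
            for s in succs:
                new |= table.get(s, set())
            if not new <= table[node]:  # any genuinely new answers?
                table[node] |= new
                changed = True
    return table

# A cyclic graph, a -> b -> c -> a, on which plain DFS recursion diverges.
cyclic = {"a": ["b"], "b": ["c"], "c": ["a"]}
```

The table here is quadratic in the number of nodes in the worst case, which is the small-scale version of the tabling-space blowup above.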
WITH RECURSIVE transitive_closure AS (
    SELECT parent, child FROM things
    UNION
    SELECT tc.parent, t.child
    FROM things t
    JOIN transitive_closure tc ON tc.child = t.parent
)
SELECT * FROM transitive_closure;
where `transitive_closure` is a table that accumulates all the results of the computation. The engine will run the second part of the query repeatedly until no new rows are added to the `transitive_closure` table. (If you change `UNION` to `UNION ALL` and there are circular paths, then this will not terminate.)

Sounds a lot like whatever "tabling" is in Prolog.
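The fixpoint semantics of that CTE can be sketched in a few lines of Python (a toy of my own, using semi-naive evaluation: each round joins only the rows derived in the previous round). The `- result` step is the UNION deduplication, and it is exactly what makes the loop terminate on cyclic data; drop it, as UNION ALL effectively does, and a cycle yields "new" rows forever.

```python
def transitive_closure(edges):
    # Mimic the recursive CTE with UNION (set) semantics: start from the
    # base edges, repeatedly join the newly derived rows against `edges`,
    # discard rows already seen, and stop when a round derives nothing new.
    result = set(edges)
    frontier = set(edges)
    while frontier:
        frontier = {(p, c2)
                    for (p, c) in frontier
                    for (c1, c2) in edges
                    if c == c1} - result   # UNION: keep only genuinely new rows
        result |= frontier
    return result

# parent/child rows forming a cycle: a -> b -> c -> a
edges = {("a", "b"), ("b", "c"), ("c", "a")}
```

On this cyclic input the closure saturates at all nine ordered pairs over {a, b, c} and the loop stops, just as the UNION form of the query does.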