[return to "Can logic programming be liberated from predicates and backtracking? [pdf]"]

1. xelxeb+th2 2024-10-13 05:10:12
>>matt_d+(OP)
Abstract. Logic programming has a long history. The representative of logic programming in practice, the language Prolog, was introduced more than 50 years ago. The main features of Prolog are still present today: a Prolog program is a set of predicate definitions executed by resolution steps with a backtracking search strategy. The use of backtracking was justified by efficiency reasons when Prolog was invented. However, its incompleteness destroys the elegant connection of logic programming and the underlying Horn clause logic and causes difficulties in teaching logic programming. Moreover, the restriction to predicates hinders an adequate modeling of real-world problems, which are often functions from input to output data, and leads to unnecessarily inefficient executions. In this paper we show a way to overcome these problems. By transforming predicates and goals into functions and nested expressions, one can evaluate them with a demand-driven strategy which might reduce the number of computation steps and avoid infinite search spaces. Replacing backtracking by complete search strategies with new implementation techniques closes the gap between the theory and practice of logic programming. In this way, we can keep the ideas of logic programming in future programming systems.

2. cerved+Mt2 2024-10-13 08:03:54
>>xelxeb+th2
Isn't backtracking a complete search strategy?

3. usgrou+BQ2 2024-10-13 12:42:10
>>cerved+Mt2
Depth-first search is not complete if branches can be infinitely deep: if the search goes down the wrong infinite branch, it will never finish.

Breadth-first search is complete even if the branches are infinitely deep, in the sense that if a solution exists it will eventually be found.
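
A minimal sketch of the difference (my own toy program, not from the article):

  p :- p.   % clause 1: opens an infinite branch
  p.        % clause 2: an immediate proof

  ?- p.
  % Loops under Prolog's depth-first strategy, because the first
  % clause is retried forever, even though the fact in clause 2
  % proves the query at depth one. A breadth-first search of the
  % same SLD tree would find that proof immediately.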

4. desden+QR2 2024-10-13 12:48:43
>>usgrou+BQ2
In practice, though, with BFS you'd run out of memory instead of never finding a solution.

Also, there shouldn't be many situations where you'd be able to produce infinite branches in a Prolog program. Recursive predicates need a base case, just like recursion in any other language.

5. YeGobl+Ha3 2024-10-13 15:16:12
>>desden+QR2
This has to do with the ordering of the search: searching a proof tree (an SLD tree, in SLD-resolution) with DFS, as Prolog does, can get stuck when the tree has infinite branches. That's especially the case with left recursion. The article gives an example of a left-recursive program that loops if you execute it with Prolog, but note that it doesn't loop if you change the order of the clauses.

This version of the program, taken from the article, loops (I mean it enters an infinite recursion):

  last([_H|T],E) :- last(T,E).
  last([E],E).

  ?- last(Ls,3).
  % Loops
This one doesn't:

  last([E],E).
  last([_H|T],E) :- last(T,E).

  ?- last(Ls,3).
  Ls = [3] ;
  Ls = [_,3] ;
  Ls = [_,_,3] ;
  Ls = [_,_,_,3] ;
  Ls = [_,_,_,_,3] ;
  Ls = [_,_,_,_,_,3] .
  % And so on forever
To save you some squinting, that's the same program with the base case moved before the inductive case, so that execution "hits" the base case when it can terminate. That's half of what the article is kvetching about: in Prolog you can't just reason about the logical consequences of a program; you also have to take the execution strategy into account and think about the imperative meaning of the program's structure. It's an old complaint about Prolog, as old as Prolog itself.

6. agumon+SC3 2024-10-13 18:48:08
>>YeGobl+Ha3
IIRC Markus Triska showed a trick (with a nickname I forget) to constrain the search space by embedding a length variable into the top-level goal.

7. YeGobl+5Y3 2024-10-13 21:31:34
>>agumon+SC3
I think what you mean is that he adds an argument that counts how many times a goal is resolved, thus limiting the depth of resolution? That works, but you need to give a magic number as the depth limit, and if the number is too small your program fails to find a proof that it should normally be able to find. It's not a perfect solution.
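
Something like this sketch is what I have in mind (my own naming, not his code):

  % last_bounded(?Ls, ?E, +D): last/2 with an explicit depth budget D.
  last_bounded([E], E, _).
  last_bounded([_|T], E, D) :- D > 0, D1 is D - 1, last_bounded(T, E, D1).

  ?- last_bounded(Ls, 3, 4).
  % Enumerates Ls = [3], [_,3], ... up to [_,_,_,_,3], then fails
  % finitely: any answer that needs more than 4 recursive steps is lost.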

8. agumon+fa4 2024-10-13 23:09:04
>>YeGobl+5Y3
Yes, well, not so much a constant value. He added an unbound variable and that was enough to alter the search. It's still more of a trick, indeed, but it got me interested in whether there are other, more fundamental ideas beyond that.
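
If I'm remembering the same thing, it was something along these lines (my reconstruction, not his exact code), run against the loop-prone ordering of last/2 from above:

  ?- length(Ls, _), last(Ls, 3).
  Ls = [3] ;
  Ls = [_,3] ;
  Ls = [_,_,3] .

Since length/2 with an unbound second argument enumerates list skeletons of length 0, 1, 2, ..., the conjunction behaves like iterative deepening over the list length, so answers come out fairly and the left recursion never gets a chance to run away.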

9. YeGobl+Oa5 2024-10-14 11:19:40
>>agumon+fa4
That sounds like iterative deepening without a lower bound then. I guess that's possible. Maybe if you had a link to Markus' page I could have a look.
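
For concreteness, this is roughly what I mean by iterative deepening over proofs: a depth-bounded vanilla meta-interpreter plus a driver that grows the bound. Only a sketch, under some assumptions: it handles just conjunctions of user-defined predicates, clause/2 may require those predicates to be declared dynamic on some systems, and between(0, inf, D) is SWI-Prolog's unbounded between/3.

  % prove(Goal, D0, D): prove Goal, consuming the depth budget from D0 down to D.
  prove(true, D, D) :- !.
  prove((A, B), D0, D) :- !, prove(A, D0, D1), prove(B, D1, D).
  prove(Goal, D0, D) :-
      D0 > 0,
      D1 is D0 - 1,
      clause(Goal, Body),
      prove(Body, D1, D).

  % Iterative deepening: requiring the budget to be used up exactly
  % (final budget 0) means each proof is found at exactly one depth,
  % so retrying with larger bounds doesn't duplicate answers. Like
  % all iterative deepening, it diverges if Goal has no proof at all.
  prove_id(Goal) :- between(0, inf, D), prove(Goal, D, 0).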

There are techniques to constrain the search space for _programs_ rather than proofs, that I know from Inductive Logic Programming, like Bottom Clause construction in Inverse Entailment, or the total ordering of the Herbrand Base in Meta-Interpretive Learning (ILP). It would be interesting to consider applying them to constrain the space of proofs in ordinary logic programming.

Refs for the above techniques are here but they're a bit difficult to read if you don't have a good background in ILP:

http://wp.doc.ic.ac.uk/arusso/wp-content/uploads/sites/47/20...

https://link.springer.com/content/pdf/10.1007/s10994-014-547...
