zlacker

[parent] [thread] 25 comments
1. xelxeb+(OP)[view] [source] 2024-10-13 05:10:12
Abstract. Logic programming has a long history. The representative of logic programming in practice, the language Prolog, was introduced more than 50 years ago. The main features of Prolog are still present today: a Prolog program is a set of predicate definitions executed by resolution steps with a backtracking search strategy. The use of backtracking was justified by efficiency reasons when Prolog was invented. However, its incompleteness destroys the elegant connection of logic programming and the underlying Horn clause logic and makes logic programming difficult to teach. Moreover, the restriction to predicates hinders an adequate modeling of real world problems, which are often functions from input to output data, and leads to unnecessarily inefficient executions. In this paper we show a way to overcome these problems. By transforming predicates and goals into functions and nested expressions, one can evaluate them with a demand-driven strategy which might reduce the number of computation steps and avoid infinite search spaces. Replacing backtracking by complete search strategies with new implementation techniques closes the gap between the theory and practice of logic programming. In this way, we can keep the ideas of logic programming in future programming systems.
replies(3): >>bradle+f4 >>cerved+jc >>crypto+1R1
2. bradle+f4[view] [source] 2024-10-13 06:04:58
>>xelxeb+(OP)
I didn't read past the abstract, but it sounds like they are just transforming logic-based programs into function-based programs. But: if I wanted functional programming, I wouldn't be writing in Prolog.

What would be interesting would be to replace depth-first search while remaining in the world of predicates and Horn clauses.

replies(3): >>usgrou+39 >>harper+bj >>YeGobl+HP
◧◩
3. usgrou+39[view] [source] [discussion] 2024-10-13 07:17:39
>>bradle+f4
functional logic programming is not equivalent to functional programming.
4. cerved+jc[view] [source] 2024-10-13 08:03:54
>>xelxeb+(OP)
isn't backtracking a complete search strategy?
replies(2): >>desden+vq >>usgrou+8z
◧◩
5. harper+bj[view] [source] [discussion] 2024-10-13 09:36:02
>>bradle+f4
Naïvely, writing one logic relation of n parameters is about equivalent to writing 2^n functions (just decide for each parameter whether you give it as input or not). So there clearly is value there.

I say naïvely because on one hand you might not need all versions of the function, but on the other hand you can also provide partial values, so it's not strictly either input or output.
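The standard illustration is append/3: one relation serving as several "functions", depending on which arguments are bound, and partial values work too:

  ?- append([1,2], [3], Zs).      % forward: Zs = [1,2,3]
  ?- append(Xs, [3], [1,2,3]).    % backward: Xs = [1,2]
  ?- append(Xs, Ys, [1,2,3]).     % all splits, on backtracking
  ?- append([1|T], Ys, [1,2,3]).  % partial input: T and Ys as outputs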

◧◩
6. desden+vq[view] [source] [discussion] 2024-10-13 11:08:48
>>cerved+jc
That phrase was badly written.

Backtracking is a complete search of the problem-space.

What is incomplete is the Horn-SAT problem space, which is a subset of SAT that can be solved in polynomial time, and is what Prolog is based on.

A complete logic system would have to solve SAT, which is NP-complete.

At least that's what I understood they meant by that.

replies(1): >>YeGobl+JQ
◧◩
7. usgrou+8z[view] [source] [discussion] 2024-10-13 12:42:10
>>cerved+jc
Depth-first search is not complete if branches can be infinitely deep: if you're in the wrong infinite branch, the search will never finish.

Breadth-first search is complete even if the branches are infinitely deep, in the sense that if there is a solution it will find it eventually.
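To make that concrete, a minimal made-up example of an infinite branch:

  loop(X) :- loop(X).
  loop(done).

  ?- loop(X).
  % DFS explores the first clause forever and never reaches
  % X = done; a breadth-first search would find it at depth 1.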

replies(3): >>desden+nA >>xelxeb+mD >>agumon+pj1
◧◩◪
8. desden+nA[view] [source] [discussion] 2024-10-13 12:48:43
>>usgrou+8z
In practice, though, with BFS you'd run out of memory instead of never finding a solution.

Also, there shouldn't be many situations where you'd be able to produce infinite branches in a Prolog program. Recursions must have a base case, just like in any other language.

replies(1): >>YeGobl+eT
◧◩◪
9. xelxeb+mD[view] [source] [discussion] 2024-10-13 13:13:51
>>usgrou+8z
Hrm. I guess the converse applies if nodes can have infinitely many children. That said, even if your tree is infinitely wide and deep, we're only dealing with countably many nodes, right? Thus a complete traversal has to exist, right?

For example, each node has a unique path to the root, so write <n1, n2, ..., nk> where each ni is the sibling ordinal of the node at depth i in that path, i.e. it's the ni-th child of the node at depth i-1. Raising the ith prime to the power ni and taking the product gives each node a unique integer label, by unique factorization. Traverse nodes in label order and voilà?

However, that all assumes we know the tree beforehand, which doesn't make sense for generic call trees. Do we just smash headfirst into Rice's theorem when trying to traverse in complete generality?
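A quick sketch of that labelling in Prolog (path_label/2 is made up, and the primes list is truncated, so this only handles paths up to depth 8):

  path_label(Path, Label) :-
      primes(Ps),
      label_(Path, Ps, 1, Label).

  primes([2,3,5,7,11,13,17,19]).

  label_([], _, Label, Label).
  label_([N|Ns], [P|Ps], Acc0, Label) :-
      Acc is Acc0 * P^N,
      label_(Ns, Ps, Acc, Label).

  ?- path_label([2,1,3], L).
  % L = 2^2 * 3^1 * 5^3 = 1500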

replies(1): >>usgrou+NV
◧◩
10. YeGobl+HP[view] [source] [discussion] 2024-10-13 14:52:16
>>bradle+f4
>> What would be interesting would be to replace depth-first search while remaining in the world of predicates and Horn clauses.

For that you want tabled Prolog, or in other words Prolog executed by SLG-Resolution. The paradigmatic implementation is XSB Prolog:

https://xsb.com/xsb-prolog/

SWI-Prolog also supports tabling but I think the XSB implementation is more mature.
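For the flavour of it, here is the stock left-recursive example (not from the article) that loops under plain Prolog but terminates under tabling; SWI-Prolog and XSB share the table directive syntax:

  :- table path/2.

  path(X, Y) :- path(X, Z), edge(Z, Y).
  path(X, Y) :- edge(X, Y).

  edge(a, b).
  edge(b, c).
  edge(c, a).  % a cycle: plain DFS would loop here

  ?- path(a, X).
  % X = b ; X = c ; X = a (in some order), then termination.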

◧◩◪
11. YeGobl+JQ[view] [source] [discussion] 2024-10-13 14:59:53
>>desden+vq
Yeah, it's confusing. The article is referring to the incompleteness of Prolog implemented using Depth First Search. That's what the author means by "backtracking". I know this because I know "backtracking" is used in the logic programming community to stand for DFS, but if you don't know the jargon you'd be right to be confused. You can kind of, er, glean, that meaning in the article if you see how they refer to a "fixed" search strategy, and also notice that "backtracking" is not normally a search strategy since it can't search on its own. "Backtracking" is really "DFS with backtracking".

The article is pointing out that Prolog with backtracking DFS is incomplete with respect to the completeness of SLD-Resolution. To clarify, SLD-Resolution is complete for refutation, or with subsumption. Prolog is an implementation of SLD-Resolution using DFS with backtracking. DFS is incomplete in the sense that it gets stuck in infinite loops when an SLD tree (the structure searched by DFS in Prolog) has cycles, especially left-recursive cycles. The article gives an example of a program that loops forever when executed with DFS with backtracking, in ordinary Prolog.

SLD-Resolution's completeness does not violate the Church-Turing thesis, because the problem is only semi-decidable: SLD-trees may have infinite branches. To be honest I don't know about the equivalence with Horn-SAT, but Resolution, restricted to definite clauses, i.e. SLD-Resolution, is complete (by refutation and subsumption, as I say above, and respecting some structural constraints to do with the sharing of variables in heads and bodies of clauses). We got several different proofs of its completeness so I think we can trust it's true.

Edit: where does this knowledge about Horn-SAT come from? Do you have references? Gimme gimme gimme.

◧◩◪◨
12. YeGobl+eT[view] [source] [discussion] 2024-10-13 15:16:12
>>desden+nA
This has to do with the ordering of search: searching a proof tree (an SLD tree, in SLD-Resolution) with DFS, as in Prolog, can get stuck when there are cycles in the tree. That's especially the case with left-recursion. The article gives an example of a left-recursive program that loops if you execute it with Prolog, but note that it doesn't loop if you change the order of the clauses.

This version of the program, taken from the article, loops (I mean it enters an infinite recursion):

  last([_H|T],E) :- last(T,E).
  last([E],E).

  ?- last(Ls,3).
  % Loops
This one doesn't:

  last([E],E).
  last([_H|T],E) :- last(T,E).

  ?- last(Ls,3).
  Ls = [3] ;
  Ls = [_,3] ;
  Ls = [_,_,3] ;
  Ls = [_,_,_,3] ;
  Ls = [_,_,_,_,3] ;
  Ls = [_,_,_,_,_,3] .
  % And so on forever
To save you some squinting, that's the same program with the base case moved before the inductive case, so that execution "hits" the base case when it can terminate. That's half of what the article is kvetching about: that in Prolog you have to take into account the execution strategy of logic programs and can't just reason about the logical consequences of a program; you also have to think of the imperative meaning of the program's structure. It's an old complaint about Prolog, as old as Prolog itself.
replies(1): >>agumon+pl1
◧◩◪◨
13. usgrou+NV[view] [source] [discussion] 2024-10-13 15:36:25
>>xelxeb+mD
No, breadth-first search is still complete given an infinite branching factor (i.e. a node with infinitely many children). "Completeness" is not about finishing in finite time; it also applies to completing in infinite time.

Breadth first search would visit every node breadth first, so given infinite time, the solution would eventually be visited.

Meanwhile, if a branch had a cycle in it, then even given infinite time, a naive depth-first search would be trapped there, and the solution would never be found.

replies(2): >>Legion+9X >>thethi+0k1
◧◩◪◨⬒
14. Legion+9X[view] [source] [discussion] 2024-10-13 15:47:17
>>usgrou+NV
Suppose you have a node with two children A and B, each of which has infinitely many children. If you performed an ordinary BFS, you could get trapped in A's children forever, before ever reaching any of B's children.

Or, suppose that a node has infinitely many children, but the first child has its own child. A BFS would get stuck going through all the first-level children and never reach the second-level child.

A BFS-like approach could work for completeness, but you'd have to put lower-level children on the same footing as newly-discovered higher-level children. E.g., by breaking up each list of children into additional nodes so that it has branching factor 2 (and possibly infinite depth).
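One way to picture that re-shaping (a sketch, with a hypothetical child/3 giving the I-th child of a node): wrap each node's child list in "cells" with exactly two successors each, the actual child and the next cell. An infinitely branching node becomes an infinitely deep binary spine, and BFS reaches the N-th child at finite depth N.

  step(cell(P, I), C)           :- child(P, I, C).
  step(cell(P, I), cell(P, I1)) :- I1 is I + 1.

  % start BFS over step/2 from cell(Root, 1):
  % the branching factor is now 2 everywhere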

replies(1): >>usgrou+4h1
◧◩◪◨⬒⬓
15. usgrou+4h1[view] [source] [discussion] 2024-10-13 18:18:09
>>Legion+9X
Countable infinity does not work like that: two countable infinities are not more than one countable infinity. I think it falls into the "not even wrong" category of statements.

The Wikipedia article is fairly useful: https://en.wikipedia.org/wiki/Countable_set

replies(1): >>Legion+Bm1
◧◩◪
16. agumon+pj1[view] [source] [discussion] 2024-10-13 18:35:52
>>usgrou+8z
Reminds me that Warren gave a talk about Prolog term domains to study resolution over infinite branches.
◧◩◪◨⬒
17. thethi+0k1[view] [source] [discussion] 2024-10-13 18:39:29
>>usgrou+NV
> "Completeness" is not about finishing in finite time, it also applies to completing in infinite time.

Can you point to a book or article where the definition of completeness allows infinite time? Every time I have encountered it, it is defined as finding a solution if there is one in finite time.

> No, breadth-first search is still complete given an infinite branching factor (i.e. a node with infinitely many children).

In my understanding, DFS is complete for finite-depth trees and BFS is complete for finite-branching trees, but neither is complete for infinitely branching, infinitely deep trees.

You would need an algorithm that iteratively deepens while exploring more children to be complete for the infinite-by-infinite trees. This is possible, but it is a little tricky to explain (see the sketch at the end of this comment).

For a proof that BFS is not complete if it must find any particular node in finite time: imagine there is a tree starting with node A that has children B_n for all n, and each B_n has a single child C_n. BFS searching for C_1 would have to explore all of the B_n before it could find it, so it would take infinite time before BFS finds C_1.
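A sketch of the deepening idea, with a hypothetical child/3 relating a node to its I-th child: at bound D, explore at most the first D children of each node, to depth at most D, and iterate D upward. Any node reachable through finitely many, finitely indexed choices is then found at some finite bound (with revisits, as usual for iterative deepening).

  search(Root, Node) :-
      between(1, inf, D),    % unbounded deepening (SWI-Prolog's between/3)
      bounded(Root, Node, D).

  bounded(Node, Node, _).
  bounded(P, Node, D) :-
      D > 0, D1 is D - 1,
      between(1, D, I),      % only the first D children at this level
      child(P, I, C),        % child/3 is hypothetical
      bounded(C, Node, D1).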

◧◩◪◨⬒
18. agumon+pl1[view] [source] [discussion] 2024-10-13 18:48:08
>>YeGobl+eT
IIRC Markus Triska showed a trick (with a nickname I forgot) to constrain the search space by embedding a variable length into the top-level goal.
replies(1): >>YeGobl+CG1
◧◩◪◨⬒⬓⬔
19. Legion+Bm1[view] [source] [discussion] 2024-10-13 18:56:48
>>usgrou+4h1
Yes, if you put two (or three, or countably many) countable sets together, you obtain a set that is also countable. The problem is, we want to explicitly describe a bijection between the combined set and the natural numbers, so that each element is visited at some time. Constructing such a bijection between the natural numbers and a countably-infinite tree is perfectly possible, but it's less trivial than just DFS or BFS.

If we're throwing around Wikipedia articles, I'd suggest a look at https://en.wikipedia.org/wiki/Order_type. Even if your set is countable, it's possible to iterate through its elements so that some are never reached, not after any length of time.

For instance, suppose I say, "I'm going to search through all positive odd numbers in order, then I'm going to search through all positive even numbers in order." (This has order type ω⋅2.) Then I'll never ever reach the number 2, since I'll be counting through odd numbers forever.

That's why it's important to order the elements in your search strategy so that each one is reached in a finite time. (This corresponds to having order type ω, the order type of the natural numbers.)
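To make the good case concrete, here is a fair, order-type-ω enumeration of pairs of naturals by increasing sum (a sketch using SWI-Prolog's between/3):

  pair(X, Y) :-
      between(0, inf, S),    % S = X + Y: finish each diagonal
      between(0, S, X),      % before starting the next one
      Y is S - X.

  ?- pair(X, Y).
  % X = 0, Y = 0 ; X = 0, Y = 1 ; X = 1, Y = 0 ; X = 0, Y = 2 ; ...
  % every pair appears after finitely many answers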

◧◩◪◨⬒⬓
20. YeGobl+CG1[view] [source] [discussion] 2024-10-13 21:31:34
>>agumon+pl1
I think what you mean is that he adds an argument that counts the times a goal is resolved with a clause, thus limiting the depth of resolution? That works, but you need to give a magic number as a resolution depth limit, and if the number is too small then your program fails to find a proof that it normally should be able to find. It's not a perfect solution.
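Something like this vanilla meta-interpreter with a depth counter, I mean (a sketch, not Markus' code; it assumes the object program's clauses are accessible to clause/2, e.g. declared dynamic, and D is the magic number):

  prove(true, _) :- !.
  prove((A, B), D) :- !, prove(A, D), prove(B, D).
  prove(Goal, D) :-
      D > 0, D1 is D - 1,
      clause(Goal, Body),
      prove(Body, D1).

  ?- prove(last(Ls, 3), 5).
  % succeeds only for proofs of depth at most 5, so the
  % left-recursive version of last/2 can no longer loop forever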
replies(1): >>agumon+MS1
21. crypto+1R1[view] [source] 2024-10-13 22:52:11
>>xelxeb+(OP)
Depth-first backtracking is trivial to implement, and it's trivial to blend into the language. Breadth-first backtracking is much harder to implement, and I'm not sure that it's trivial to build into a language (though this may be a lack of imagination).

Icon had both, but depth-first backtracking was the default and trivial to use, while breadth-first backtracking required using "co-expressions" (co-routines), though at least Icon had a trivial syntax for turning procedure argument expressions into co-expressions. But Icon's approach does not make breadth-first backtracking a first-class language feature like depth-first backtracking, and this is where my imagination gets stuck. To be fair, I've not truly thought much about this problem.

◧◩◪◨⬒⬓⬔
22. agumon+MS1[view] [source] [discussion] 2024-10-13 23:09:04
>>YeGobl+CG1
Yes, well, not so much a constant value. He added an unbound variable and it was enough to alter the search. Indeed it's still more of a trick, but it got me interested in whether there were other, more fundamental ideas beyond that.
replies(1): >>YeGobl+lT2
◧◩◪◨⬒⬓⬔⧯
23. YeGobl+lT2[view] [source] [discussion] 2024-10-14 11:19:40
>>agumon+MS1
That sounds like iterative deepening without an upper bound then. I guess that's possible. Maybe if you had a link to Markus' page I could have a look.

There are techniques to constrain the search space for _programs_ rather than proofs, that I know from Inductive Logic Programming (ILP), like Bottom Clause construction in Inverse Entailment, or the total ordering of the Herbrand Base in Meta-Interpretive Learning. It would be interesting to consider applying them to constrain the space of proofs in ordinary logic programming.

Refs for the above techniques are here but they're a bit difficult to read if you don't have a good background in ILP:

http://wp.doc.ic.ac.uk/arusso/wp-content/uploads/sites/47/20...

https://link.springer.com/content/pdf/10.1007/s10994-014-547...

replies(2): >>agumon+aD3 >>jodrel+JV4
◧◩◪◨⬒⬓⬔⧯▣
24. agumon+aD3[view] [source] [discussion] 2024-10-14 16:48:02
>>YeGobl+lT2
Thanks a lot, I'll add a comment with the video I had in mind soon.
◧◩◪◨⬒⬓⬔⧯▣
25. jodrel+JV4[view] [source] [discussion] 2024-10-15 02:13:47
>>YeGobl+lT2
> "Maybe if you had a link to Markus' page I could have a look."

e.g. here: https://www.metalevel.at/tist/ solving the Water Jugs problem (search on the page for "We use iterative deepening to find a shortest solution"), finding a list of moves emptying and filling jugs, and using `length(Ms, _)` to find shorter lists of moves first.

or here: https://www.metalevel.at/prolog/puzzles under "Wolf and Goat" he writes "You can use Prolog's built-in search strategy to search for a sequence of admissible state transitions that let you reach the desired target state. Use iterative deepening to find a shortest solution. In Prolog, you can easily obtain iterative deepening via length/2, which creates lists of increasing length on backtracking."
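In skeletal form, the pattern from those pages (moves/1 stands in for a problem-specific predicate):

  shortest_solution(Ms) :-
      length(Ms, _),    % backtracks through |Ms| = 0, 1, 2, ...
      moves(Ms).

  % shorter move sequences are tried first, so the first answer
  % uses a shortest list of moves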

replies(1): >>YeGobl+bi6
◧◩◪◨⬒⬓⬔⧯▣▦
26. YeGobl+bi6[view] [source] [discussion] 2024-10-15 15:17:53
>>jodrel+JV4
Thanks! Yes, it's iterative deepening without an upper bound. The trick is that iterative deepening is used to order the space of proofs so that the shortest proof (path through an SLD-tree) is found first. I use that approach often. The cool thing with it is that it doesn't stop at the first solution and will keep generating new solutions ordered by length of the proof as long as there are any. Sometimes you really want to know all the solutions.

There is a bit of a problem, in that if there is no solution the lack of an upper bound will cause the search to go on forever, or until the search space is exhausted, and you don't want that. If you use an upper bound, on the other hand, you may be cutting the search just short of finding the solution. It's another trade-off.

[go to top]