zlacker

LLM with Planning

submitted by mercat+(OP) on 2023-04-27 22:34:16 | 81 points 16 comments
[view article] [source] [links] [go to bottom]
replies(6): >>PaulHo+81 >>gdiamo+b3 >>behnam+64 >>coders+td >>i-use-+Bf >>sitkac+CO8
1. PaulHo+81[view] [source] 2023-04-27 22:44:00
>>mercat+(OP)
Sometimes I wonder whether text generation could be formulated as a planning/optimization problem, and whether that facility could solve embedded planning problems as a byproduct.
replies(2): >>qumpis+o7 >>YeGobl+Vx1
2. gdiamo+b3[view] [source] 2023-04-27 22:58:55
>>mercat+(OP)
How can you connect the results from the LLM into the planner? Do you build a parser? Do you fine-tune the LLM to adhere to a format that the planner understands?
replies(2): >>mhink+O5 >>kordle+PE1
3. behnam+64[view] [source] 2023-04-27 23:05:23
>>mercat+(OP)
I had a similar idea of converting LLM inputs into a Lisp-like language, so I was pleased to see the paper takes this approach.

The underlying idea, though, has been done before. Basically we know that LLMs can do better if instead of solving a problem on their own, they write a program that solves it (look up PAL).

replies(1): >>willia+65
4. willia+65[view] [source] [discussion] 2023-04-27 23:12:36
>>behnam+64
Here’s an example of a question-and-answer augmentation written in TypeScript that uses the exact approach used by PAL et al.:

https://github.com/williamcotton/transynthetical-engine

5. mhink+O5[view] [source] [discussion] 2023-04-27 23:18:04
>>gdiamo+b3
> Do you fine-tune the LLM to adhere to a format that the planner understands?

This one. It looks like they're using GPT-3 to translate the natural-language problem context and goal into a format called PDDL (Planning Domain Definition Language), then feeding the result into a separate program that generates a plan based on the context and goal.

With that in mind, the thing they're really testing here is how well GPT-3 can translate the natural-language prompt into PDDL, evaluated on whether the generated PDDL can actually solve the problem and how long the resulting solution takes.

Naturally, I could be wrong but that's at least what it looks like.
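
For the curious, a problem file in that format is quite compact. The snippet below is a hypothetical blocksworld-style problem of the kind the translation step would target (the domain and predicate names are illustrative, not taken from the paper):

```lisp
; A minimal, illustrative PDDL problem file: two blocks on a table,
; and the goal of stacking block a on block b.
(define (problem stack-two)
  (:domain blocksworld)          ; assumes a separate domain file
  (:objects a b)
  (:init (on-table a) (on-table b) (clear a) (clear b) (arm-empty))
  (:goal (on a b)))
```

A planner reads this together with a domain file (which defines the actions and their effects) and searches for a sequence of actions that reaches the goal.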

replies(1): >>YeGobl+Yz1
6. qumpis+o7[view] [source] [discussion] 2023-04-27 23:32:45
>>PaulHo+81
RL in ChatGPT is used for exactly that: generating text that maximizes a reward. But if you have other domains with their own reward functions, then you could plan over them.
replies(1): >>PaulHo+Xg
7. coders+td[view] [source] 2023-04-28 00:32:24
>>mercat+(OP)
Summary:

The paper introduces LLM+P, a framework that combines the strengths of classical planners with large language models (LLMs) to solve long-horizon planning problems. LLM+P takes in a natural language description of a planning problem, converts it into a PDDL file, leverages classical planners to find a solution, and then translates the solution back into natural language. The authors provide a set of benchmark problems and find that LLM+P is able to provide optimal solutions for most problems, while LLMs fail to provide even feasible plans for most problems. The paper suggests that LLM+P can be used as a natural language interface for giving tasks to robot systems. The authors also propose that classical planners can be another useful external module for improving the performance of downstream tasks of LLMs. The paper highlights the importance of providing context (i.e., an example problem and its corresponding problem PDDL) for in-context learning, and suggests future research directions to further extend the LLM+P framework.

PDDL: https://en.wikipedia.org/wiki/Planning_Domain_Definition_Lan...
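
For anyone wondering what that pipeline looks like in code, here is a minimal sketch of the loop described in the summary, with the LLM and planner calls stubbed out. All function names are hypothetical; a real implementation would call a model API and a classical planner such as Fast Downward:

```python
# A minimal sketch of the LLM+P loop. llm() and classical_planner() are
# placeholders returning canned output, purely to show the data flow.

def llm(prompt: str) -> str:
    """Stand-in for an LLM API call (e.g. GPT-3 via an SDK)."""
    return "(define (problem p1) ...)"  # pretend the model emits problem PDDL

def classical_planner(domain_pddl: str, problem_pddl: str) -> list[str]:
    """Stand-in for invoking a planner binary on the PDDL files."""
    return ["(pick-up a)", "(stack a b)"]  # a canned plan for illustration

def llm_translate_back(plan: list[str]) -> str:
    """Stand-in for asking the LLM to verbalize the symbolic plan."""
    return "First " + ", then ".join(s.strip("()") for s in plan) + "."

def llm_plus_p(nl_problem: str, domain_pddl: str, example: str) -> str:
    # 1. In-context translation: natural-language problem -> problem PDDL
    problem_pddl = llm(f"{example}\n\nTranslate to PDDL:\n{nl_problem}")
    # 2. Hand the PDDL off to a sound classical planner
    plan = classical_planner(domain_pddl, problem_pddl)
    # 3. Translate the symbolic plan back into natural language
    return llm_translate_back(plan)

print(llm_plus_p("Stack block a on block b.", "(define (domain ...))", "ex"))
```

The LLM only appears at the translation boundaries; all the actual search happens in the planner.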

8. i-use-+Bf[view] [source] 2023-04-28 00:52:47
>>mercat+(OP)
I’m wondering what PDDL is for, in practical terms.

Planning, yes, but that’s a verb that casts a very wide net.

When might one write PDDL? Be it specific tasks or industries it is used in, the examples I’ve found online all have a robotic theme, yet the idea seems much more general.

What do they do with it once they’ve written it?

What does it solve (as opposed to merely having a file that outlines objects, predicates, actions, etc.)?

replies(3): >>SOLAR_+Nw >>adarsh+Dx >>YeGobl+Vw1
9. PaulHo+Xg[view] [source] [discussion] 2023-04-28 01:09:21
>>qumpis+o7
My impression is that the complex optimization happens during training but that the actual inference is using some kind of greedy algorithm like beam search. If the inference algorithm was using simulated annealing or something like that that would be different.
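
To illustrate the distinction, here's a toy beam search over a made-up next-token distribution: it only ever extends the k best prefixes locally at each step, rather than optimizing the whole sequence globally (the vocabulary and probabilities are invented for the example):

```python
# Toy beam-search decoding. The "model" is a fixed table of probabilities,
# standing in for an LLM's next-token distribution.
import math

def toy_logprobs(prefix: tuple) -> dict:
    """Stand-in for a language model's next-token distribution."""
    vocab = {"a": 0.5, "b": 0.3, "<eos>": 0.2}
    return {t: math.log(p) for t, p in vocab.items()}

def beam_search(k: int = 2, max_len: int = 3) -> tuple:
    beams = [((), 0.0)]  # (token prefix, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == "<eos>":
                candidates.append((prefix, score))  # finished beam stays put
                continue
            for tok, lp in toy_logprobs(prefix).items():
                candidates.append((prefix + (tok,), score + lp))
        # Keep only the k highest-scoring prefixes: a local, greedy choice,
        # not a global optimization over all possible sequences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0][0]

print(beam_search())
```

Simulated annealing, by contrast, would repeatedly perturb a whole candidate sequence and sometimes accept worse ones, which is a very different search regime.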
10. SOLAR_+Nw[view] [source] [discussion] 2023-04-28 04:13:10
>>i-use-+Bf
It feels like one of those things that would be really interesting to just throw into the wild and see what comes of it. It's kind of hard to hypothesize with these things when the horizon of possibilities is so wide and the level of understanding is relatively low (compared to the model's perceived capabilities).
11. adarsh+Dx[view] [source] [discussion] 2023-04-28 04:24:06
>>i-use-+Bf
PDDL files do outline objects, predicates, actions, etc. in a machine-readable way, but in a much more expressive manner than can be done with something like JSON.

PDDL is designed to be machine readable, but also human-readable and writable. I would say you would write PDDL when you want to provide a description of the rules of a domain to an algorithm that does automated planning and acting. This could be an autonomous agent of any sort, doesn't necessarily have to be embodied/robotic in nature.
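
To make that concrete, here is a hypothetical fragment of a blocksworld-style domain (names are illustrative). The point is that an action carries preconditions and effects a planner can reason over, which plain JSON-style data has no standard way to express:

```lisp
(define (domain blocks)
  (:predicates (on ?x ?y) (on-table ?x) (clear ?x) (arm-empty) (holding ?x))
  ; An action is more than data: it states when it applies and what it changes.
  (:action pick-up
    :parameters (?x)
    :precondition (and (clear ?x) (on-table ?x) (arm-empty))
    :effect (and (holding ?x)
                 (not (on-table ?x)) (not (clear ?x)) (not (arm-empty)))))
```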

12. YeGobl+Vw1[view] [source] [discussion] 2023-04-28 13:50:08
>>i-use-+Bf
>> Planning, yes, but that’s a verb that casts a very wide net.

The article uses "planning" to mean "classical planning", which is a very specific thing. It's such a fundamental concept in AI research, though, that it is surprisingly difficult to find a simple definition: there's a lot of useless stuff on the internet about it, like tutorials that don't explain what it is they're tutorial-ing, or slides that don't give much context.

Even the Wikipedia article is not very well written. I followed this link to one of its references though and there's an entire textbook, available as a free pdf:

https://projects.laas.fr/planning/

In general, classical planning is one of those domains where GOFAI approaches continue to dominate over nouveau AI, statistical machine learning-based approaches. You'll have to take my word for that, though, because that's what I know from experience, and I don't have any references to back that up. On the other hand, if it wasn't the case, you wouldn't see papers like the one linked above, I suppose.

To clarify, the paper above makes it clear that LLMs, for one, are useless for planning, but at least they can translate between natural language and PDDL, so that a planning problem can be handed off to a classical planning engine, which can actually do the job. How useful that is, I don't know. A human expert would probably do a better job of writing PDDL from scratch, but that's never explored in the linked article.

13. YeGobl+Vx1[view] [source] [discussion] 2023-04-28 13:57:19
>>PaulHo+81
Yes, it seems like casting story-generation as a planning problem was a standard approach, at least in the recent past (I'm guessing everyone is turning to LLMs now):

Story planners start with the premise that the story generation process is a goal-driven process and apply some form of symbolic planner to the problem of generating a fabula. The plan is the story.

https://thegradient.pub/an-introduction-to-ai-story-generati...

As an aside, it is obvious from The Gradient article linked above that story generation was doing just fine until LLMs came along and claimed to do it right for the first time ever. I can see that the earlier approaches took some careful hand-engineering, but they also seemed to generate coherent stories more reliably (although it looks like maybe they didn't have very rich themes, development, etc.). But then, that's the trade-off between classical approaches and big machine learning: either you roll up those sleeves and use some elbow grease, or you label giant reams of data and pay the giant price of the compute needed to train on them. In a sense, the claimed advance of deep learning is that domain experts can be replaced by cheaply paid inexpert labellers, plus some very big GPU clusters.

14. YeGobl+Yz1[view] [source] [discussion] 2023-04-28 14:10:08
>>mhink+O5
There is no fine-tuning; they simply use prompt engineering. See Section 3, "Method": they have a short, easy-to-grok motivating example, and I don't think you need to be an expert in planning to see what they do.

To summarise, they assume a human expert can provide a domain description specifying all actions that can be taken in each situation, along with their effects. They then include that domain description in the prompt, together with an example of the kind of planning task they want solved, and get the LLM to generate PDDL in the context of the prompt.
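
A sketch of what such a prompt might look like when assembled (the strings are placeholders, not the paper's actual prompts):

```python
# Hypothetical prompt assembly for the no-fine-tuning approach described
# above: domain description + one worked example + the new task.
def build_prompt(domain_pddl: str, example_nl: str, example_pddl: str,
                 task_nl: str) -> str:
    return (
        f"Domain:\n{domain_pddl}\n\n"
        f"Example problem: {example_nl}\n"
        f"Example problem PDDL:\n{example_pddl}\n\n"
        f"New problem: {task_nl}\n"
        "New problem PDDL:\n"  # the LLM completes from here
    )

print(build_prompt("(define (domain blocks) ...)",
                   "Put block a on block b.",
                   "(define (problem p0) ...)",
                   "Stack three blocks into a tower."))
```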

15. kordle+PE1[view] [source] [discussion] 2023-04-28 14:35:15
>>gdiamo+b3
> How does GPT-4 achieve in-context learning without finetuning its parameters?

GPT-4 can use its ability to encode problems in PDDL and in-context learning to infer the problem PDDL file corresponding to a given problem (P). This can be done by providing the model with a minimal example that demonstrates what a correct problem PDDL looks like for a simple problem in the domain, as well as a problem description in natural language and its corresponding problem PDDL. This allows the model to leverage its ability to perform unseen downstream tasks without having to finetune its parameters.

16. sitkac+CO8[view] [source] 2023-05-01 05:03:15
>>mercat+(OP)
There is a link in the paper to an introduction to PDDL which is broken; it should point to https://www.cs.toronto.edu/~sheila/2542/s14/A1/introtopddl2....
[go to top]