I think you're getting my point confused with a tangentially related one. Your point may be "chatbots shouldn't be legally binding" and I would tend to agree. But my point was that simply throwing a disclaimer on it may not be the best way to get there.
Consider a poison control center that uses a chatbot to answer phone calls and give advice. It can't waive its responsibility by just slapping a disclaimer on the service; a disclaimer alone doesn't satisfy the strict liability standards governing the kind of duty owed in that context. There is such a thing in law as "duty creep," and liability may attach if a jury finds it a reasonable expectation that a chatbot will provide accurate answers. That's my point: the duty is going to be largely context-dependent, which means broad-brush, superficial "solutions" probably aren't sufficient.