While this isn't used for LLM training specifically, it can still involve aggregating insights from customer behaviour.
Merely using an LLM for inference does not train it on your prompts and data, contrary to a common assumption. The separation between training and inference is surprisingly poorly understood, even on technical forums like HN.
However, suppose I record human interactions with my app: for example, when a user accepts or rejects an AI-synthesised answer.
I can then use that data to influence an LLM's output via RAG, or to alter the application's behaviour directly.
It won't change the model's weights, but it will shape its behaviour.
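
Here's a minimal sketch of that feedback loop in Python. Everything in it is hypothetical: the `FeedbackStore` class, the keyword-overlap retrieval, and `build_prompt` are stand-ins for what a real system would do with a vector store and embedding-based similarity. The point is only that user accept/reject signals steer the prompt, not the weights.

```python
# Hypothetical sketch: record accept/reject signals, then retrieve
# previously accepted answers to steer future prompts (RAG-style).
# A real system would use a vector store and embedding similarity
# instead of the toy keyword overlap used here.

from dataclasses import dataclass, field


@dataclass
class FeedbackStore:
    """In-memory store of (question, answer, accepted) records."""
    records: list = field(default_factory=list)

    def log(self, question: str, answer: str, accepted: bool) -> None:
        self.records.append((question, answer, accepted))

    def retrieve_accepted(self, question: str, k: int = 3) -> list:
        """Return up to k previously accepted answers whose questions
        share the most words with the new question (toy similarity)."""
        q_words = set(question.lower().split())
        scored = [
            (len(q_words & set(q.lower().split())), a)
            for q, a, accepted in self.records
            if accepted
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [a for score, a in scored[:k] if score > 0]


def build_prompt(store: FeedbackStore, question: str) -> str:
    """Inject accepted past answers as context. The model's weights
    are untouched; its output is steered by what users approved."""
    context = store.retrieve_accepted(question)
    header = "\n".join(f"- {c}" for c in context) or "(no prior feedback)"
    return f"Previously accepted answers:\n{header}\n\nQuestion: {question}"


if __name__ == "__main__":
    store = FeedbackStore()
    store.log("How do I reset my password?", "Use the 'Forgot password' link.", True)
    store.log("How do I reset my password?", "Email support and wait.", False)
    print(build_prompt(store, "I need to reset my password"))
```

The rejected answer never makes it back into a prompt, so the app's behaviour improves over time even though no fine-tuning ever happens.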