It's possible that an LLM that's been trained on enough examples, and that's smart enough, could actually do this. But I'm not sure how you'd review the output to know if it's right. The LLM doesn't have to be much faster than you to overwhelm your capacity to review the results.