In general, fine-tuning helps a model learn to do the exact task demonstrated in the examples it's given. So fine-tuning it on 1000 examples of an API being used in the wild is likely to teach it to use that API really effectively, but fine-tuning it on just the API docs probably won't.
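To make that concrete, here's a minimal sketch of the kind of fine-tuning set that tends to work: real usage examples rather than the docs. The `weatherapi` calls, prompts, and file name are all made up for illustration; the chat-style JSONL layout is the format most fine-tuning endpoints accept, but check the one you're targeting.

```python
import json

# Hypothetical examples of the API "being used in the wild": each pairs a
# natural-language request with code that actually calls the API.
examples = [
    {
        "prompt": "Get tomorrow's forecast for Berlin using weatherapi.",
        "completion": (
            "import weatherapi\n\n"
            "client = weatherapi.Client(api_key=API_KEY)\n"
            "print(client.get_forecast(city=\"Berlin\", days=1))"
        ),
    },
    {
        "prompt": "Print the last 7 days of rainfall for Lagos.",
        "completion": (
            "import weatherapi\n\n"
            "client = weatherapi.Client(api_key=API_KEY)\n"
            "print(client.history(city=\"Lagos\", days=7, metric=\"rain_mm\"))"
        ),
    },
    # ...on the order of 1000 of these, scraped or hand-written
]

# Write the examples out as chat-style JSONL records.
with open("api_usage_finetune.jsonl", "w") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "user", "content": ex["prompt"]},
                {"role": "assistant", "content": ex["completion"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```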
That said, there are a lot of interesting ideas floating around on how to most effectively teach a model purely from instructions like API docs. Powerful models like GPT-4 can figure it out from in-context learning (i.e. if you paste in a page of API docs and ask GPT-4 to write something with the API, it can usually do a decent job). I suspect the community will figure out techniques, whether new training objectives or synthetic training data, that get smaller fine-tuned models there as well.
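The in-context version really is just pasting the docs into the prompt. A minimal sketch, assuming the OpenAI Python client (v1+) with an API key in the environment; the `weatherapi` docs string is a stand-in for a real page of documentation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder: in practice this would be a full page of real API docs.
api_docs = """
weatherapi.Client(api_key) -> Client
Client.get_forecast(city: str, days: int) -> dict
Client.history(city: str, days: int, metric: str) -> list[dict]
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Write Python code using only the API described in the provided docs.",
        },
        {
            "role": "user",
            "content": f"API docs:\n{api_docs}\n\nWrite a script that prints a 3-day forecast for Nairobi.",
        },
    ],
)
print(response.choices[0].message.content)
```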