You said 50-1000 examples.
Should I fine-tune when I have specific Q&A sets, e.g. from real customers, and I want to teach the model the right answers?
Should I fine-tune facts into the model, or use some kind of lookup instead?
Does it make sense to add code and API docs for the current version of something I want better support for? For example, ChatGPT knows Quarkus 2 but not Quarkus 3.
In general, fine-tuning helps a model figure out how to do the exact task that is being done in the examples it's given. So fine-tuning it on 1000 examples of an API being used in the wild is likely to teach it to use that API really effectively, but fine-tuning it on just the API docs probably won't.
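For concreteness, a fine-tuning dataset of usage examples is typically a list of prompt/completion (or chat-message) pairs showing the API being used, not the raw documentation. Here is a minimal sketch in the JSONL chat format many fine-tuning APIs accept; the Quarkus 3 snippet is an illustrative placeholder, not verified code:

```python
import json

# A sketch of a fine-tuning dataset: each example pairs a realistic request
# with the answer you want the model to give. The Quarkus 3 content below is
# illustrative only (Quarkus 3 moved from javax.* to jakarta.* namespaces).
examples = [
    {
        "messages": [
            {"role": "user", "content": "How do I define a REST endpoint in Quarkus 3?"},
            {"role": "assistant", "content": (
                "Use jakarta.ws.rs annotations:\n"
                "@Path(\"/hello\")\n"
                "public class HelloResource {\n"
                "    @GET\n"
                "    public String hello() { return \"hello\"; }\n"
                "}"
            )},
        ]
    },
    # ... hundreds more examples of the API being used in the wild ...
]

# Write the dataset as JSONL, one training example per line.
with open("quarkus3_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```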
That said, there are a lot of interesting ideas floating around on how to most effectively teach a model purely from instructions like API docs. Powerful models like GPT-4 can figure it out from in-context learning (i.e. if you paste in a page of API docs and ask GPT-4 to write something with the API, it can usually do a decent job). I suspect the community will figure out techniques, either through new training objectives or synthetic training data, to do this for smaller fine-tuned models as well.
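For the in-context route, the pattern is simply to put the current docs in the prompt and ask for code against them. A rough sketch using the OpenAI Python client, where the docs file name is a placeholder for whatever documentation you have on hand:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# In-context learning: paste the up-to-date API docs straight into the prompt.
# "quarkus3_rest_docs.md" is a hypothetical local file, not a real artifact.
with open("quarkus3_rest_docs.md") as f:
    api_docs = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # a strong model; smaller models often can't do this reliably
    messages=[
        {"role": "system", "content": "Answer using only the API documentation provided."},
        {"role": "user", "content": (
            f"API docs:\n{api_docs}\n\n"
            "Write a Quarkus 3 REST endpoint that returns JSON."
        )},
    ],
)
print(response.choices[0].message.content)
```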