> My other thoughts to extend this are that you could make it seamless. To start, it'll simply pipe the user's requests to OpenAI or their existing model, so it'd be a drop-in replacement. Then, every so often, it'll offer the user: "hey, we think there's enough data at this point that a fine-tune might save you approx. $x/month based on your current calls; click the button to start the fine-tune and we'll email you once we have the results." The user then gets an email: "here are the results; based on them we recommend switching, click here to switch to calling your fine-tuned model."
You just described our short-term roadmap. :) Currently an OpenPipe user has to explicitly kick off a fine-tuning job, but those jobs are so cheap to run that we're planning to let users opt in to running them proactively once they have enough data, so we can provide exactly that experience.
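To make the "drop-in replacement" idea concrete, here's a minimal sketch in Python assuming the current OpenAI SDK: the client keeps its existing call sites and simply points at a proxy endpoint that relays requests upstream while logging them as future fine-tuning data. The `base_url` below is hypothetical and just stands in for whatever relay endpoint would do the logging; it is not a documented OpenPipe URL.

```python
# Sketch of a drop-in proxy: only the client's base_url changes, the
# application code calling the chat completions API stays the same.
from openai import OpenAI

client = OpenAI(
    base_url="https://proxy.example.com/v1",  # hypothetical relay endpoint
    api_key="YOUR_KEY",
)

# The call looks identical to a direct OpenAI call; the proxy forwards it
# and records the request/response pair until there's enough data to
# suggest (and eventually run) a fine-tune.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Because the OpenAI SDK already accepts a custom `base_url`, the switch to a fine-tuned model later could be as small as the proxy routing the same requests to the new model behind the scenes.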