I’ve never worked on anything large in software where the time it will take can be deduced with the accuracy some people here seem to assume possible. The number of unknown-unknowns is always far too large, and the process of discovery is itself extremely time-consuming. It usually requires multiple rounds of prototypes, and each prototype in turn requires absorbing a massive amount of information just to mine for the work that hasn’t been discovered yet.
The best you can do is set reasonable expectations with stakeholders around:
- what level of confidence you have in estimates at any point in time
- what work could uncover unknowns and reduce uncertainty (prototypes, experiments, hacks, hiring the right consultant, etc.), and whether that work is actually resourced
- what the contingency plans are if new work is discovered (cutting specific scope, moving in more people (who are hopefully somewhat ramped up), or pushing out timelines)
But I've also seen things like the ZFS team at Sun deliver something unbelievably good that started as a skunkworks project that senior management didn't really know about until there was enough to show to justify a large investment. Sun was like DARPA: not micromanaged from the top. Sun failed, of course, but not because of this.