This is really impressive work. Coverage-guided, and especially directed, fuzzing can be extremely difficult. It's mentioned that fuzzing is not a dumb technique; I think the classical idea is kind of dumb, in the sense of 'dumb fuzzers', but there is so much intelligence built around and poured into it these days that I've always thought it has moved well beyond the classic idea of fuzz testing. I had colleagues who poured their souls into using git commit info and the like to flag potentially bad code paths, and then coverage-guided fuzzing to try to get in there. I really like the little note at the bottom about this. Adding such layers does make it lean towards machine learning nowadays, and I'd argue 'fuzzing' is perhaps not the right term anymore; I don't think many people are still simply generating random inputs and trying to crash programs like that.
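For what it's worth, here's a minimal sketch of what I mean by the git-guided idea, assuming you already have a seed-to-covered-files mapping from a coverage run. Every name here (`get_changed_files`, `Seed`, `directed_score`, and so on) is hypothetical, not any real fuzzer's API; it just illustrates prioritizing seeds that reach recently changed code, in the spirit of directed fuzzers like AFLGo:

```python
# Hypothetical sketch: prioritize fuzzing seeds whose coverage overlaps
# recently changed files. All names are illustrative, not a real fuzzer's API.
import subprocess
from dataclasses import dataclass, field


def get_changed_files(repo_path: str, since: str = "HEAD~20") -> set[str]:
    """Files touched in recent commits -- candidate 'interesting' targets."""
    out = subprocess.run(
        ["git", "-C", repo_path, "diff", "--name-only", since, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return {line for line in out.stdout.splitlines() if line}


@dataclass
class Seed:
    data: bytes
    covered_files: set[str] = field(default_factory=set)  # from a coverage run


def directed_score(seed: Seed, hot_files: set[str]) -> float:
    """Fraction of a seed's covered files that were recently changed."""
    if not seed.covered_files:
        return 0.0
    return len(seed.covered_files & hot_files) / len(seed.covered_files)


def schedule(queue: list[Seed], hot_files: set[str]) -> list[Seed]:
    """Fuzz seeds that reach recently changed code first."""
    return sorted(queue, key=lambda s: directed_score(s, hot_files), reverse=True)
```

A real implementation would of course work at function or basic-block granularity rather than whole files, but the scheduling idea is the same.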
This is really exciting new progress in this field. Well done! I can't wait to see what new tools and techniques will come out of all of this research.
Would you be open to implementing something around LibAFL or AFL++, perhaps? I remember we worked with those extensively. Since a lot of shops already use them, it might be cool to look at integrating into such tools, or do you think this deviates so far that it would amount to an entirely new kind of tool? Also, the work on datasets might be really valuable to other researchers. There was a mention of wasted work, but labeled datasets of CVE, bug, and patch commits can help a lot of folks if there's new data in there.
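To make the dataset point concrete, I'm imagining records along these lines; the field names are purely my own guess at a schema, not anything from the writeup:

```python
# Purely illustrative: the kind of labeled record a CVE/bug/patch-commit
# dataset might contain. Field names are my guess, not the authors' schema.
from dataclasses import dataclass
from typing import Optional


@dataclass
class LabeledCommit:
    repo_url: str                    # project the commits belong to
    cve_id: Optional[str]            # CVE identifier, None for plain bugs
    bug_commit: Optional[str]        # hash that introduced the bug, if known
    fix_commit: str                  # hash of the patch commit
    crash_input_path: Optional[str]  # reproducer input, when one exists
```

Even a modest corpus of records like that, with reproducers, would save other teams a lot of the wasted work you mention.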
This kind of makes me miss having my head in this space :D Cool stuff, and massive congrats on being finalists. Thanks for the extensive writeup!