I'm not normally a stickler for HN's rule about title preservation, but in this case the "in distributed systems" part is crucial, because IMO the urge to use the actor model (and its relative, CSP) in non-distributed systems solely to achieve concurrency has been a massive boondoggle and a huge dead end. Which is to say: if you're within a single process, what you want is structured concurrency ( https://vorpus.org/blog/notes-on-structured-concurrency-or-g... ), not the unstructured concurrency that is inherent to a distributed system.
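For concreteness, here's a minimal sketch of what that looks like inside one process, using Go's errgroup as a stand-in for the nurseries the linked post describes (fetch/fetchBoth are made-up names, not from anything above):

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    // fetch is a hypothetical stand-in for real I/O; it only honours cancellation.
    func fetch(ctx context.Context, name string) (string, error) {
        select {
        case <-ctx.Done():
            return "", ctx.Err()
        default:
            return "result " + name, nil
        }
    }

    // fetchBoth scopes two child tasks: g.Wait() joins both before returning,
    // so neither goroutine can outlive the call, and an error in one cancels
    // the context seen by the other.
    func fetchBoth(ctx context.Context) (a, b string, err error) {
        g, gctx := errgroup.WithContext(ctx)
        g.Go(func() error {
            var e error
            a, e = fetch(gctx, "A")
            return e
        })
        g.Go(func() error {
            var e error
            b, e = fetch(gctx, "B")
            return e
        })
        return a, b, g.Wait()
    }

    func main() {
        fmt.Println(fetchBoth(context.Background()))
    }

The point is that concurrency stays scoped to a call tree, rather than being a soup of independently messaging processes, which is what a distributed system forces on you anyway.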
Using actors also greatly simplified other parts of the app.
Extending it, however, reveals some benefits: locking usually means stopping until the lock is free, whereas waiting for something enqueued can happen in parallel with waiting for something else that is enqueued. A rough sketch of the difference is below.
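Something like this (Go, names purely illustrative): both requests are enqueued into two actors' mailboxes first, and only then do we wait, so the two pieces of work overlap; taking a lock per resource would have us stop at each one in turn.

    package main

    import "fmt"

    // request carries work to an actor plus a channel for the reply.
    type request struct {
        n     int
        reply chan int
    }

    // actor owns its state exclusively and serves requests from its mailbox,
    // so no lock is needed around that state.
    func actor(mailbox <-chan request) {
        for req := range mailbox {
            req.reply <- req.n * req.n // pretend this is slow work
        }
    }

    func main() {
        boxA := make(chan request, 1)
        boxB := make(chan request, 1)
        go actor(boxA)
        go actor(boxB)

        // Enqueue both requests first...
        ra := make(chan int, 1)
        rb := make(chan int, 1)
        boxA <- request{n: 3, reply: ra}
        boxB <- request{n: 4, reply: rb}

        // ...then wait; both actors proceed in parallel while we block here,
        // whereas locking each resource in turn would serialise the waiting.
        fmt.Println(<-ra, <-rb)
    }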
I think it very much comes down to history and philosophy. Actors are philosophically cleaner (and have gained popularity with success stories), but back in the 90s, when computers were mostly physically single-threaded and memory was scarce, the mutex looked like a "cheap, good choice" for "all" multithreading issues: it could be a simple lock word, whilst actors would need mailbox buffering (allocations... brr) and the like, which felt "bloated". (In the end, it turned out that separate heavyweight OS-supported threads were often the bottleneck once thread and core counts grew.)
Mutexes are still quite often the base primitive at the bottom of lower-level implementations when compare-and-swap isn't enough, whilst actors are generally a higher-level abstraction (better suited to "general" programming).
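A small illustration of that split (Go, hypothetical types): a single-word counter can be maintained with a compare-and-swap loop, but an update that touches two fields together has no single CAS that covers it, so a mutex is the usual fallback.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    // A single word: a compare-and-swap retry loop is enough, no lock needed.
    var hits int64

    func incr() {
        for {
            old := atomic.LoadInt64(&hits)
            if atomic.CompareAndSwapInt64(&hits, old, old+1) {
                return
            }
        }
    }

    // Two fields that must change together: one CAS cannot cover a
    // multi-word update, so a mutex sits underneath.
    type Stats struct {
        mu    sync.Mutex
        count int64
        sum   int64
    }

    func (s *Stats) Add(v int64) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.count++
        s.sum += v
    }

    func main() {
        incr()
        s := &Stats{}
        s.Add(42)
        fmt.Println(atomic.LoadInt64(&hits), s.count, s.sum)
    }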