There are so many backend endpoints in the wild that do a bunch of things in a loop (many of them requiring I/O or calls to slow external services), transform the results with arbitrary code, and then return the combined result to the original requestor. How do you do that in a minimal number of readable lines? Right now, the easiest answer is to give up on doing it in dataflow: define a function in an imperative language, maybe have it run some things locally in parallel with green threads (Node.js does this inherently, and Python+gevent makes it quite fluent as well), and by the end of that function you have both the context of the original request and the results of your queries.
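To make that concrete, here's a minimal sketch of that imperative shape with gevent. The internal service URLs, the rank() transform, and the request dict are all hypothetical stand-ins for whatever your endpoint actually does:

```python
# Minimal sketch of the imperative approach, using gevent green threads.
import gevent
from gevent import monkey

monkey.patch_all()  # make blocking I/O (e.g. requests) cooperative

import requests


def rank(items, profile, ads):
    # Placeholder for the "arbitrary transform" step: drop muted authors,
    # then append a couple of ads.
    muted = set(profile.get("muted", []))
    return [i for i in items if i.get("author") not in muted] + ads[:2]


def get_feed(request):
    user_id = request["user_id"]

    # Fan out the slow external calls in parallel on green threads.
    jobs = [
        gevent.spawn(requests.get, f"https://profiles.internal/{user_id}"),
        gevent.spawn(requests.get, f"https://timeline.internal/{user_id}"),
        gevent.spawn(requests.get, f"https://ads.internal/{user_id}"),
    ]
    gevent.joinall(jobs)
    profile, timeline, ads = (job.value.json() for job in jobs)

    # By this point `request` is still in scope alongside every result,
    # so correlating them is just ordinary local code.
    return {"request_id": request["id"], "items": rank(timeline, profile, ads)}
```

The whole appeal is that last comment: nothing ever leaves the function's scope, so "track the request context" is free.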
But there's a duality between "request my feed" and "materialize/cache the most complex/common feeds" that's not taken into account here. The request itself is an event that should kick off a set of view updates, not necessarily on the same machine, whose results can then be re-correlated with the original request. And to do that, you need a way of declaring a pipeline and tracking context through that pipeline.
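Here's a toy sketch of that duality, assuming an in-process queue stands in for the cross-machine pipeline; every name here is hypothetical, and the point is only how a request_id carried through the pipeline lets the view update find its way back to the waiting request:

```python
# Toy sketch: the request is an event; a pipeline stage updates a
# materialized view; a correlation id re-joins the update to the request.
import uuid

import gevent
from gevent.queue import Queue
from gevent.event import AsyncResult

updates = Queue()  # stage input: "a feed request happened"
pending = {}       # request_id -> AsyncResult, for re-correlation


def view_updater():
    # Pipeline stage: consume request events, refresh the cached view,
    # and hand the refreshed view back to whoever awaits that request.
    feed_view = {}  # user_id -> cached feed (the "materialized view")
    while True:
        event = updates.get()
        user_id = event["user_id"]
        feed_view[user_id] = feed_view.get(user_id, []) + ["new-item"]
        pending.pop(event["request_id"]).set(feed_view[user_id])


def request_my_feed(user_id):
    # The request is itself an event pushed into the pipeline...
    request_id = uuid.uuid4().hex
    result = pending[request_id] = AsyncResult()
    updates.put({"request_id": request_id, "user_id": user_id})
    # ...and the handler just waits for the pipeline to answer it.
    return result.get(timeout=5)


gevent.spawn(view_updater)
print(request_my_feed("alice"))
```

In a real deployment the queue would be a log or message bus and view_updater would live on another machine; the correlation-id bookkeeping is exactly the part you want a framework to declare for you.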
https://materialize.com is a really interesting approach here: it lets you describe all of this in SQL as a pipeline of materialized views that update in real time, and compiles that down to dataflow. But most programmers don't naturally describe this kind of business logic in SQL.
Rama's CPS assignment syntax is really cool in this context. I do wish we could go beyond "this unlocks an entire paradigm for people who know Clojure" towards "this unlocks an entire paradigm for people who only know JavaScript/Python" - but it's a massive step in the right direction!