The server was deserializing untrusted input from the client directly into module+export name lookups, and then invoking whatever the client asked for (without verifying that metadata.name was an own property).
return moduleExports[metadata.name]
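A minimal sketch of the tightened lookup, assuming a hypothetical resolveExport helper (the actual code paths in React are more involved):

    // Hypothetical sketch, not the real implementation. Without the check,
    // metadata.name can resolve to inherited properties of the exports object
    // ("constructor", "hasOwnProperty", ...) rather than actual exports.
    function resolveExport(moduleExports, metadata) {
      if (!Object.hasOwn(moduleExports, metadata.name)) {
        throw new Error('Unknown export: ' + metadata.name);
      }
      return moduleExports[metadata.name];
    }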
We can patch hasOwnProperty and tighten the deserializer, but there is a deeper issue. React never really acknowledged that it was building an RPC layer. If you look at actual RPC frameworks like gRPC or even old-school SOAP, they all start with schemas, explicit service definitions, and a bunch of tooling to prevent boundary confusion. React went the opposite way: the API surface is whatever your bundler can see, and the endpoint is whatever the client asks for.

My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it’s trying to solve a problem category that traditionally requires explicitness, not magic.
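To make "explicitness" concrete, here is a rough sketch of schema-first dispatch in plain Node; the action names and validation are made up, but the point is that the callable surface is declared up front and everything else is rejected before any code lookup happens:

    // Illustrative only, not any particular framework's API.
    const service = {
      echo: {
        validate: (input) => typeof input?.message === 'string',
        handler: async ({ message }) => ({ message }),
      },
    };

    async function dispatch(action, payload) {
      if (!Object.hasOwn(service, action)) {
        throw new Error('Unknown action: ' + action);
      }
      const def = service[action];
      if (!def.validate(payload)) {
        throw new Error('Invalid payload for ' + action);
      }
      return def.handler(payload);
    }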
Building a private, out-of-date repo doesn't seem great either.
The problem is this specific "call whatever server code the client asks" pattern. Traditional APIs with defined endpoints don’t have that issue.
A similar bug could be introduced in the implementation of other RPC systems too. It's not entirely specific to this design.
(I contribute to React but not really on RSC.)
Architecturally, the attack surface in JavaScript at large appears to be growing increasingly insecure, driven by the insecurities in mandatory dependencies.
If the foundation and dependencies of React have vulnerabilities, React will have security issues, directly and indirectly.
This particular issue seems to be a head-scratcher. How could something so basic exist for so long?
Again, I ask about React and Next.js from their position of leadership in the JavaScript ecosystem. I don’t think this is a standard anyone wants.
Could code reviews be set up for LLMs to search for similar issues once a vulnerability like this is discovered in code?
Imagine these dozens of people working at Meta.
They sit at the table and agree to call eval() without thinking "what could go wrong?"
If you remember “mashups”, these were basically just exploiting the fact that you can load any code from any remote server and run it alongside your code and code from other servers, while sharing credentials between all of them. But hey, it is very useful to let Stripe run their stripe.js on your domain. And AdSense. And Mixpanel. And while we are at it, let’s let npm install 1000 packages for a single-dependency project. It’s bad.
The fact that React embodies an RPC scheme in disguise is quite obvious if you look at the kind of functionality that is implemented; some of it simply cannot be done any other way. But then you should own that decision and add all of the safeguards that such a mechanism requires; you can't bolt those on after the fact.
If I had a dollar for every time a serious vulnerability that started like this was discovered in the last 30 years...
i've been thinking basically this for so long, i'm kinda happy to be validated about this lol
> My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it’s trying to solve a problem category that traditionally requires explicitness, not magic.
Now I'm worried, but I don't use React. So I will have to ask: how does SvelteKit fare in this respect?
The vast majority of developers do not update their frameworks to the latest version, so this is something that will linger for years. Particularly if you're on Next something-like-12 and there are breaking changes on the way to 16 + patch.
OTOH this is great news for bad actors and pentesters.
JS/Node can do this via import() or require().
C, C++, Go, etc. can dynamically load plugins, and I would hope that people are careful when doing this with client-supplied data. There is a long history of vulnerabilities when dlopen and the rest of the dlfcn API are used unwisely, and Windows’s LoadLibrary has historical design errors that made it almost impossible to use safely.
Java finds code by name when deserializing objects, and Android has been pwned over and over as a result. Apple did the same thing in ObjC with similar results.
The moral is simple: NEVER use a language’s native module loader to load a module or call a function when the module name or function name comes from an untrusted source, regardless of how well you think you’ve sanitized it. ALWAYS use an explicit configuration that maps client inputs to code that it is permissible to load and call. The actual thing that is dynamically loaded should be a string literal or similar.
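In Node terms, the difference looks roughly like this (hypothetical action names and module paths, purely illustrative):

    // Anti-pattern: the client chooses what gets loaded and called, e.g.
    //   const mod = await import(req.body.moduleName);
    //   return mod[req.body.fnName](...req.body.args);
    //
    // Instead, map client inputs to code you chose ahead of time:
    const allowedActions = {
      createReport: { modulePath: './reports.js', fn: 'createReport' },
      deleteReport: { modulePath: './reports.js', fn: 'deleteReport' },
    };

    async function handle(action, args) {
      if (!Object.hasOwn(allowedActions, action)) {
        throw new Error('Unknown action: ' + action);
      }
      const { modulePath, fn } = allowedActions[action];
      const mod = await import(modulePath); // values come from our own table, never the client
      return mod[fn](...args);
    }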
I have a boring Python server I’ve maintained for years. It routes requests to modules, and the core is an extremely boring map from route name to the module that gets loaded and the function that gets called.