> A pre-authentication remote code execution vulnerability exists in React Server Components versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0 including the following packages: react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack. The vulnerable code unsafely deserializes payloads from HTTP requests to Server Function endpoints.
React's own words: https://react.dev/blog/2025/12/03/critical-security-vulnerab...
> React Server Functions allow a client to call a function on a server. React provides integration points and tools that frameworks and bundlers use to help React code run on both the client and the server. React translates requests on the client into HTTP requests which are forwarded to a server. On the server, React translates the HTTP request into a function call and returns the needed data to the client.
> An unauthenticated attacker could craft a malicious HTTP request to any Server Function endpoint that, when deserialized by React, achieves remote code execution on the server. Further details of the vulnerability will be provided after the rollout of the fix is complete.
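To make the quoted mechanism concrete, here is a minimal sketch (hypothetical, not React's actual wire format or code) of the server-function pattern being described: the client POSTs a function name plus serialized arguments, and the server looks the name up in a registry and dispatches the call. The attack surface is exactly this lookup-and-deserialize step.

```javascript
// Hypothetical sketch of a server-function endpoint, NOT React's real code.
const registry = { add: (a, b) => a + b }; // functions the app exposes to clients

function handleRequest(body) {
  const { name, args } = JSON.parse(body); // attacker-controlled payload
  const fn = registry[name];
  if (typeof fn !== "function") throw new Error("unknown function");
  return JSON.stringify(fn(...args));
}

console.log(handleRequest('{"name":"add","args":[2,3]}')); // "5"
```

Everything in `body` is attacker-controlled, so both the name lookup and the argument deserialization have to be airtight.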
https://github.com/facebook/react/commit/bbed0b0ee64b89353a4...
and it looks like it's been squashed with some other changes to obscure it, or maybe there are other problems as well.
This pattern appears 4 times and looks like it is restricting the exposed functions to a 'whitelist'. I presume the modules have dangerous functions in the prototype chain and clients were able to invoke them.
- return moduleExports[metadata.name];
+ if (hasOwnProperty.call(moduleExports, metadata.name)) {
+ return moduleExports[metadata.name];
+ }
+ return (undefined: any);
> Experimental React Flight bindings for DOM using Webpack.
> Use it at your own risk.
311,955 weekly downloads though :-|
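The guard in the diff above can be sketched in isolation. Without an own-property check, a bare `moduleExports[name]` lookup also resolves names inherited from `Object.prototype`, so clients could reach functions the module never exported (this is a generic illustration, not React's actual module shape):

```javascript
// Generic illustration of why the hasOwnProperty guard in the diff matters.
const moduleExports = { safeAction() { return "ok"; } };

// Without the guard, inherited names resolve via the prototype chain:
console.log(typeof moduleExports["constructor"]); // "function"

// With the guard, only the module's own exports are reachable:
function lookup(name) {
  if (Object.prototype.hasOwnProperty.call(moduleExports, name)) {
    return moduleExports[name];
  }
  return undefined;
}

console.log(typeof lookup("constructor")); // "undefined"
console.log(typeof lookup("safeAction"));  // "function"
```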
https://vercel.com/changelog/cve-2025-55182
> Cloudflare WAF proactively protects against React vulnerability
It's totally fine to say you don't understand why they have benefits, but it really irks me when people exclaim they have no value or exist just for complexity's sake. There's no system for web development that provides the developer with more grounded flexibility than RSCs. I wrote a blog post about this[0].
To answer your question, htmx solves this by leaning on the server immensely. It doesn't provide a complete client-side framework for when you need one. RSCs allow both the server and the client to co-exist, simply composing between the two while maintaining the full power of each.
[0] https://saewitz.com/server-components-give-you-optionality
and Deno Deploy/Subhosting: https://deno.com/blog/react-server-functions-rce
To be fair, they also haven't released (even experimental) RSC support yet, so maybe they lucked out on timing here.
> Even in systems that prevent server functions from being declared in client code (such as "use server" in React Server Components), experienced developers can be caught out. We prefer a design that emphasises the public nature of remote functions rather than the fact that they run on the server, and avoids any confusion around lexical scope. [0]
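The lexical-scope confusion the quote warns about can be sketched roughly like this (a hypothetical illustration with a fake in-memory `db`, not real RSC plumbing): a `"use server"` function declared inside a component reads as if it were private to that render, but the generated endpoint is publicly callable with arbitrary arguments.

```javascript
// Hypothetical sketch of the "use server" lexical-scope footgun.
const db = {
  users: new Set(["alice", "bob"]),
  deleteUser(id) { this.users.delete(id); },
};

function Page({ user }) {
  // Feels scoped to this render of `user`'s page, but the framework
  // registers it as a public endpoint any client can invoke:
  async function deleteAccount(id) {
    "use server";
    db.deleteUser(id); // no authorization check -> exploitable
  }
  return deleteAccount;
}

// An attacker effectively does this, regardless of who rendered the page:
const action = Page({ user: "alice" });
action("bob");
console.log(db.users.has("bob")); // false -- bob was deleted
```

Emphasising "this is a public remote function" rather than "this runs on the server" makes the missing authorization check much harder to overlook.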
https://www.arrow-js.com/docs/
It makes it easy to have a central JSON-like state object representing what's on the page, then have components watch that for changes and re-render. That avoids the opaqueness of Redux and promise chains, which can be difficult to examine and debug (unless we add browser extensions for that stuff, which feels like a code smell).
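The watch-a-central-state-object pattern described above can be sketched generically (this is not ArrowJS's actual API, just the underlying idea: a proxied state object that notifies subscribers on mutation):

```javascript
// Generic sketch of reactive central state, NOT ArrowJS's real API.
function observable(initial) {
  const listeners = new Set();
  const state = new Proxy({ ...initial }, {
    set(target, key, value) {
      target[key] = value;
      listeners.forEach((fn) => fn(target)); // notify watchers on every change
      return true;
    },
  });
  return { state, watch: (fn) => listeners.add(fn) };
}

const { state, watch } = observable({ count: 0 });
let rendered = "";
watch((s) => { rendered = `count is ${s.count}`; }); // "component" re-render
state.count = 1;
console.log(rendered); // "count is 1"
```

The appeal is that the whole page's state is one inspectable JSON-like object, rather than being threaded through stores and promise chains.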
I've also heard good things about Astro, which can wrap components written in other frameworks (like React) so that a total rewrite can be avoided:
https://docs.astro.build/en/guides/imports/
I'm way outside my wheelhouse on this as a backend developer, so if anyone knows the actual names of the frameworks I'm trying to remember (hah), please let us know.
IMHO React creates far more problems than it solves:
- Virtual DOM: just use Facebook's vast budget to fix the browser's DOM so it renders 1000 fps using the GPU, memoization, caching, etc and then add the HTML parsing cruft over that
- Redux: doesn't actually solve state transfer between backend and frontend like, say, Firebase
- JSX: do we really need this when Javascript has template literals now?
- Routing: so much work to make permalinks when file-based URLs already worked fine 30 years ago and the browser was the V in MVC
- Components: steep learning curve (but why?) and they didn't even bother to implement hooks for class components, instead putting that work onto users, and don't tell us that's hard when packages like react-universal-hooks and react-hookable-component do it
- Endless browser console warnings about render changing state and other errata: just design a unidirectional data flow that detects infinite loops so that this scenario isn't possible
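On the JSX point in the list above, a rough sketch of the template-literal alternative (plain functions returning HTML strings, no compile step; assume real code would also escape user input):

```javascript
// Sketch of templating with plain template literals instead of JSX.
// Note: real code must escape `name` to avoid HTML injection.
const Item = (name) => `<li>${name}</li>`;
const List = (items) => `<ul>${items.map(Item).join("")}</ul>`;

console.log(List(["a", "b"])); // <ul><li>a</li><li>b</li></ul>
```

What this gives up versus JSX is compile-time checking and the component/diffing model; what it gains is zero build tooling.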
I'll just stop there. The more I learn about React, the less I like it. That's one of the primary ways that I know that there's no there there when learning new tools. I also had the same experience with the magic convention over configuration in Ruby.

What's really going on here, and what I would like to work on if I ever win the internet lottery (unlikely now with the arrival of AI since app sales will soon plummet along with website traffic), is a distributed logic flow. In other words, a framework where developers write a single thread of execution that doesn't care if it's running on backend or frontend, that handles all state synchronization, preferably favoring a deterministic fork/join runtime like Go over async behavior with promise chains. It would work a bit like a conflict-free replicated data type (CRDT) or software transactional memory (STM) but with full atomicity/consistency/isolation/durability (ACID) compliance. So we could finally get back to writing what looks like backend code in Node.js, PHP/Laravel, whatever, but have it run in the browser too so that users can lose their internet connection and merge conflicts "just work" when they go back online.
Somewhat ironically, I thought that was how Node.js worked before I learned it, where maybe we could wrap portions of the code to have @backend {} or @frontend {} annotations that told it where to run. I never dreamed that it would go through so much handwaving to even allow module imports in the browser!
But instead, it seems that framework maintainers that reached any level of success just pulled up the ladder behind them, doing little or nothing to advance the status quo. Never donating to groups working from first principles. Never rocking the boat by criticizing established norms. Just joining all of the other yes men to spread that gospel of "I've got mine" to the highest financial and political levels.
So much of this feels like having to send developers to the end of the earth to cater to the runtime that I question if it's even programming anymore. It would be like having people write the low-level RTF codewords in MS word rather than just typing documents via WYSIWYG. We seem to have all lost our collective minds ..the emperor has no clothes.
IMO, a big part of it is the lack of competition (in approach) exacerbated by the inability to provide alternatives due to technical/syntactical limitations of JavaScript itself.
Vue, Svelte, Angular, Ripple - anything other than React-y JSX based frameworks require custom compilers, custom file-types and custom LSPs/extensions to work with.
React/JSX frameworks have preferential treatment with pre-processors essentially baking in a crude compile time macro for JSX transformations.
Rust solved this by having a macro system that facilitated language expansion without external pre-processors - e.g. Yew and Leptos implement Vue-like and React-like patterns, including support for JSX and HTML templating natively inside standard .rs files, with standard testing tools and standard LSP support;
https://github.com/leptos-rs/leptos/blob/main/examples/count...
https://github.com/yewstack/yew/blob/master/examples/counter...
So either the ECMAScript folks figure out a way to have standardized runtime & compilable userland language extensions (e.g. macros) or WASM paves the way for languages better suited to the task to take over.
Neither of these cases is likely, however, so the web world is probably destined to remain unergonomic, overly complex and slow - at least for the next 5-10 years.
Those who are choosing JS for the backend are irresponsible stewards of their customers' data.
> In remote procedure call systems, client-side stub code must be generated and linked into a client before a remote procedure call can be done. This code may be either statically linked into the client or linked in at run-time via dynamic linking with libraries available locally or over a network file system. In either the case of static or dynamic linking, the specific code to handle an RPC must be available to the client machine in compiled form... Dynamic stub loading is used only when code for a needed stub is not already available. The argument and return types specified in the remote interfaces are made available using the same mechanism. Loading arbitrary classes into clients or servers presents a potential security problem;
https://pdos.csail.mit.edu/archive/6.824-2009/papers/waldo-r...
However, if I am reading this correctly, your PoC falls in the category described here: https://react2shell.com/
> Anything that requires the developer to have explicitly exposed dangerous functionality to the client is not a valid PoC. Common examples we've seen in supposed "PoCs" are vm#runInThisContext, child_process#exec, and fs#writeFile.
> This would only be exploitable if you had consciously chosen to let clients invoke these, which would be dangerous no matter what. The genuine vulnerability does not have this constraint. In Next.js, the list of server functions is managed for you, and does not contain these.
Context: This is from Lachlan Davidson, the reporter of the vulnerability
Delete this distraction to genuine blue teamers and stop shitting up the information landscape with this utter hogwash.
This is why infosec is dead.
https://github.com/ejpir/CVE-2025-55182-poc/issues/1#issueco...
React also went through a lot of churn. (Still does.) There's no magic optimal duration for keeping API stability. Not in general and not for specific projects.
Ecosystems sometimes undergo a phase-shift. Sometimes they take a long time, based on the size. Python 3 was released in 2008, just a year before Angular 1. And the last Py2 release was in 2020, about 2-3 years before the last AngularJS version. (And of course there are many businesses running on py2 still. I know at least one.) These things take plenty of time.
Angular1 was pretty opinionated, willing to break with the tradition of just adding one more jQuery plugin.
Miško was working at Google, he persuaded some people to take a look at the framework that he and Adam Abrons were tinkering with.
Angular 2 was announced in January 2014. And then v1 still got years of support; even the component architecture was "backported" around 1.5 (in 2016?).
You can run old v1 code side-by-side in a v2+ app up until v17. (At least the v17 docs describe the process in full and later docs link to this page. https://v17.angular.io/guide/upgrade )
...
Google did a pretty good job IMHO. Google throws products under the bus, but not so much OSS projects. (Though the state of AOSP comes to mind.)
Redux is still by far the most widely-used state management library in React apps. Some of that _is_ legacy usage, sure. But, our modern Redux Toolkit package has ~30M downloads a month. Zustand has become very popular as a client-side state option, and React Query is now the default standard data fetching tool, but you can see that even just RTK is still right up there in monthly NPM downloads:
- https://npm-stat.com/charts.html?package=redux&package=%40re...
I've frequently talked about the original reasons for Redux's creation, which of those are still relevant, and why Redux is still a very valid option to choose even for greenfield projects today:
- https://blog.isquaredsoftware.com/2024/07/presentations-why-...