We've had ~15 years of focus on the DOM with the progression of jQuery to Backbone to Vue, and many other libraries. At least what I've heard of the Figma approach almost sounds like the Adobe Flash/Flex runtime.
That might only make sense for applications with high levels of information density and snappy reactivity requirements like Figma, Google Docs, or a web map -- not for content-focused websites. Still, it's worth wondering whether our debates would be more interesting these days if we were discussing those types of approaches, rather than just fighting about React vs. Svelte, or this JavaScript module loader vs. that one...
Every time I have used it, it feels incredibly laggy with a crapton of useless animations.
Maybe it's better on Windows?
1. Wasm-based UI libraries already exist; check out makepad [0], for example.
2. Web app standards are way higher than when Flash was around. I highly doubt there would be any serious discussion that would also involve Vue/React as alternatives -- almost as ridiculous as asking about Unity UI vs. React.
3. Flash was a world-class media design tool for animation and production design; people were using it to create TV shows. Figma is nowhere close to alternatives like Illustrator or Adobe AE, so its value as a runtime for 'cool' effects would be very limited -- and that would be the only reason not to use HTML/JS, because otherwise there are all kinds of small usability issues (like things not scrolling right, or not working on mobile). Figma's biggest value is its collaborative features, which are really important for UI design. I think devs (and other non-designers) overvalue Figma's importance because of a lack of familiarity. It's kind of like thinking you could use Google Docs or Notion as an IDE -- after all, they're both text editors, right? Even if JetBrains or VS Code is better, it's maybe 10-20% better? But of course devs know they're worlds apart, like comparing apples to oranges.
[0] https://makepad.nl/makepad/examples/ironfish/src/index.html
The default 20% margin of error is indeed pretty wide; it is intended to catch large, obvious regressions (e.g. an algorithm accidentally becoming quadratic instead of linear).
As we described in the blog post, we have a second system based on real hardware. This system is on-demand: if an engineer has a suspect commit or an experimental change that might affect performance, they schedule a 30min-1hr run on that queue, where we run selected benchmarks 9-15 times each on laptops of various strengths. In this configuration, the margin of error is closer to 2-3% from our observations so far. You could run even more trials for more confidence, though we typically advise 9 iterations.
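To make the numbers concrete, here's a minimal, hypothetical sketch (not Figma's actual harness) of how repeated runs turn into a margin of error and a pass/fail verdict; the function names and thresholds are illustrative:

```python
import statistics

def margin_of_error(samples):
    """Relative spread of repeated benchmark timings around the median."""
    med = statistics.median(samples)
    return (max(samples) - min(samples)) / (2 * med)

def is_regression(baseline_ms, candidate_ms, threshold=0.20):
    """Flag the candidate when its median exceeds the baseline's by the
    threshold (20% by default, matching the wide default described above)."""
    return statistics.median(candidate_ms) > statistics.median(baseline_ms) * (1 + threshold)

# Nine iterations each, as advised above: ~2% noise, ~30% regression.
baseline = [100, 101, 99, 100, 102, 98, 100, 101, 99]
candidate = [130, 131, 129, 130, 132, 128, 130, 131, 129]
print(margin_of_error(baseline), is_regression(baseline, candidate))
```

With only a handful of iterations the median is far more robust than the mean here, which is why more trials shrink the effective margin.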
We also do all our daily benchmarking on those laptops too.
Edit: in addition to preventative testing, we also track production metrics in a similar way as described by the sibling comment
It is currently the best tool for me, as a non designer, to smash together (design) icons and export in my choice of format.
Apart from the admin overhead (things got stuck on OS updates), we ended up abandoning the setup because the variance was too big to get anything useful out of running tests for every PR.
The most reliable way for us to monitor performance today is to count slow frames (>16ms) across all actual users and divide them by total time spent in app. It’s a rough proxy, but pretty accurate at showing us when we mess up.
WASM gave Figma a lot of speed by default for perf-sensitive code like rendering, layout, applying styles, and materializing component instances; our GUI code is mostly React and CSS.
WASM engine performance has not been a problem for us; instead, we are constantly looking forward to improvements in the devex department: debugging, profiling, and modularization.
One of the largest challenges of the platform we face today is the heap size limit. While Chrome supports up to 4GB today, that's not yet the case for all browsers. And even with that, we are still discovering bugs in the toolchain (see this recent issue filed by one of our engineers: https://github.com/emscripten-core/emscripten/issues/20137).
The challenge of perf-testing at scale in our company is helping developers detect perf regressions when they don't expect them: accidental algorithmic errors, misused caches, over-rendered React components, dangerously inefficient CSS directives, etc.
Overall we're pretty minimal when it comes to animations in product (i.e. here's a quick 22s recording of navigating between screens/opening properties panels in product today: https://video.non.io/video-2940009905.mp4) as we really want to convey that the app is snappy/performant. Definitely keen on diving in if you're experiencing otherwise. Happy to chat here or my email is jake@figma.com.
Regarding platform-specific performance -- it shouldn't affect things as long as you have GPU acceleration enabled. The majority of us here at Figma are using macOS, FWIW.
Would love to see that cap get raised across the board, it'd enable us and others to do so much more.
Has there been any UI overhaul since acquisition?
I worked on a really perf-sensitive system, and for perf tests we would re-run the last x commits each time to get rid of busy-VM syndrome.
That meant the margin of error could be much smaller.
You might want to consider it as a midway step between VMs and scheduling on laptops (those poor laptop batteries!).
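For what it's worth, the interleaving idea can be sketched like this (an illustrative pseudo-harness; `run_bench` is a placeholder for whatever actually times a commit):

```python
import random

def benchmark_commits(commits, run_bench, rounds=5):
    """Interleave benchmark runs across the last few commits in one session
    on the same machine: busy-VM noise then hits every commit roughly
    equally instead of biasing whichever commit ran during a spike."""
    results = {c: [] for c in commits}
    for _ in range(rounds):
        order = list(commits)
        random.shuffle(order)  # vary order to avoid systematic drift
        for commit in order:
            results[commit].append(run_bench(commit))
    # report the median timing per commit
    return {c: sorted(ts)[len(ts) // 2] for c, ts in results.items()}
```

Because every commit in the window is measured in the same session, a transient noisy neighbor degrades all of them together and the between-commit deltas stay meaningful.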
Ed
I have files with thousands of components, and Figma doesn't miss a beat. If you're able to get vector versions of those, then speed shouldn't be an issue. Obviously if there are a lot of photos, that might be tough.
Having been a long-time user of Figma's somewhat-trailing (at this point) competitor, Sketch, speed is surprisingly one of Figma's most immediately apparent advantages despite its being web-based rather than macOS-native.
At a previous job, we had a Sketch file that contained more or less an entire B2B app, and at hundreds of megabytes, it took tens of seconds to load (not asynchronously, either, blocking the UI until the whole thing was in memory). A similar everything-file at a more recent job where we used Figma was like night and day: something like three seconds until the file was usable, and perhaps a few more seconds for any big images to load.
Then there's Figma's upstart competitor, Penpot. In my initial explorations, it felt about as responsive as Figma, but when I loaded one of their tutorial files – not even a mega-B2B-app file – everything slowed considerably. The load time wasn't bad, but the frame rate for simply scrolling around the artboards dropped like a rock. While I'm bullish on Penpot, they have a long optimization road ahead of them.
Of course, re-running the code from main and the PR side by side on the same VM would be best, but it would cost a lot more money (especially once you factor in GPUs). We considered it but opted for the strategy I outlined above; it's mainly a trade-off between accuracy and cost.
https://www.figma.com/blog/webassembly-cut-figmas-load-time-...
https://www.figma.com/blog/figma-faster/
Then it takes a few seconds to load the file, which I don’t care much about.
But locking my entire browser is inexcusable.
It’s nice they’re working on performance once you get going, but launch performance is abysmal for me (x86-64 Mac).
Each booking likely represents dozens to hundreds of requests. Then, for every visit resulting in a booking, there are probably hundreds of non-booking visits.
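Putting hypothetical numbers on that (all three figures below are assumptions, chosen only to show the multiplier):

```python
# Back-of-the-envelope: requests behind a single booking.
requests_per_visit = 50              # assumed page/API requests per visit
nonbooking_visits_per_booking = 200  # "hundreds" of non-booking visits
requests_in_booking_flow = 100       # "dozens to hundreds" of requests

total = requests_in_booking_flow + nonbooking_visits_per_booking * requests_per_visit
print(total)  # 10100 requests behind one booking, under these assumptions
```

Even with conservative inputs, the non-booking traffic dominates by a couple of orders of magnitude.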
I don't really know what any of this means. You're running a single-threaded graphics API (OpenGL) in a browser (does WASM support background threads?). On any modern PC, that's going to look like one CPU thread doing all the work and the GPU idling 80% of the time waiting for something to happen.
Because if anything takes up 100% of the CPU, other things start being unresponsive, as there is not enough CPU to go around. That happens easily when dealing with concurrency and parallelism, which I'm guessing Figma happily uses.
The computer you mentioned comes with a 65-watt AC/DC converter, which should last 77 hours minimum on a 5 kWh battery.
Then there is the nonsense about blaming a web page for not having enough power after choosing to do a tech job while living off a battery.
There's a great post by Ian Hickson, the project lead of Flutter, about how we might have more WASM based web apps while keeping the DOM for content based websites:
> This document proposes to enable browsers to render web pages that are served not as HTML files, but as Wasm files, skipping the need for HTML, JS, and CSS parsing in the rendering of the page, by having the WebGPU, ARIA, and WebHID APIs exposed directly to Wasm.
I want to acknowledge that the load-times for Figma go up linearly with the complexity of your design file + its dependencies. It is always painful when users rightfully complain about giant design files taking a while to load and fully render.
The team is working on changing that so hopefully your experience gets better over time.
This is not my area of expertise, so I am not in a position to promise anything on this forum, but I just want to say that the testing framework described in the article is also used to continuously test and measure file load/parsing time as folks work toward algorithmic improvements.
Circular interfaces? No worries. Persistent, always-on-top, but small and low-utilization? Not a problem. Spatial contact-aware apps? Gimme.
Dev ex is going to suck hard for a long time, but good AR hardware will be the opposite of constrained.
In my experience they dive in and don't think about it; it's just natural to use. Not to say the UX is ideal, because IMHO it sucks hard, but no one complains about having to use a web app when they open Figma.
The problem is your anchor point. Your idea of what software can and should be is based on? Gmail? Salesforce? The GTAV loading screen? The iOS warning that you’re running out of icloud storage space?
Silicon Valley software expectations are tainted by recency bias and private equity excrement.
The current ceiling to aspire to is literally “CI passes”
yes
Figma is able to use more than one CPU core and also claims a lot of your GPU's capabilities.
Don't believe me? Open Figma, load a large file, and watch it use multiple CPU cores in htop. Driving the GPU is not all the application does. It also has to work with a largish model and abstract syntax tree, and do lots of complex things with that before it goes anywhere near the GPU and OpenGL. Driving that probably happens on the main thread.
On a modern M1, the app is fast and responsive. It only uses a few worker threads (less than the number of CPU cores) and the GPU is fast enough to keep framerates high if you do things like zooming and panning the view. It's the older and slower laptops that are going to be more challenging.
Often CPU steal is visible in cloud environments. This could be useful for spotting noisy-neighbor behavior and deciding to either adjust expectations or rerun.
But things like IO, GPU, or memory contention could also be responsible. There are some fancy new-ish extensions for controlling memory throughput: Intel has Memory Bandwidth Allocation controls in its Resource Director Technology, a suite of capabilities designed for observing and managing cross-system resources. There are also controls for setting up cache usage/allocation.
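On the CPU-steal part, a minimal sketch of what "visible" means: on Linux, steal is the 8th counter on the `cpu` line of /proc/stat, so its share can be computed directly (the sample line below is made up):

```python
def cpu_steal_fraction(stat_line):
    """Fraction of CPU time stolen by the hypervisor, given a /proc/stat
    'cpu' line. Counters: user nice system idle iowait irq softirq steal..."""
    fields = [int(x) for x in stat_line.split()[1:]]
    steal = fields[7] if len(fields) > 7 else 0
    return steal / sum(fields)

# On a live box you'd read the first line of /proc/stat (all-CPU aggregate);
# this sample shows 10% steal -- a rerun-worthy noisy-neighbor signal.
sample = "cpu 100 0 100 700 0 0 0 100 0 0"
print(cpu_steal_fraction(sample))  # 0.1
```

A benchmark harness could snapshot this before and after a run and discard the trial if the delta exceeds some small threshold.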
But there is that other 1% that is making new things possible. And if they are extremely lucky, they’ll build new tech and figure out how to turn it into a product. Figma is an example of what can happen when that mix hits.
Most technical innovations, though, will be licensed/sold/etc. to other companies which fall into the first group.
All that said, it turns out customers love software that works well. Startups (and most software companies, really) undervalue quality because it’s hard and not strictly necessary when there’s no direct competition.
That will take a while to come together. Key blockers are the removal of the feature flags currently needed to enable things like garbage collection in Firefox and Chrome, and the completion of Kotlin 2.0 and its new compiler (K2), which is currently available via a feature flag in the Kotlin 1.9 stable release. My estimate is that we're about six months away from those blockers being resolved. iOS support is likely to transition to beta around the same time. From there to being stable and well supported is probably another year or so.
But a lot of stuff works right now already. However, it's not suitable for production use because of the early development status of the various bits and pieces you need, and the need to toggle browser feature flags.
There are some nice examples of iOS support in the recent release notes for Compose Web: https://blog.jetbrains.com/kotlin/2023/08/compose-multiplatf...
They don't really mention Wasm there, as the post mostly focuses on iOS support. They had a lot of presentations about it at KotlinConf, and the Compose Web channel in the Kotlin Slack is very active.
Basically, anyone currently doing mobile development who is used to modern UI frameworks will soon be able to target browsers effortlessly without compromising on their UI frameworks. Compose is one of those frameworks, but there are others. I've seen some nice Kotlin/JS frameworks targeting canvas and vector graphics; Doodle is a nice example: https://nacular.github.io/doodle/. I have not used it yet, but it looks quite slick. A lot of Kotlin/JS stuff will transition to Wasm once the compiler stabilizes.
Web developers seem to be mostly unable to see beyond their comfort zone of DOM/CSS/JS. There are alternative ways of doing UI/UX that are common outside of browsers. Applying that in a browser is transitioning from impossible (a few years ago) to being hard but very feasible (the last few years) to being easy, very common, and widely supported across different developer ecosystems (the next few years). Not a matter of if but when.
Dragging any object is laggy--particularly so over the slightly pinkish/purplish background area.
I.e., I'm curious whether a cloud provider manages them for you or you guys keep them in a closet somewhere.
I don't doubt that they process a lot of money -- that's beside the point.
They're a cookie-cutter CRUD app (that happens to process a lot of money) that takes _hundreds_ of requests and 12 seconds to load on a 32-core workstation with a gigabit fibre internet connection. They have no business writing a blog on performance engineering.
Figjam on Firefox on Ubuntu is painfully slow and laggy. On a 32-core Ryzen machine, drawing lines on an empty file visibly lags. Doing the same thing in Excalidraw, on the other hand, is extremely responsive.
Would prefer to be sent a PDF, and you won't often catch me saying that.
Is your GPU doing any work? What are the results from here? https://pmndrs.github.io/detect-gpu/
We test on Mac, Windows, and Linux laptops; it is very surprising that drawing one rectangle is painfully slow.
Sometimes it happens when your browser does not enable hardware acceleration or when your Linux distro does not know how to switch to the discrete GPU.
We won't be able to tell without more of your hardware specs and debug information. Feel free to reach out to Figma support or email me at skim@figma.com -- this is exactly the type of issue that eludes us when looking at prod metrics in aggregate.
For things like system updates and taking care of the hardware, we do it manually today. The fleet is still small, so it is manageable, but in the future we would like to consider a vendor if we can find one.
Honestly I guess it's similar in capabilities to the early web before the introduction of JavaScript, where all custom code had to run on the server. Maybe Apple's plan is to increase capabilities over time the same way the web did. I kind of don't think so, though.