Updated: July 8, 2025
YOLO. When I did my Plasma 6.4 review, I didn't really have any idea how much time I was going to invest into actually testing and benchmarking Wayland and X11 performance. But once that review went out, I received some feedback, and so I began doing more and more and more testing. First, I showed you idle system GPU numbers on an AMD graphics laptop. Then, CPU and power data on that same machine. Then, as I had some spare brain cycles, I went further, and showed you idle and loaded system performance benchmarks on a different laptop with hybrid Intel-Nvidia graphics.
Now, I want to go back to my IdeaPad 3 machine, with its AMD integrated graphics, and repeat the tests from my Nvidia adventure. Namely, check the CPU, GPU and power utilization while playing a 4K 60FPS video clip and running a WebGL simulation in Firefox. I will do this for both the Wayland power efficiency (PE) and color accuracy (CA) modes, and for the X11 session with compositing on and off. I'll do this on a fully updated KDE neon desktop, but I want to let you know already: I shall endeavor to redo this test in Fedora 42 Gnome very soon. Now, let's commence.
4K video playback figures
The rules remain the same as in the previous article. Fire up the clip in VLC, monitor CPU data with vmstat, power with powertop, and GPU data with radeontop, all at a 1-second refresh interval. As before, I made sure that VLC was utilizing hardware acceleration.
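As a quick aside, distilling a vmstat capture down to the kind of summary numbers you will see below can be sketched roughly like so. This is just my illustration, not part of the original test setup; the column positions follow the standard vmstat layout, and the sample lines are made up for demonstration, not real measurements:

```python
# Sketch: reduce a "vmstat 1" capture to summary metrics.
# The SAMPLE text below is illustrative only, not real benchmark data.

SAMPLE = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 912345  10240 204800    0    0     0    12 7800 6500 22  9 69  0  0
 3  0      0 911200  10240 204812    0    0     0     0 7740 6620 23 10 67  0  0
"""

def summarize(text):
    rows = []
    for line in text.splitlines():
        fields = line.split()
        # Keep only data rows: 17 numeric columns (r b swpd ... id wa st).
        if len(fields) == 17 and fields[0].isdigit():
            rows.append([int(f) for f in fields])
    n = len(rows)
    return {
        "avg runqueue (r)": sum(r[0] for r in rows) / n,
        "avg interrupts (in)": sum(r[10] for r in rows) / n,
        "avg context switches (cs)": sum(r[11] for r in rows) / n,
        "avg idle % (id)": sum(r[14] for r in rows) / n,
    }

print(summarize(SAMPLE))
```

Nothing fancy, but it shows where the averages in the tables come from.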
That said, the X11 session showed this (VLC command-line output):
avcodec decoder: Using G3DVL VDPAU Driver Shared Library version 1.0 for hardware decoding
Wayland showed this:
libva info: VA-API version 1.20.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/radeonsi_drv_video.so
libva info: Found init function __vaDriverInit_1_20
libva info: va_openDriver() returns 0
Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
I had to check with System Monitor and radeontop that Wayland was indeed using the card for video decoding, and indeed it was. But the inconsistency, the errors, such nonsense. An overly nerdy and badly implemented media experience, in 2025.
Why would you even see a VDPAU Nvidia output on an AMD-graphics machine? I mentioned this in my Nvidia testing, and the problem persists here, too, except it's worse, because on this hardware the error should never show up at all. And to compound the silliness, the error only shows in the Wayland session.
CPU data
Vmstat results:
| Metric | Wayland (PE) | Wayland (CA) | X11 (Comp ON) | X11 (Comp OFF) |
|---|---|---|---|---|
| Average no. of tasks in the runqueue | 2.4 | 84 | 0.8 | 0.26 |
| Total tasks in the runqueue | 2.37 | 83 | 28 | 9 |
| Interrupts (in) | 7772 | 7033 | 6703 | 4581 |
| Context switches (cs) | 6561 | 6357 | 7117 | 4455 |
| Idle CPU % (id) | 68.49 | 74.29 | 90.20 | 96.28 |
From these results, we can draw similar conclusions to the Nvidia & Wayland testing:
- X11 with compositing off gives the best results, with only 3.72% CPU versus 9.8% with compositing on, 25.71% for Wayland (CA) and 31.51% for Wayland (PE). That's roughly 7-8x more, even though Wayland is supposed to be modern, leaner, and better at GPU tasks. Also, the Wayland utilization reversal figures are quite intriguing.
- Wayland generated 5-16% more interrupts than even X11 (ON).
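For those who like to double-check the arithmetic, the CPU-usage multiples follow directly from the idle percentages in the table. This snippet is mine, not part of the test setup; it just reproduces the numbers:

```python
# Derive CPU usage (100 - idle) and relative multiples from the
# idle percentages in the vmstat table above.

idle = {
    "Wayland (PE)": 68.49,
    "Wayland (CA)": 74.29,
    "X11 (Comp ON)": 90.20,
    "X11 (Comp OFF)": 96.28,
}

usage = {k: round(100 - v, 2) for k, v in idle.items()}
baseline = usage["X11 (Comp OFF)"]  # the leanest session
multiples = {k: round(v / baseline, 1) for k, v in usage.items()}

print(usage)      # Wayland lands at ~26-32% CPU vs ~4-10% for X11
print(multiples)  # roughly 7-8.5x against X11 with compositing off
```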
Now, let's correlate to GPU and power figures!
GPU data
As I did with the idle desktop, all percentages:
| Metric | Wayland (PE) | Wayland (CA) | X11 (Comp ON) | X11 (Comp OFF) |
|---|---|---|---|---|
| Graphics pipe | 56.33 | 57.33 | 57.98 | 56.81 |
| Event Engine | 0.00 | 0.00 | 0.00 | 0.00 |
| Vertex Grouper + Tesselator | 0.64 | 0.36 | 0.5 | 0.24 |
| Texture Addresser | 50.50 | 51.83 | 39.55 | 39.36 |
| Shader Export | 49.43 | 51.21 | 28.88 | 27.19 |
| Sequencer Instruction Cache | 0.09 | 0.12 | 0.12 | 0.09 |
| Shader Interpolator | 51.74 | 52.83 | 52.76 | 52.62 |
| Scan Converter | 50.83 | 52.60 | 29.05 | 27.38 |
| Primitive Assembly | 0.48 | 0.40 | 0.38 | 0.26 |
| Depth Block | 50.69 | 52.29 | 28.86 | 27.33 |
| Color Block | 50.69 | 52.50 | 28.93 | 27.21 |
| VRAM | 23.29 | 24.60 | 44.36 | 41.83 |
| GTT | 17.58 | 17.67 | 4.34 | 4.31 |
| Memory Clock | 76.39 | 76.05 | 76.46 | 76.18 |
| Shader Clock | 18.37 | 18.79 | 21.75 | 20.34 |
Now, now, these numbers are far more interesting, and not as clear-cut as before:
- Wayland (PE) had the best graphics pipe usage, followed by X11 (OFF). X11 (ON) was the worst. The relative percentage differences are slightly under 2%.
- Wayland also performed better, as in used fewer resources, in the following categories: Shader Interpolator, VRAM, and Shader Clock. Again, we see a roughly 2% gain, with the PE mode being the more frugal one, as expected. The biggest difference comes in the VRAM use, almost half that of the X11 sessions, and X11's worst Shader Clock figure was about 18% higher than Wayland's best.
- That said, the Wayland sessions used 4x more GTT, 82% more Shader Export, roughly 2x the Scan Converter, and 2x the Color and Depth Block utilization.
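The relative deltas quoted above can be reproduced from the radeontop table. Again, this snippet is just my own sanity check over the table values (Wayland PE against X11 with compositing off), not part of the testing harness:

```python
# Relative GPU deltas, Wayland (PE) vs X11 (Comp OFF), values
# copied from the radeontop table above (all percentages).

wayland_pe = {"VRAM": 23.29, "GTT": 17.58, "Shader Export": 49.43,
              "Scan Converter": 50.83, "Color Block": 50.69}
x11_off    = {"VRAM": 41.83, "GTT": 4.31,  "Shader Export": 27.19,
              "Scan Converter": 27.38, "Color Block": 27.21}

# Ratio > 1 means Wayland used more; < 1 means Wayland used less.
ratios = {k: round(wayland_pe[k] / x11_off[k], 2) for k in wayland_pe}
print(ratios)  # VRAM ~0.56 (about half), GTT ~4x, the rest ~1.8-1.9x
```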
Overall, Wayland performs better in some categories of this GPU stress test, but not all, and it underperforms heavily in many others. Most notably, Wayland uses the least memory, but burns several times more cycles elsewhere. This needs to be weighed side by side with the 3-9x CPU utilization.
Power data
Now, what about power data you ask?
| Metric | Wayland (PE) | Wayland (CA) | X11 (Comp ON) | X11 (Comp OFF) |
|---|---|---|---|---|
| Battery drain (W) | 13.8-20.4 | 13.8-14.1 | 10.7-12.1 | 11.4-14.9 |
I performed several power test cycles, and there was quite a bit of range in the results. By and large, if we take the middle of each battery usage range as representative, then the Wayland sessions used about 17 and 14 W, respectively, whereas the X11 sessions used about 11.4 and 13 W. The power usage inversion is similar to what I saw in the Nvidia test (WebGL scenario). Again, it requires a more careful analysis. Assuming the power numbers are accurate, Wayland used 8-49% more power.
If we look at the extremes, then Wayland used at best 13.8 W, whereas X11 used at best 10.7 W, which represents a 29% difference in favor of X11, and nicely correlates with the CPU figures. If we instead pit Wayland's best numbers against X11's range, then we have either a 14% penalty or an 8% gain on Wayland's side.
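The best-case and worst-case percentages above are simple ratios over the table's wattage extremes. Here is my quick check of that arithmetic, nothing more:

```python
# Check the best-case power comparisons, in watts, from the powertop table.

def pct_more(a, b):
    """How much more (percent, rounded) a draws than b."""
    return round((a / b - 1) * 100)

wayland_best  = 13.8   # lowest Wayland reading
x11_on_best   = 10.7   # lowest X11 (Comp ON) reading
x11_on_worst  = 12.1   # highest X11 (Comp ON) reading
x11_off_worst = 14.9   # highest X11 (Comp OFF) reading

print(pct_more(wayland_best, x11_on_best))    # X11's best beats Wayland's best
print(pct_more(wayland_best, x11_on_worst))   # Wayland's best vs X11 (ON) at its worst
print(pct_more(x11_off_worst, wayland_best))  # X11 (OFF) at its worst vs Wayland's best
```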
From here, combining the CPU, GPU and power data, X11 performed better in the 4K video playback test overall. Wayland did better in a few areas, GPU numbers wise. Let's see if these GPU numbers actually affect FPS performance. To wit, the WebGL test.
WebGL Aquarium simulation
Once again, we render oh fishy fishy fishes, and see what happens, with and without the battery charger plugged in, for all four scenarios. Charger plugged in:
| FPS | Wayland (PE) | Wayland (CA) | X11 (Comp ON) | X11 (Comp OFF) |
|---|---|---|---|---|
| Fish count: 15,000 | 16-38 | 18-37 | 16-42 | 21-42 |
| Fish count: 20,000 | 18-30 | 13-30 | 15-23 | 14-34 |
| Fish count: 25,000 | 12-23 | 10-23 | 14-27 | 14-27 |
On battery charge:
| FPS | Wayland (PE) | Wayland (CA) | X11 (Comp ON) | X11 (Comp OFF) |
|---|---|---|---|---|
| Fish count: 15,000 | 17-29 | 16-27 | 21-29 | 17-30 |
| Fish count: 20,000 | 13-22 | 12-21 | 18-30 | 14-24 |
| Fish count: 25,000 | 10-18 | 10-18 | 13-19 | 13-19 |
Lots of things happening here, so let's summarize:
- Unlike the Nvidia test, where the results were steady, here, the FPS counter oscillated wildly. So I captured the range of values shown.
- If we look at the results with the charger plugged in, X11 had consistently better results, with X11 (OFF) as the best choice. The one exception is the 20,000 fish count, where X11 (ON) had unusually low FPS.
- If we look at the results on battery, X11 again had better results, but this time X11 (ON) performed a bit better than X11 (OFF).
- If we look at the most optimistic differences, then, with the charger plugged in, Wayland was "slower" by about 4 FPS in all of the permutations. Look at all of the tests and compare the highest FPS value for any fish count. It comes out at a neat 4 FPS delta.
- On battery, the deltas are 0-1, 2, and 1 FPS. A smaller difference, but it's there. Considering the overall results are lower, it's fair to say that Wayland lagged at least 10% with the charger plugged in and at least 5% without it. These numbers do add up over time.
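If you want to replicate the peak-FPS comparison, it boils down to subtracting Wayland's best peak from X11's best peak per fish count. The snippet below is my own tabulation of the values from the two tables above, comparing against X11 with compositing off:

```python
# Peak-FPS deltas, best Wayland peak vs X11 (Comp OFF) peak,
# values copied from the WebGL Aquarium tables above.

peaks_plugged = {  # fish count: (best Wayland peak, X11 OFF peak)
    15000: (38, 42),
    20000: (30, 34),
    25000: (23, 27),
}
peaks_battery = {
    15000: (29, 30),
    20000: (22, 24),
    25000: (18, 19),
}

def deltas(table):
    return {k: x11 - way for k, (way, x11) in table.items()}

print(deltas(peaks_plugged))  # a steady 4 FPS gap on AC
print(deltas(peaks_battery))  # a 1-2 FPS gap on battery
```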
Conclusion
From my "load" test on a machine with an AMD processor and integrated graphics, we can see that Wayland underperforms X11. It uses a lot more CPU, a bit less GPU, it drains more power, and renders fewer FPS in a WebGL simulation. The differences vary widely, but they stand at significantly more CPU (at least 3x), a GPU delta that results in ~1 and ~4 fewer FPS across the board (battery on, charger on), and power drain that seems higher on average, at least 8% more. I must qualify this last figure, though, as the results showed some gain and some loss for X11. It is possible that Wayland used less power (one of the tests shows this in the most optimistic power range measurements), but this would not correlate with all the other data, or with the idle desktop results.
There you go. We now have a much fuller picture. However, this is Plasma 6.4, on top of KDE neon. And you could say, or ask, what if KDE's team has "under-optimized" their implementation of Wayland? Well, first, I doubt it, as all my past testing shows Plasma to be extremely lean. But, to remove doubt, I am also going to test Fedora 42 Gnome, on this very same laptop. So, in the next article, most likely tomorrow, I shall repeat these tests and pit Wayland Plasma against Wayland Gnome, plus some X11 data for good measure. Take care, fellow Tuxians.
Cheers.