One big difference that I’ve noticed between Windows and Linux is that Windows does a much better job ensuring that the system stays responsive even under heavy load.
For instance, I often need to compile Rust code. Anyone who writes Rust knows that the Rust compiler is very good at using all your cores and all the CPU time it can get its hands on (which is good, you want it to compile as fast as possible after all). But that means that for a time while my Rust code is compiling, I will be maxing out all my CPU cores at 100% usage.
When this happens on Windows, I’ve never really noticed. I can use my web browser or my code editor just fine while the code compiles, so I’ve never really thought about it.
However, on Linux when all my cores reach 100%, I start to notice it. It seems like every window I have open starts to lag and I get stuttering as the programs struggle for what little CPU is left. My web browser starts lagging with whole seconds of no response and my editor behaves the same. Even my KDE Plasma desktop environment starts lagging.
I suppose Windows must be doing something clever to somehow prioritize user-facing GUI applications even in the face of extreme CPU starvation, while Linux doesn’t seem to do a similar thing (or doesn’t do it as well).
Is this an inherent problem of Linux at the moment or can I do something to improve this? I’m on Kubuntu 24.04 if it matters. Also, I don’t believe it is a memory or I/O problem as my memory is sitting at around 60% usage when it happens with 0% swap usage, while my CPU sits at basically 100% on all cores. I’ve also tried disabling swap and it doesn’t seem to make a difference.
EDIT: Tried nice -n +19, still lags my other programs.
EDIT 2: Tried installing the Liquorix kernel, which is supposedly better for this kinda thing. I dunno if it’s placebo but stuff feels a bit snappier now? My mouse feels more responsive. Again, dunno if it’s placebo. But anyways, I tried compiling again and it still lags my other stuff.
All the comments here are great. One other suggestion I didn’t see: use chrt to start the build process with the SCHED_BATCH policy. It’s treated as lower priority than SCHED_OTHER, which most processes will be in, so the compilation processes should be bumped off the CPU for virtually everyone else.
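For example (just a sketch; assumes util-linux’s chrt, which ships with basically every distro):
# start the whole build under SCHED_BATCH (batch policies take priority 0)
chrt --batch 0 cargo build
# or move an already-running process into SCHED_BATCH
chrt --batch --pid 0 <pid>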
nice +5 cargo build
nice is a program that sets priorities for the CPU scheduler. Default is 0. Goes from -20, which is max prio, to +19, which is min prio.
This way other programs will get CPU time before cargo/rustc.
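If the compile is already running, renice can lower it after the fact (a sketch; the pgrep pattern is just an example):
# drop every running rustc process to minimum priority
renice -n 19 -p $(pgrep rustc)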
So the better approach would be to spawn all desktop and base GUI things with nice -18 or something?
No. This will wreak havoc. At most -1, but I’d advise against that. Just spawn the lesser-prioritised programs with a positive value.
Could you elaborate?
Critical operating system tasks run at -20. If they don’t get priority it will create all kinds of problems. Audio often runs below 0 as well, at perhaps -2, so music doesn’t stutter under load. Stuff like that.
Ok, nice. Do you know what niceness other processes are spawned with by default?
Default is 0. Also, processes inherit the priority of their parent.
This is another reason why starting the desktop environment as a whole with a different prio won’t work: the compiler is started as a child of the editor or shell, which is a child of the DE, so it will also inherit the changed prio.
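You can see the inheritance in action (a quick sketch; nice with no arguments just prints the current niceness):
nice -n 5 bash -c nice   # prints 5, because the child inherited it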
Damn… thanks, that’s complicated.
It’s more of a workaround than a solution. I don’t want to have to do this for every intensive program I run. The desktop should just be responsive without any configuration.
Yes, this is a bad solution. No program should have that privilege, it needs to be an allowlist and not a blocklist.
You could give your compiler a lower priority instead of upping everything else.
I’d still need to lower the priority of my C++ compiler or whatever else intensive stuff I’d be running. I would like a general solution, not a patch just for running my Rust compiler.
How do you expect the system to know what program is important to you and which isn’t?
The Windows solution is to switch tasks very often and to do a lot of accounting to ensure fair distribution. This results in a small but significant performance degradation. If you want your system to perform worse overall you can achieve this by setting the default process time slice value very low - don’t come back complaining if your builds suddenly take 10-20% longer though.
The correct solution is for you to tell the system what’s important and what is not so it can do what you want properly.
You might like to configure and use the auto nice daemon: https://and.sourceforge.net/
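Independent of the auto nice daemon, another concrete way to “tell the system” on a cgroups-v2 distro (Kubuntu 24.04 qualifies) is to drop the build into its own transient scope with a low CPU weight - a sketch, with the weight of 20 versus the default of 100 chosen arbitrarily:
# the build only loses CPU time when something else actually wants it
systemd-run --user --scope -p CPUWeight=20 -- cargo build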
How do you expect the system to know what program is important to you and which isn’t?
Hmm
The Windows solution is to switch tasks very often and to do a lot of accounting to ensure fair distribution.
Sounds like you have a good idea already!
It sounds like the issue is that the Rust compiler uses 100% of your CPU capacity. Is there any command line option for it that throttles the amount of cpu it will use? This doesn’t sound like an issue that you should be tackling at the OS level. Maybe you could wrap the compiler in a docker container and use resource constraints?
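If you go the container route, something like this might do it (a sketch; assumes Docker and the official rust image, and the --cpus value is arbitrary):
# limit the containerized build to roughly 6 CPUs’ worth of time
docker run --rm -v "$PWD":/src -w /src --cpus=6 rust:latest cargo build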
It sounds like the issue is that the Rust compiler uses 100% of your CPU capacity.
No, I definitely want it to use as many resources as it can get. I just want the desktop and the windows I interact with to have priority over the compiler, so that the compiler doesn’t steal CPU time from those programs.
No, I definitely want it to use as many resources as it can get.
taskset -c 0 nice -n+5 bash -c 'while :; do :; done' & taskset -c 0 nice -n+0 bash -c 'while :; do :; done'
Observe the CPU usage of the nice +5 job: it’s ~1/10 of the nice +0 job. End one of the tasks and the remaining one jumps back to 100%.
Nice’ing doesn’t limit the max allowed CPU bandwidth of a task; it only matters when there is contention for that bandwidth, like running two tasks on the same CPU thread. To me, this sounds like exactly what you want: run at full tilt when there is no contention.
Sure but that’s not what the person I replied to suggested.
Why is that a problem? You’d want a compiler to be as fast as possible.
nice would be way easier to use than a container…
EDIT: Tried nice -n +19, still lags my other programs.
yea, this is the wrong way of doing things. You should have better results with CPU pinning. Increasing priority for YOUR threads that interact all the time with disk I/O, memory caches and display I/O is the wrong end of the stick. You still need to display compilation progress and warnings, and access I/O.
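Pinning the build to a subset of cores would look something like this (a sketch; the core numbers are arbitrary and assume a 16-thread CPU):
# leave cores 0 and 1 for the desktop, give the compiler the other 14
taskset -c 2-15 cargo build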
There’s no way of knowing why your system is so slow without profiling it first. Taking any advice from here or elsewhere without telling us first what your machine is doing is missing the point. You need to find out what the problem is and report it at the source.
While I ultimately think your solution is to use a different scheduler, and that the most useful responses you’ve gotten have been about that; and that I agree with your response that Linux distros should really be tuning the scheduler for the UI by default and let developers and server runners take the burden of tuning differently for their workloads… all that said, I can’t let this comment on your post go by:
which is good, you want it to compile as fast as possible after all
If fast compile times are your priority, you’re using the wrong programming language. One of Go’s fundamental principles is fast compile times; even with add-on caching tooling in other languages, Go remains one of the fastest-compiling statically compiled, strongly typed programming languages available. I will not install Haskell programs unless they’re precompiled bin packages, that’s a hard rule. I will only reluctantly install Rust packages, and will always choose bins if available. But I’ll pick a -git Go package without hesitation, because they build crazy fast.
Anyway, I hope you find the scheduler of your dreams and live happily ever after.
I only said as fast as possible - I generally think the compile times are fine and not a huge problem. Certainly worth it for all the benefits.
There’s no free lunch after all. Go’s quick compilation also means the language is very simple, which means all the complexity shifts to the program’s code.
That’s an interesting take - that Go program code is more complex than Rust - if I understood you correctly. I came across a learning curve and cognitive load readability comparison analysis a while back, which I didn’t save and now can’t find. I haven’t needed it before because I think this is the first time I’ve heard anyone suggest that Rust code is less complex than Go.
Your point about the tradeoff is right, but for different reasons. Go executables have a substantial runtime (with garbage collection, one of those things that make Go code less complex), making them much larger and measurably slower. And then there’s Rust’s vaunted safety, which Go - outside of the most basic compile-time type safety - lacks. Lots of places for Rust to claim superiority in the trade-offs, so it tickles me that you choose the one truly debatable argument, “complexity.”
Rust is simpler than Go or Python when a system scales.
A program with 1000 lines will be simplest in Python because it’s just 1000 lines right? Doesn’t matter.
A program with 1000000 lines will be much easier and simpler to work with in Rust than in Python or Go. The static analysis and the guarantees that the compiler provides suddenly apply to a much larger piece of code, making it more valuable.
Python offloads type checking to the programmer, meaning that’s cognitive space you gotta use instead of the compiler. Go does the same with error handling and, for inexplicable reasons, uses the billion dollar mistake (null) even though it’s a relatively modern language.
It is in this way that Rust is simpler than Go and Python. Also, because a system is likely to grow to a larger size over time in a corporate setting, Rust should be preferred in your professional workplace rather than Python or Go. That’s my take on it.
Honestly, Go is a weird language. It’s so… “basic”. It doesn’t really provide anything new that other languages haven’t done already, perhaps aside from fast static compilation. If it weren’t for Google pushing it, I don’t believe Go would ever have become as popular as it is.
You’re right that garbage collection makes Go simpler, and maybe other patterns do contribute to prevent complexity from piling up. I never worked with Go outside of silly examples to try it out, so I’m no authority about it.
What I meant was more of a “general” rule that the simpler a language is, the more code is necessary to express the same thing and then the intent can become nebulous, or the person reading might miss something. Besides, when the language doesn’t offer feature X, it becomes the programmer’s job to manage it, and it creates an extra mental load that can add pesky bugs (ex: managing null safety with extra checks, tracking pointers and bounds checking in C and so on…).
Also there are studies that show the number of bugs in a piece of software correlates with lines of code, which can mean the software is simply doing more, but also that the more characters you have to read and write, the higher the chance of something going wrong.
But yeah, this subject depends on too many variables and some may outweigh others.
I face a similar issue when updating Steam games, although I think that’s related to disk read/write.
But either way, issues like these are gonna need to be addressed before we finally hit the year of the Linux desktop lol
Sounds like Kubuntu’s fault to me. If they provide the desktop environment, shouldn’t they be the ones making it play nice with the Linux scheduler? Linux is configurable enough to support real-time scheduling.
FWIW I run NixOS and I’ve never experienced lag while compiling Rust code.
I have a worrying feeling that if I opened a bug for the KDE desktop about this, they’d just say it’s a problem of the scheduler and that’s the kernel so it’s out of their hands. But maybe I should try?
The kde peeps are insanely nice so I guess you should try.
The System76 scheduler helps to tune for better desktop responsiveness under high load: https://github.com/pop-os/system76-scheduler I think if you use Pop!OS this may be set up out-of-the-box.
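If you want to try it outside Pop!OS you’d install the package and enable its service; something like the following, assuming the service name from the project’s packaging:
sudo systemctl enable --now com.system76.Scheduler.service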
I distro hop occasionally but always find myself coming back to popos. There are so many quality of life improvements that seem small but make all the difference.
Linux defaults are optimized for performance and not for desktop usability.
If that is the case, Linux will never be a viable desktop OS alternative.
Either that needs to change or distributions targeting the desktop need to do it. Maybe we need desktop and server variants of Linux. It kinda makes sense as these use cases are quite different.
EDIT: I’m curious about the down votes. Do people really believe that it benefits Linux to deprioritise user experience in this way? Do you really think Linux will become an actual commonplace OS if it keeps focusing on “performance” instead of UX?
“Desktop” Linux has existed in this state for decades. Who cares? Maybe we won’t even have consumer desktops as a niche soon. Existing users are fine with that. Don’t tell me you’re waiting for Linux to become “a viable desktop OS alternative” in the next few years.
It’s also not about “desktop and server variants”. Desktop Linux is either conservative or under-resourced. The conservatives will tell you that you are wrong and there is no issue. And they are major Linux zealots. For the other side, someone needs to write code and do system design, and there are not many people for that. So, it’s better not to expect a solution anytime soon, if you are not planning to work on it yourself.
“Desktop” Linux has existed in this state for decades. Who cares?
I mean, I’d like to think a lot of people care? I think a lot of people in this community would love if Linux was more widespread and less niche.
Maybe we won’t even have consumer desktops as a niche soon. Existing users are fine with that.
“Existing users” are not fine with that (I am also an existing user). But even if they were, that is not an attitude that will make Linux into a Windows/macOS competitor.
Don’t tell me you’re waiting for Linux to become “a viable desktop OS alternative” in the next few years.
We need a viable desktop alternative, today or very soon, more than ever before. Microsoft is tightening the noose on Windows 11 and introducing more and more enshittification. Apple also announced AI partnerships recently. We need alternatives.
It is not good for society for operating systems to be boiled down to two mega-corporate choices. An OS is not something that can be easily made - this is not a space that a competitor can quickly enter and shake things up. If we don’t push MS/Apple off the throne, Linux will stay niche forever and society will suffer.
Society will suffer anyway. It doesn’t make solutions magically appear. You only said why you want it, but not how to do it. To transform GNU/Linux distros into a viable desktop OS is not an easy task, especially when people don’t have a consensus about what it should be.
Of course - I have actually been thinking lately about whether Linux is suffering from its “decentralisation”. There are so many distributions, all with their own structure and teams behind them. On the one hand, this is great, more choice is almost universally good.
However, on the other hand, it leads to a much more fractured movement. Imagine instead of there being 100 or whatever distros, there were maybe just like… 5 or 10 or something. I feel like it’d be easier to rally under fewer flags to consolidate effort and avoid double work. But it’s just a thought I’ve had lately.
Distros are unnecessary entities and don’t improve anything here. What is needed is separation of the system and the apps, where apps are provided in sandboxed bundles with permissions. It will solve a lot of issues, not only the one you have mentioned. And try to imagine the amount of years needed for understanding or explaining the importance of this to the GNU/Linux community. A viable desktop OS, huh?
Linux is already a popular and viable desktop OS - for its target audience.
The downvote comes from you implying people cannot dev on Linux when it’s the platform of choice for this workload.
Now surely the user experience could be polished, but advanced users are at this point used to the workflow, and basic ones will stick to Windows out of inertia no matter what. Therefore the incentive for improving this kind of thing is extremely low.
That might be the case, but that makes me sad though. That implies that Linux is only targeting technical people who are willing to tinker with all these things themselves.
I would personally want Linux to be broader than that. I’d want it to be the option for everyone - free computing shouldn’t be limited to technical people, it should be provided to all.
Are you on X11 or Wayland? For me X11 behaves really badly in these situations, and Wayland is much much snappier.
I am on Wayland actually
Then it’s Wayland’s fault haha! Nah, hopefully it gets better!
deleted by creator
Wasn’t CFS replaced in 6.6 with EEVDF?
I have the 6.6 kernel on my desktop, and I guess the compilations don’t freeze my media anymore, though I have little experience with it as of now, need more testing.
I’d say nice alone is a good place to start, without delving into the scheduler rabbit hole…
deleted by creator
The Linux kernel uses the CPU default scheduler, CFS,
Linux 6.6 (which recently landed on Debian) changed the scheduler to EEVDF, which is pretty widely criticized for poor tuning. 100% busy means the scheduler is doing a good job. If the CPU was idle and compilation was slow, then we would look into task scheduling and scheduling of blocking operations.
“they never know what you intend to do”
I feel like if Linux wants to be a serious desktop OS contender, this needs to “just work” without having to look into all these custom solutions. If there is a desktop environment with windows and such, that obviously is intended to always stay responsive. Assuming no intentions makes more sense for a server environment.
I see what you mean but I feel like it’s more on the distro maintainers to set niceness and prioritize the UI while under load.
What do you even mean as serious contender? I’ve been using Linux for almost 15 years without an issue on CPU, and I’ve used it almost only on very modest machines. I feel we’re not getting your whole story here.
On the other hand, whenever I had to do something IO-intensive on Windows it would always crawl on these machines.
You are getting the whole story - not sure what it is you think is missing. But I mean a serious desktop contender has to take UX seriously and have things “just work” without any custom configuration or tweaking or hacking around. Currently when I compile on Windows my browser and other programs “just work”, while on Linux the other stuff is choppy and laggy.
100% agree. The desktop should always get strong priority for the CPU.
One of my biggest frustrations with Linux. You are right. If I have something that works out of the box on windows but requires hours of research on Linux to get working correctly, it’s not an incentive to learn the complexities of Linux, it’s an incentive to ditch it. I’m a hobbyist when it comes to Linux but I also have work to do. I can’t be constantly ducking around with the OS when I have things to build.
Even for a server, the UI should always get priority, because when you gotta remote in, most likely shit’s already going wrong.
Totally agree, I’ve been in the situation where a remote host is 100%-ing and when I want to ssh into it to figure out why and possibly fix it, I can’t, ’cause ssh is unresponsive! Leaving only one way out of this: hard reboot and hope I didn’t lose data.
This is a fundamental issue in Linux, it needs a scheduler from this century.
You should look into IPMI console access, that’s usually the real ‘only way out of this’
SSH has a lot of complexity; it’s still the happy path, but with a lot of dependencies that can get in your way - is it waiting to do a reverse DNS lookup on your IP? Trying to read files like your auth key from a saturated or failing disk? Syncing logs?
With that said, I am surprised people are having responsiveness issues under full load. Are you sure you weren’t running out of memory and relying heavily on swapping?
My work windoz machine clogged up quite a lot recompiling large projects (GBs of C/C++ code), so I set it to use 19/20 “cores”. Worked okayish but was not some snappy experience IMO (64GB RAM & SSD).
What desktop?
Wooden IKEA one.
You could try using nice to give the Rust compiler a lower priority (higher number) for scheduling.
This seems too complicated if I need to do that for other programs as well.
You can just set an alias to do this for the programs you do use.
Sure, the first time you won’t have this enabled, but after that it just works.
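For example, a shell alias could look like this (a sketch; goes in ~/.bashrc or your shell’s equivalent, and the ionice part is optional):
# always launch cargo at minimum CPU priority and idle I/O priority
alias cargo='nice -n 19 ionice -c3 cargo'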
TLDR: you might be interested in the Rust-based scheduler one of the Canonical devs released as a PoC. It seemed to be designed for needs similar to yours: keeping the system (particularly games) responsive even whilst running heavy tasks like kernel compilations. You can swap out schedulers at run time on Linux iirc?
https://www.phoronix.com/news/Rust-Linux-Scheduler-Experiment
Interesting, thanks for sharing
Yeah I think the philosophy of Linux is to not assume what you are going to use it for. Why should Linux know where your priorities are better than you?
Some people want to run their rustc, ffmpeg or whatever intensive program and don’t mind getting a coffee while that happens, or it’s running on a non-user-facing server anyway, so they want the process to finish as soon as technically possible. Mind you, your case is not an “average use case” either; not everyone is a developer who runs compilation tasks.
So you’ve got a point that the defaults could be improved for the desktop software developer user or somehow made more easily configurable. As suggested downthread, try the nice command, an optimized scheduler or kernel, or pick a distribution equipped with that kind of kernel by default. The beauty of Linux is that there are many ways to solve a problem, and with varying levels of effort you can get things to pretty much exactly where you want them, rather than some crowd-pleasing default.
Why should Linux know where your priorities are better than you?
Because a responsive desktop is basic good UX that should never ever be questioned. That should at least be the default and if you don’t want your desktop to have special priority, then you can configure it yourself.
pick a distribution equipped with that kind of kernel by default.
I’m running Kubuntu, an official variant of Ubuntu which is very much a “just works” kind of distribution - yet this doesn’t just work.
What if I know it will compile for several minutes so I leave it alone to go office chair jousting? It would be fair to lock up the UI in this case.
Sure, it could lock up the UI if there is no input for a while I suppose. But if there is still input, then it should be responsive.
I believe it can achieve both.
There’s a setting in Windows to change the priority management; most people never see it.
By default it’s configured for user responsiveness, but you can set it for service responsiveness.
Though this is nothing like the process priority management in Linux, it’s one setting that, frankly, I’ve never seen make any difference. At least with Linux you can configure all sorts of priority management, on the fly no less.
Even with a server, you’d still want the UI to have priority. God knows when you do have to remote in, it’s because you gotta fix something, and odds are the server is gonna be misbehavin’ already.
Even with a server, you’d still want the UI to have priority. God knows when you do have to remote in, it’s because you gotta fix something, and odds are the server is gonna be misbehavin’ already.
That’s a fair point.
I still contend that regularly running processes that hog every available CPU cycle they can get their hands on was not a common enough desktop use case to warrant changing the defaults. It should be up to the user to configure for their needs. That said, a toggle switch like the hidden Windows setting you described would be nice.