I’m curious how software can be created and evolve over time. I’m afraid that at some point, we’ll realize there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.
Are there any instances of this happening? Where something is designed with a flaw that doesn’t get realized until much later, necessitating scrapping the whole thing and starting from scratch?
dmesg
/jk
Cough, wayland, cough (X is just old and wayland is better)
Alt text: Thomas Jefferson thought that every law and every constitution should be torn down and rewritten from scratch every nineteen years–which means X is overdue.
Your alt text doesn’t describe what is mentioned in the image though?
In this case “alt text” refers to Randall Munroe’s bonus punchlines he hides in the alt text on xkcd.org.
I’m not sure what people do there who need actual alt text. xkcd.com uses `title` text, not `alt` text. Probably go to explainxkcd.com.
deleted by creator
Strange. I’m not exactly keeping track, but isn’t the current trend going in just the opposite direction? Seems like tons of utilities are being rewritten in Rust to avoid memory safety bugs.
You got it right, the person you replied to made a joke.
The more the code is used, the faster it ought to be. A function for an OS kernel shouldn’t be written in Python, but a calculator doesn’t need to be written in assembly, that kind of thing.
I can’t really speak for Rust myself, but to explain the comment: the performance gains of a language closer to assembly can be worth the headache of dealing with unsafe and harder-to-debug languages.
Linux, for instance, uses some assembly for the parts of it that need to be blazing fast. Confirming that assembly code is bug-free, has no leaks, all that, is a pain, but sometimes it’s just worth the performance.
But yeah I dunno in what cases rust is faster than C/C++.
Rust is faster than C: iterators and mutable noalias can be optimized better. There’s still FORTRAN code in use because it’s noalias and therefore faster.
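To illustrate the iterator point (a hedged sketch, not from the comment above; the function names are made up), an indexed loop leaves the optimizer a bounds check to prove away on every element, while the iterator form carries the exact trip count and no indexing at all:

```rust
// Hedged sketch: both functions sum a slice, but the iterator version hands
// the optimizer cleaner information about bounds and aliasing.
fn sum_indexed(v: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i]; // indexed access: a bounds check the compiler may or may not elide
    }
    total
}

fn sum_iter(v: &[u64]) -> u64 {
    v.iter().sum() // no indexing at all, so there is nothing to bounds-check
}

fn main() {
    let data = vec![1u64, 2, 3, 4];
    assert_eq!(sum_indexed(&data), sum_iter(&data));
}
```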
But yeah I dunno in what cases rust is faster than C/C++.
First of all, C and C++ are very different languages; C is faster than C++. Rust is not intrinsically faster than C in the same way that C is faster than C++. However, there’s a huge difference: safety.
Imagine the following C function:
```c
void do_something(Person* person);
```
Are you sure that you can pass NULL? Or that it won’t delete your object? Or delete it later? Or anything else? You need to know what the function does to be sure, and/or perform lots of tests. E.g. the proper use of that function might be something like:
```c
if (person) {
    person_uses++;
    do_something(person);
}
/* ... */
if (--person_uses == 0)
    free(person);
```
That’s a lot more calls than just calling the function, but it’s also a lot more secure.
In C++ this is somewhat solved by using smart pointers, e.g.
```cpp
void do_something(std::unique_ptr<Person> person);
void something_else(std::shared_ptr<Person> person);
```
That’s a lot more secure and readable, but also a lot slower. Rust achieves the C++ level of safety and readability using only the equivalent of a single C call, by performing compile-time analysis and emitting assembly that is both fast and safe.
Can the same thing be done in C? Absolutely: you could use macros instead of ifs and counters and get very fast, safe code that isn’t easy to read at all. The thing is, Rust makes it easy to write fast and safe code. Unsafe C is faster, but safe C is slower, and since you always want safe code, Rust ends up being faster for most applications.
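To make that concrete, here’s a minimal Rust sketch (the `Person` type and function names are placeholders, not from any real codebase): ownership is spelled out in the signatures, so the compiler enforces at build time what the C version had to enforce with counters and NULL checks at run time.

```rust
struct Person {
    name: String,
}

// Borrows the Person: the caller keeps ownership, the callee cannot free it,
// and there is no NULL to pass in the first place.
fn do_something(person: &Person) {
    println!("doing something with {}", person.name);
}

// Takes ownership: the Person is dropped (freed) when this function returns.
fn consume(person: Person) {
    println!("goodbye {}", person.name);
}

fn main() {
    let p = Person { name: "Ada".into() };
    do_something(&p); // p is still valid afterwards
    consume(p);       // p is moved; using it again would be a compile error
}
```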
C/C++ isn’t really faster than Rust. That’s the attraction of Rust: safety AND speed.
Of course it also depends on the job.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/box-plot-summary-charts.html
C/C++ isn’t
You’re talking about two languages, one is C, the other is C++. C++ is not a superset of C.
Yes thank you. But my statement remains true nevertheless.
Agreed, call me unreasonable or whatever, but I just don’t like Rust or the community behind it. Stop trying to reinvent the wheel! Rust makes everything complicated.
On the other hand… Zig 😘
deleted by creator
Zig!!
Zig!!
Bold
Italics
I feel tracked… Better strike this all through
Basically, install Linux on your daily driver, and hide your keyboard for a month. You’ll discover just what needs quality of life revising
Omg nobody has mentioned FHS?!
What’s that?
Amended my post 😉
Damn I was close. I knew it was something to do with filesystems. Thanks
I think someone did :P
The gatekeeping community
Can I keep a gate too and join the community?
Wayland could already do with a replacement.
Yup, Wayland is so old it already has old concepts. But it is also changing a lot
Needs to be replaced already. They’re having to change to explicit sync, which they should have done from the start. So throw it out, start over, make X12.
It is so much better than X
Wayland is incomplete and unfinished, not broken, obsolete, or hopelessly badly designed. PulseAudio was bad design. Wayland is very well designed; it’s just that most things haven’t been ported to it yet, plus some design-by-committee hell, but even that is kind of a necessary tradeoff so that Wayland actually lasts a long time.
What people see: lol Firefox can’t even restore its windows to the right monitors
What the Wayland devs see: so how can we make it so Firefox will also restore its windows correctly on a possible future VR headset environment where the windows maintain their XYZ and rotation placement correctly so the YouTube window you left above the stove goes back above the stove.
The Wayland migration is painful because they took the occasion to redo everything from scratch, without the baggage of what traditional X11 apps could do. That way there’s less likely to be a need for a Wayland successor when new display tech arrives, and no single display server so big that its quirks became features developers relied on for 20 years and essentially part of the standard.
There’s nothing so far that can’t be done in Wayland for technical implementation reasons. It’s all because some of the protocols aren’t ready yet, or not implemented yet.
Agreed, Wayland has a monumental task to do: replacing a 30+ year old windowing system.
Can’t even update Firefox in place. Have to download a new copy, run it from the downloads folder, make a desktop shortcut myself, which doesn’t have the Firefox icon.
Can’t remember if it was Mint or Ubuntu I was fiddling with, but it’s not exactly user-friendly.
This has nothing to do with Wayland, it’s just AppImages kinda sucking. Use Flatpak or the one in your distro’s repos, not the AppImage. AppImages are the equivalent of portable apps on Windows, like the single exe ones you’d put on a flash drive to carry around.
Also the AppImage developer is very against Wayland and refuses to support it, which is why Wayland support is a shitshow on AppImages.
If you pick the Flatpak it’ll get updated in the background, have a proper launcher and everything.
Do not download Firefox off the internet. Use your package manager or Flatpak.
There’s nothing so far that can’t be done in Wayland for technical implementation reasons.
Then make it fully X11 backwards compatible. Make Wayland X12. C’mon, they already admitted Nvidia was right and are switching the sync, and they’re working to finally support the cards they’ve had a hate boner against simply because they’re bigots about the driver’s licensing. Time to admit breaking the world was a mistake, too.
It’s slowly happening. KDE can now do global Xwayland shortcuts, they also implemented XWaylandVideoBridge and compositor restart crash recovery for apps. We’re getting proper HDR, we have proper per-monitor refresh rates and VRR, I can even hotplug GPUs. Some of that stuff works better in XWayland because we can just run multiple instances with different settings. For the particularly stubborn cases, there’s rootful XWayland. X12 would have to break things too, and I doubt an Xorg rewrite would be all that much further than Wayland is. Canonical had a go at it too with Mir which was much less ambitious.
NVIDIA was right on that one indeed, but Wayland also predates Vulkan and was designed for GLES, pretty much at the tail end of big drivers and the beginning of explicit and low level APIs like Vulkan. They could very well have been right with EGLStream too, but graphics on Linux back then was, erm, bad. But in the end they’re all still better than the kludge that is 3D in Xorg.
It’s getting a lot of momentum and a lot of things are getting fixed lately. It went from unusable to “I can’t believe it’s not Xorg!” just this year for me. It’s very nice when it works well. We’ll get there.
At this point they could make it the best thing in the world. It won’t ever fix the resentment they earned from us Nvidia users, and it might fix some of the resentment from X11 folks… but that it needs a separate XWayland will always be a pain point. That’s a kludge.
I can’t up-vote this enough. The “architectural purists” have made the migration a nightmare. Always blaming everyone else for simply not seeing their genius. I’m honestly surprised it’s gotten as far as it has.
X11 is 40 years old. I’d say it’s been rather successful in the “won’t need to be replaced for some time” category. Some credit where due.
There’s nothing so far that can’t be done in Wayland for technical implementation reasons. It’s all because some of the protocols aren’t ready yet, or not implemented yet.
I mean … It doesn’t matter why it can’t be done. Just that it can’t be done.
40 years old is also what makes it so hard to replace or even reimplement. The bugs are all decades-old features by now, everything is written specifically for Xorg, and all of it needs to be emulated correctly. It sure did serve us well; it’s impressive how long we’ve managed to make it work with technology well beyond the imagination of the engineers in the 80s.
There’s this for the protocols: https://github.com/probonopd/wayland-x11-compat-protocols
It can be done, it’s just nobody wants to do it. It’s not really worth the effort, when you can work on making it work properly in Wayland instead. That way you don’t need XWayland in the first place, but also XWayland can then implement it using the same public API everyone else does so it works on every compositor.
Seriously, I’m not a heavy software developer who partakes in projects of that scale or complexity, but just seeing it from the outside makes me hurt. All these protocols left, right, and center: surely one actual program would be cleaner? Like, just rewrite X from scratch, implementing and supporting all modern technology and using a monolithic model.
Then small projects could still survive, since making a compositor would almost be trivial. No need to rewrite Wayland from scratch because we’ve got “Waykit” (a fictional name I just thought of for this X rewrite); just import that into your project and use the API.
That would work if the only problem they wanted to solve was X’s outdated tech stack. But there are other problems that Wayland addresses too, like how to scale multiple monitors nicely, and whether it’s a good idea to give every other app the keystrokes you type into the one in focus (and probably a lot more).
Wayland and X are very, very different. The X protocol was designed for computer terminals that connected to a mainframe. It was never designed for advanced graphics, and the result is that we have built up an entire system that balances on a shoebox.
Wayland is a protocol that allows your desktop to talk to the display without a heavy server in between. The result is better battery life, simplified input handling, lower latency, better performance and so on.
deleted by creator
It is complex to build a Wayland compositor. When none existed, you had to build your own. So it took quite a while for even big projects like GNOME and KDE to work through it.
At this stage, there are already options to build a compositor using a library where most of the hard stuff is done for you.
https://github.com/swaywm/wlroots
https://github.com/CuarzoSoftware/Louvre
There will be more. It will not be long before creating Wayland compositors is easy, even for small projects.
As more and more compositors appear, it will also become more common just to fork an existing compositor and innovate on top.
One of the longer term benefits of the Wayland approach is that the truly ambitious projects have the freedom to take on more of the stack and innovate more completely. There will almost certainly be more innovation under Wayland.
All of this ecosystem stuff takes time. We are getting there. Wayland will be the daily desktop for pretty much all Linux users (by percentage) by the end of this year. In terms of new and exciting stuff, things should be getting pretty interesting in the next two years.
It’s what happens when you put theory over practicality.
What we wanted: Wayland.
What we needed: X12, X13…
The X standard is a really big mess
That’s kind of what I was trying to imply.
We needed a new X with some of the archaic crap removed. I.e. no one needs X primitives anymore, everything is its own raster now (or whatever it’s called).
Evolving X would have given us incremental improvements over time… Eventually resulting in something like Wayland.
You can’t evolve something that old.
Nobody wanted Wayland except the mad scientists and anti-Nvidia bigots that made it.
Imagine calling developers who have a cold relationship with Nvidia due to Nvidia doing the bare minimum for Linux development “bigots” lol
I think you must be a fanboy. “Bigotry” towards a multi trillion dollar company lmao. What an absurd thought.
I’m no fanboy of any video card. I just have a ton of laptops with Nvidia in them, and the bigots making Wayland never gave a darn about our plight… and then they started pushing distros to switch before they did anything to fix it. Their callous attitude toward the largest desktop Linux userbase is insulting, and pushing the distros before they fixed the problem should be criminal. Every one of them should be put away for trying to ruin Linux by abandoning its largest desktop user base. We dislike them, dislike them so much.
Now, will it keep us from using that crap when it finally works? No. We don’t have much choice. They’ve seen to that. x11 will go the way of the dodo. But can we dislike them forever for dragging us through the mud until they were finally forced to fix the darn thing? Yeah. Wish them nothing but the worst.
Nobody is being “bigoted” to Nvidia lmao, get some perspective.
And if you’re this butthurt about Wayland, don’t use it. I’ve been using it for years without issue, because I didn’t choose a hardware manufacturer that’s actively hostile to Linux. Nvidia is too bigoted for me, unfortunately.
What was stopping X just undergoing some gutting? I get it’s old and covered in dust and cobwebs but look, those can be cleaned off.
“Scoop out the tumors, and put some science stuff in ya”, the company that produced that quote went on to develop the most advanced AGI in the world and macro-scale portable on-demand indestructible teleportation.
I would rather X didn’t get access to deadly neurotoxin, thanks
I dunno, sounds kinda cool.
X12: it’s got 15% less X11!
Because we no longer have mainframes in computer labs. Each person now has their own machine.
And yet I play modern games on modern hardware with X just fine. It’s been extended a little bit since the 80s.
Yes, it works, but everything is glued together with duct tape.
Libxz
One might exist already: lzlib.
I admit I haven’t done a great deal of research, so maybe there are problems, but I’ve found that `lzip` tends to do better at compression than `xz`/`lzma` and, to paraphrase its manual, it’s designed to be a drop-in replacement for `gzip` and `bzip2`. It’s been around since at least 2009 according to the copyright messages.
That said, `xz` is going to receive a lot of scrutiny from now on, so maybe it doesn’t need replacing. Likewise, anything else that allows random binary blobs into the source repository is going to get the same sort of scrutiny. Is that data really random? Can it be generated by non-obfuscated plain text source code instead? Etc. etc.
Personally I quite like `zstd`; I find it has a pretty decent balance of speed to ratio at each of its levels.
Happens all the time on Linux. The current instance would be the shift from X11 to Wayland.
The first thing I noticed was when the audio system switched from OSS to ALSA.
And then ALSA to all those barely functional audio daemons, to PulseAudio, and then again to PipeWire. That one sure took a few tries to get right.
And the strangest thing about that is that neither PulseAudio nor Pipewire are replacing anything. ALSA and PulseAudio are still there while I handle my audio through Pipewire.
How is PulseAudio still there? I mean, sure, the protocol is still there, but it’s handled by `pipewire-pulse` on most systems nowadays (KDE specifically requires PipeWire).
Also, PulseAudio was never designed to replace ALSA; it sits on top of ALSA to abstract away some of the complexity programs would otherwise face if they used ALSA directly.
Pulse itself is not there but its functionality is (and they even preserved its interface and pactl). PipeWire is a superset of audio features from Pulse and Jack combined with video.
For anyone wondering: ALSA does sound-card detection and basic I/O at the kernel level, Pulse takes ALSA devices and does audio mixing at the user/system level, and PipeWire does what Pulse does but more, and even includes video devices.
And then from ALSA to PulseAudio haha
They’re at different layers of the audio stack though so not really replacing.
Join the hive mind. Rust is life.
This is going to make my bicycle workshop much easier to run.
My only two concerns are: one, Rust is controlled by a single entity, and two, it is young enough that we don’t know about all of its flaws.
Third concern: dependencies.
I installed a fairly small rust program recently (post-XZ drama), and was a bit concerned when it pulled in literally hundreds of crates as dependencies. And I wasn’t planning on evaluating all of them to see if they were secure/trustworthy - who knows if one of them had a backdoor like XZ? Rust can claim to be as secure as Fort Xnox, but it means nothing if you have hundreds of randoms constantly going in and out of the building, and we don’t know who’s doing the auditing and holding them accountable.
Not really software, but personally I think the FHS could do with replacing. It feels like it’s got a lot of historical baggage tacked on that it could really do with shedding.
Fault handling system?
Filesystem Hierarchy Standard: `/bin`, `/dev`, `/home` and all that stuff.
What’s wrong with it?
`$PATH` shouldn’t even be a thing; today disk space is cheap, so there is no need to scatter binaries all over the place.
Historically, `/usr` was created so that you could mount a new disk there and have more binaries installed on your system when the disk with `/bin` was full.
And there’s so much other stuff like that which doesn’t make sense anymore (`/var/tmp` comes to mind, `/opt`, `/home` which was supposed to be `/usr` but the name was already taken, etc.).
How would virtual environment software, like conda, work without $PATH?
Today’s software would probably break, but my point is that `$PATH` is a relic from ancient times that solved a problem we don’t have anymore.
deleted by creator
You missed my point. The reason $PATH exists in the first place is because binaries were too large to fit on a single disk, so they were scattered around multiple partitions (`/bin`, `/sbin`, `/usr/bin`, etc.). Now all your binaries can easily fit on a single partition (weirdly enough, `/usr/bin` was chosen as the “best candidate” for it), but we still have all the other locations symlinked there. It just makes no sense.
As for the override mechanism you mention, there are much better tools nowadays to do that (overlayfs, for example).
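For anyone unsure what `$PATH` actually does, here’s a rough sketch (not how any particular shell is implemented, just the general idea) of the lookup it exists to support: walk each listed directory and take the first file matching the command name.

```rust
use std::env;
use std::path::PathBuf;

// Sketch of the $PATH lookup a shell performs for every bare command name:
// split $PATH into directories and return the first one containing the file.
fn resolve(cmd: &str) -> Option<PathBuf> {
    let path = env::var_os("PATH")?;
    env::split_paths(&path)
        .map(|dir| dir.join(cmd))
        .find(|candidate| candidate.is_file())
}

fn main() {
    // With a Plan 9-style "everything lives in /bin" layout, none of this
    // indirection would be needed.
    println!("{:?}", resolve("ls"));
}
```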
This is what Plan 9 does, for example. There is no need for `$PATH` because all binaries are in `/bin` anyway. And to override a binary, you simply “mount” it over the existing one in place.
deleted by creator
Would be a crazy expensive migration though
Definitely. As nice as it would be, I don’t think it will significantly change any time soon, for several reasons. Not least of which is because several programs would likely just flatly refuse to implement such a change, judging by some of them refusing to even consider patches to implement the XDG Base Directory Specification.
So much of that is PDP-11 baggage or derived from it.
Or more generally Very Small Disk baggage.
We haven’t rewritten the firewall code lately, right? checks Oh, it looks like we have. Now it’s nftables.
I learned ipfirewall, then ipchains, then iptables came along, and I was like, oh hell no, not again. At that point I found software to set up the firewall for me.
Damn, you’re old. iptables came out in 1998. That’s what I learned in (and I still don’t fully understand it).
UFW → nftables/iptables. Never worry about chains again
I was just thinking that iptables lasted a good 20 years. Over twice that of ipchains. Was it good enough or did it just have too much inertia?
Nf is probably a welcome improvement in any case.
Wayland, Pipewire, systemd, btrfs/zfs, just to name a few.
Wayland is THE replacement for broken, hack-driven, insecure and unmaintainable Xorg.
Pipewire is THE replacement for the messy and problematic audio stack on Linux (PulseAudio, ALSA, etc.).
SystemD is THE replacement for SysVinit (and is an entire suite of software).
Yes, I know. I was answering the question of if there were instances of this happening.
Like many, I am not a fan of SystemD and hope something better comes along.
The only thing I personally dislike about systemd is the “waiting for service to stop 5mins/1h30mins” stuff during shutdowns and reboots. I know I can limit it to 10s or something, but how about just making systemd force-stop these services like, say, runit does?
When I’m using my bemenu script to shut down and feel like a hackerman, don’t take that away from me by being an annoyance, systemd!!!
Edit: Yes, I’m considering switching to Void, how could you tell?
- TPM encryption or LUKS in general
- general distro architecture like ostree
Starting anything from scratch is a huge risk these days. At best you’ll have something like the python 2 -> 3 ~~rewrite~~ overhaul (leaving scraps of legacy code all over the place); at worst you’ll have something like gnome/kde (where the community schisms rather than adopting a new standard). I would say that most of the time, there are only two ways to get a new standard to reach mass adoption.
- Retrofit everything. Extend old APIs where possible. Build your new layer on top of https, or javascript, or ascii, or something else that already has widespread adoption. Make a clear upgrade path for old users, but maintain compatibility for as long as possible.
- Buy 99% of the market and declare yourself king (cough cough chromium).
Python 3 wasn’t a rewrite, it just broke compatibility with Python 2.
In a good way. Using a non-verified bytes type for strings was such a giant source of bugs. Text is complicated and pretending it isn’t won’t get you far.