
LeFantome

LeFantome@programming.dev
0 posts • 1.2K comments

Except that they are not expecting to merge this into RHEL. They are sending it to CentOS Stream.


I am a pretty big fan of Open Source and have used Linux myself since the early 90’s. Most governments are not going to save money switching to Open Source. At least not within, say, the term of a politician or an election cycle. Probably the opposite.

This kind of significant shift costs money. Training costs. Consultants. Perhaps hardware. It would not be at all surprising if there are custom software solutions in place that need to be replaced. The dependencies and complexities may be significant.

There are quite likely to be savings over the longer term. The payback may take longer than you think though.

I DO believe governments should adopt Open Source. Not just for cost though. One reason is control and a reduction of influence ( corruption ). Another is so that public investment results in a public good. Custom solutions could more often be community contributions.

The greatest savings over time may actually be a reduction in forced upgrades on vendor-driven timelines. Open Source solutions that are working do not always need investment. The investment could be in keeping them compatible longer. At the same time, it is also more economical to keep Open Source up to date. Again, it is more about control.

Where there may be significant cost savings is a reduction in the high costs of “everything as a service” product models.

Much more important than Open Source ( for government ) are open formats. First, if the government uses proprietary software, they expect the public to use it as well and that should not be a requirement. Closed formats can lead to restrictions on what can be built on top of these formats and these restrictions need to be eliminated as well. Finally, open formats are much, much more likely to be usable in the future. There is no guarantee that anything held in any closed format can be retrieved in the future, even if the companies that produced them still exist. Can even Microsoft read MultiPlan documents these days? How far back can PageMaker files be read? Some government somewhere is sitting on multimedia CD projects that can no longer be decoded.

What about in-house systems that were written in proprietary languages or on top of proprietary databases? What about audio or video in a proprietary format? Even if the original software is available, it may not run on a modern OS. Perhaps the OS needed is no longer available. Maybe you have the OS too but licenses cannot be purchased.

Content and information in the public record has to remain available to the public.

The most important step is demanding open document formats such as OpenDocument ( what LibreOffice uses ), along with AV1, Opus, and AVIF. For any custom software, it needs to be possible to build it with open compilers and tools. Web pages need to follow open standards. Archival and compression formats need to be open.
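
To make that concrete, here is a rough sketch of the kind of batch conversion I mean, assuming an ffmpeg build with the libaom-av1 and libopus encoders ( the directory names and the .wmv extension are just placeholders ):

```python
import subprocess
from pathlib import Path

# Sketch only: migrate archived audio/video into open codecs ( AV1 + Opus in WebM ).
# Assumes ffmpeg is on PATH and was built with libaom-av1 and libopus.
SOURCE_DIR = Path("archive/proprietary")  # hypothetical location of legacy files
TARGET_DIR = Path("archive/open")

def to_open_av(src: Path) -> Path:
    """Transcode one file to AV1 video and Opus audio in a WebM container."""
    dst = TARGET_DIR / src.with_suffix(".webm").name
    dst.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src),
         "-c:v", "libaom-av1", "-crf", "30",
         "-c:a", "libopus",
         str(dst)],
        check=True,
    )
    return dst

if __name__ == "__main__":
    for src in SOURCE_DIR.glob("*.wmv"):  # or whatever legacy formats the archive holds
        print("converted:", to_open_av(src))
```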

After all that, Open Source software ( including the OS ) would be nice. It bothers me less though. At that point, it is more about ROI and Total Cost of Ownership. Sometimes, proprietary software will still make sense.

Most proprietary suppliers actually do stuff for the fees they charge. Are governments going to be able to support their Open Source solutions? Do they have the expertise? Can they manage the risks? Consultants and integrators may be more available, better skilled, and less expensive on proprietary systems. Even the hiring process can be more difficult as local colleges and other employers are producing employees with expertise in proprietary solutions but maybe not the Open Source alternatives. There is a cost for governments to take a different path from private enterprise. How do you quantify those costs?

Anyway, the path to Open Source may not be as obvious, easy, or inexpensive as you think. It is a good longer term goal though and we should be making progress towards it.


It is hard to tell if that article is written to obscure or misrepresent facts accidentally or on purpose.

It says stuff like hydro “normally represents 60%” of the power generation without saying what it is now. It for sure does not tell you if hydro generates more or less electricity now vs the past.

The closest we get to a fact that illustrates the narrative is that Quebec hydro exports are down 18% in 2023 from 2022. Again, it does not say how much was generated. Obviously there is still enough hydro power available as they are still exporting a lot of it. Does the drop have anything at all to do with capacity?

It says that BC “imported” almost 20% of its power but does not tell us how much it exported. This tells us absolutely nothing. Why? Because of how BC uses power.

Unlike most other sources, hydro power is easily turned on or off whenever you want. You cannot control when the sun shines or wind blows. Turning coal or nuclear plants on or off is expensive.

Electricity markets in much of the US are deregulated, which means that prices spike when demand is high ( daytime ) and drop when it is low ( night ). BC generates excess hydro power during the day and sells it to the US grid. At night, when prices drop, BC buys power back from the US grid ( or Alberta ) and lets the reservoirs fill back up. How much BC imports has more to do with market price than anything else.
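
To illustrate the arbitrage with made-up numbers ( these are not actual BC Hydro figures, just a sketch of the mechanism ):

```python
# Hypothetical prices and volume, purely illustrative.
day_price = 80.0     # $/MWh when demand ( and price ) peaks
night_price = 30.0   # $/MWh overnight when demand drops
volume_mwh = 1_000   # energy sold south during the day and bought back at night

revenue = volume_mwh * day_price    # sell surplus hydro into the US grid by day
cost = volume_mwh * night_price     # buy the same amount back overnight
print(f"Net gain from the daily swap: ${revenue - cost:,.0f}")
# The reservoirs refill overnight, so the "import" is a market play,
# not a sign that BC cannot generate enough power.
```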

Saying BC buys 20% of its electricity tells us nothing as a fact on its own.

The article shares important truths but does it in a biased and misleading way. I do not trust the narrative.

The most important truth is likely the mushrooming demand. The world ( not just Canada ) is requiring more and more electricity every year. It is quite likely that existing hydro power in Canada will have to be increasingly used to meet domestic demand and that new sources of electricity will need to be identified.

As a global phenomenon, we are creating much more “green” electricity than we expected to. However, that has mostly gone to new demand, and older power plants ( like coal ) have not always been decommissioned as planned. As a planet, we are using more fossil fuel than ever despite all the green progress made. That does not mean that green power generation has not worked out or is somehow a failure. At least we are not building coal plants to meet all the demand. I bet we are still building natural gas plants though. Still better, but still.

The “lakes drying up” story is also real and not just in Canada. I am not really debating that as a backdrop. However, in the absence of actual head to head facts showing otherwise, I call BS that hydro power plants in Canada have had to turn off or that production has materially dropped. Also, places like BC have certainly not been building coal plants and are not going to. If I did not know any better, that article would have left me with a profound misunderstanding of what is actually going on.


Manjaro, because it is a bait-and-switch trap. It seems really polished and user friendly. You will find out eventually that it is a system-destroying time bomb and a poorly managed project.

Ubuntu because snaps.

The rest are all pros and cons that are different strokes for different folks.


An amazing accomplishment and a real feather in the cap for Rust.

When are we going to see this incorporated into the mainline kernel?


Jellyfin has been rock solid for me, especially since the move to .NET 8. Looking forward to this release.


I read this on my 2013 MacBook Air running EndeavourOS. It runs amazingly well, including video meetings.


If we are marking the birth of Linux and trying to call it GNU / Linux, we should remember our history.

Linux was not created with the intention of being part of the GNU project. In this very announcement, it says “not big and professional like GNU”. Taking away the adjectives, the important bit is “not GNU”. Parts of GNU turned out to be “big and professional”. Look at who contributes to GCC and Glibc for example. I would argue that the GNU kernel ( HURD ) is essentially a hobby project though ( not very “professional” ). The rest of GNU never really got that “big” either. My Linux distro offers me something like 80,000 packages and only a few hundred of them are associated with the GNU project.

What I wanted to point out here though is the license. Today, the Linux kernel is distributed via the GPL. This is the Free Software Foundation’s ( FSF ) General Public License—arguably the most important copyleft software license. Linux did not start out GPL though.

In fact, the early goals of the FSF and Linus were not totally aligned.

Richard Stallman started the GNU project ( and later the FSF ) to create a POSIX system that provides his four freedoms, and the GPL was conceived to enforce this. The “free” in FSF stands for freedom. In the early days, GNU was not free as in money; Richard Stallman did not care about that. He made money for the FSF by charging for distribution of GNU on tapes.

While Linus Torvalds has always been a proponent of Open Source, he has not always been a great advocate of “free software” in the FSF sense. The reason that Linus wrote Linux is that MINIX ( and UNIX of course ) cost money. When he says “free” in this announcement, he means money. When he started shipping Linux, he did not use the GPL. Perhaps the most important provision of the original Linux license was that you could NOT charge money for it. So we can see that Linus and RMS ( Richard Stallman ) had different goals.

In the early days, a “working” Linux system was certainly Linux + GNU ( see my reply elsewhere ). As there was no other “free” ( legally unencumbered ) UNIX-a-like, Linux became popular quickly. People started handing out Linux CDs at conferences and in universities ( this was pre-WWW remember ). The Linux license meant that you could not charge for these though and, back then, distributing CDs was not cheap. So being an enthusiastic Linux promoter was a financial commitment ( the opposite of “free” ).

People complained to Linus about this. Imposing financial hardship was the opposite of what he was trying to do. So, to resolve the situation, Linus switched the Linux kernel license to GPL.

The Linux kernel uses a modified GPL though. It is one that makes it more “open” ( as in Open Source ) but less “free” ( as in RMS / FSF ).

Switching to the GPL was certainly a great move for Linux. It exploded in popularity. When the web became a thing in the mid-90’s, Linux grew like wildfire and it dragged parts of the GNU project into the limelight with it.

As a footnote, when Linus sent this announcement that he was working on Linux, BSD was already a thing. BSD was popular in academia and a version for the 386 ( the hardware Linus had ) had just been created. As BSD was more mature and more advanced, arguably it should have been BSD and not Linux that took over the world. BSD was free both in terms of money and freedom. It used the BSD license of course which is either more or less free than the GPL depending on which freedoms you value. Sadly, AT&T sued Berkeley ( the B in BSD ) to stop the “free” distribution of BSD. Linux emerged as an alternative to BSD right at the moment that BSD was seen as legally risky. Soon, Linux was reaching audiences that had never heard of BSD. By the time the BSD lawsuit was settled, Linux was well on its way and had the momentum. BSD is still with us ( most purely as FreeBSD ) but it never caught up in terms of community size and / or commercial involvement.

If not for that AT&T lawsuit, there may have never been a Linux as we know it now and GNU would probably be much less popular as well.

Ironically, at the time that Linus wrote this announcement, BSD required GCC as well. Modern FreeBSD uses Clang / LLVM instead but this did not come around until many, many years later. The GNU project deserves its place in history and not just on Linux.


Wayland is the future. It has already surpassed X11 in many ways. My favourite comment on Phoronix was “When is X11 getting HDR? I mean, it was released 40 years ago now.”

That said, the fact that this pull request came from Valve should carry some weight. Perhaps Wayland really is not ready for SDL.

I do not see why we need to break things unnecessarily as we transition. This is on the app side. Sticking with X11 for SDL ( for now ) does not harm the Wayland transition in any way. These applications will still work fine via Xwayland.
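
For what it is worth, forcing the X11 backend does not even require code changes in most apps; it is an environment variable. A minimal sketch, assuming pygame ( which bundles SDL ) is installed:

```python
import os

# Pin SDL to its X11 backend; on a Wayland session the app simply runs under Xwayland.
# SDL_VIDEODRIVER is a standard SDL environment variable.
os.environ.setdefault("SDL_VIDEODRIVER", "x11")

import pygame  # any SDL-based app or wrapper behaves the same way

pygame.display.init()
print("SDL video driver in use:", pygame.display.get_driver())
```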

Sure, a major release like 3.0 seems like a good place to make the switch. In the end though, it is either ready or it is not. If the best path for SDL is to keep the default at X11 then so be it ( for now ).


Microsoft must make 40% of their revenue off of Azure at this point. I would not be surprised if more than 50% of that is on Linux. Windows is probably down to 10% ( around the same as gaming ).

https://www.kamilfranek.com/microsoft-revenue-breakdown/

Sure, there are people in the Windows division who want to kill Linux, and some dev folks will still prefer Windows. At this point though, a huge chunk of Microsoft could not care less about Windows and may actually prefer Linux. Linux is certainly a better place for K8S and OCI stuff. All the GPT and Cognitive Services stuff is likely more Linux than not.

Do people not know that Microsoft has their own Linux distro? I mean, an installation guide is not exactly their biggest move in Linux.
