Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 38 Comments
Joined 1 year ago
Cake day: June 25th, 2023






  • Yeah it’ll depend on how good your coreboot implementation is. AFAIK it’s pretty good on Chromebooks because Google maintains it, whereas a corebooted ThinkPad might have some downsides to it.

    The slowdowns I would attribute to bad power management, because ultimately the code runs on the CPU with no involvement from the BIOS unless you call into it, which should happen very rarely.

    Looking up the article seems to confirm:

    The main reason it seems for the Dasharo firmware offering lower performance at times was the Core i5 12400 being tested never exceeded a maximum peak frequency of 4.0GHz while the proprietary BIOS successfully hit the 4.4GHz maximum turbo frequency of the i5-12400. Meanwhile the Dasharo firmware never led to the i5-12400 clocking down to 600MHz on all cores as a minimum frequency during idle, but instead idled around ~974MHz.
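    If you want to sanity check what your own firmware ends up allowing, the kernel exposes that through the standard cpufreq sysfs files. Rough sketch (values are in kHz; which files exist can vary by driver):

    ```python
    from pathlib import Path

    # Standard cpufreq sysfs directory for CPU 0; values are reported in kHz.
    cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

    for name in ("cpuinfo_min_freq", "cpuinfo_max_freq",
                 "scaling_min_freq", "scaling_max_freq"):
        f = cpu0 / name
        if f.exists():
            print(name, int(f.read_text()) // 1000, "MHz")
    ```

    If the max never reaches the advertised turbo frequency, that points at the firmware/power-management side rather than anything the OS is doing.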

    I’d expect System76 laptops to have a smaller performance gap if any since it’s a first-party implementation and it’s in their interest for that stuff to work properly. But I don’t have coreboot computers so I can’t validate, that’s all assumptions.

    That said, for a 5% performance loss I’d say it counts as viable. My games VM has a similar hit vs native. I’ve been gaming on Linux since well before Proton and Steam, and have taken much larger performance hits just to avoid closing all my work and rebooting for break-time games.


  • Yes dual GPU. I set that up like 6 years ago, so its use changed over time. It used to be Windows but now it’s another Linux VM.

    The reason I still use it is it serves as a second seat and is very convenient at that. The GPU’s output is connected to the TV, so the TV gets its own dedicated and independent OS. So my wife can use it when I’m not. When the VM isn’t running I use the card as a render offload, so games get the full power of the better card as well.
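    Roughly what the render offload part looks like, assuming Mesa’s DRI_PRIME switch (glxinfo here is just a stand-in to confirm which GPU gets picked, substitute the actual game; the “1” may differ depending on how your cards enumerate):

    ```python
    import os
    import subprocess

    # DRI_PRIME=1 asks Mesa to render on the secondary GPU instead of the default one.
    env = dict(os.environ, DRI_PRIME="1")

    # Print the brief GL info so you can confirm the right renderer is being used.
    result = subprocess.run(["glxinfo", "-B"], env=env, capture_output=True, text=True)
    print(result.stdout)
    ```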

    I also use it for toying with macOS and Windows because both of those are basically unusable without some form of 3D acceleration. For Windows I use Looking Glass, which makes it feel pretty close to native performance. I don’t play games in it anymore but I still need to run Visual Studio to build the Windows exes for some projects.

    This week I also used the second card to test out stuff on Bazzite because one of my friends finally made the switch and I need to be able to test things out in it as I have no fucking clue how uBlue works.


  • The BIOS does a lot less than you’d expect; it doesn’t really have an impact on gaming performance. For what it’s worth, I’ve been gaming in a VM for years, and it uses the TianoCore/OVMF/EDK2 firmware with no issues. Once Linux is booted, it doesn’t really matter all that much. You’re not even allowed to use firmware services after the OS is booted; they’re only meant for bootloaders or simple applications. As long as all the hardware is initialized and configured properly it shouldn’t matter.


  • That should be mostly the default. My secondary Vega 64 reports using only 3W, which would be worth chasing on a laptop, but I doubt 3W makes a dent in your electricity bill. It’s nothing compared to the overall power usage of the rest of the desktop and the monitors. Pretty sure even my fans use more.
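    If you want to see what your own card reports, amdgpu exposes it through hwmon. Quick sketch (the exact file name can vary by driver and kernel version; power1_average is in microwatts):

    ```python
    from glob import glob
    from pathlib import Path

    # amdgpu reports GPU power draw via hwmon; power1_average is in microwatts.
    for path in glob("/sys/class/drm/card?/device/hwmon/hwmon*/power1_average"):
        watts = int(Path(path).read_text()) / 1_000_000
        print(path, f"{watts:.1f} W")
    ```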

    The best way to address this would be to first take proper measurements. Maybe get a kill-a-watt and measure usage with and without the card installed to get the true usage at the wall. Also maybe get a baseline with as little hardware as possible. With that data you can calculate roughly how much it costs to run the PC and how much each component costs, and from there it’s easier to decide if it’s worth it.
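    The math itself is trivial once you have the numbers. Something like this, with made-up figures you’d replace with your own wall measurements and electricity rate:

    ```python
    # All numbers below are placeholders; plug in your own measurements and rate.
    watts_with_card    = 85.0   # measured at the wall, card installed
    watts_without_card = 78.0   # measured at the wall, card removed
    rate_per_kwh       = 0.15   # your electricity rate, in $/kWh
    hours_per_day      = 8.0    # how long the PC runs per day

    extra_kwh_per_day = (watts_with_card - watts_without_card) / 1000 * hours_per_day
    extra_cost_per_month = extra_kwh_per_day * 30 * rate_per_kwh
    print(f"Extra cost of keeping the card: ~${extra_cost_per_month:.2f}/month")
    ```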

    Just the electric bill being higher isn’t a lot to go on. Could just be that it’s getting cold, or hot. Little details can really throw expectations off. For example, mining crypto during the winter is technically cheaper than not mining for me because I have electric heat: between 500W in a heating strip and 500W mining crypto, both produce the same amount of heat in the room, but one of them also makes me a few cents as a byproduct. You have to consider that kind of thing when optimizing for cost rather than maximizing battery life on a laptop.


  • It’s nicknamed the autohell tools for a reason.

    It’s neat but most of its functionality is completely useless to most people. The autotools are so old I think they even predate Linux itself, so they’re designed for portability between the UNIXes of the time: they check the compiler’s capabilities and supported features and try to find paths. They also wildly predate package managers, so running the generated configure script was the official way to install things, which meant they also had to check for dependencies, find them, and all that stuff.

    Nowadays you might as well just write a PKGBUILD if you want to install it, or a Dockerfile. There’s just no need to check for 99% of the stuff the autotools check. Everything they check for has probably been a standard compiler feature for at least the last decade, and the package manager can ensure the build dependencies are present.

    Ultimately that whole process just generates a Makefile via M4 macros, and the Makefiles that come out look about as good as any other generated Makefiles from the likes of CMake and Meson. So you might as well go with your hand-written Makefile, and use a better tool when it’s actually time to generate one.

    (If only C++ build systems caught up to Golang lol)

    At least it’s not node_modules





  • I think it is a circular problem.

    Another example that comes to mind: the sanctions on Huawei and whether Google would be considered to be supplying software because Android is open-source. At the very least, any contributions from Huawei are unlikely to be accepted into AOSP. The EU is also becoming problematic with the whole software origin and quality certification scheme they’re trying to impose.

    This leads to exactly what you said: national forks. In Huawei’s case that’s HarmonyOS.

    I think we need to get back to being anonymous online: if you’re anonymous, nobody knows where you’re from and your contributions can be judged solely on their merit. The legal framework just isn’t set up for an environment like the Internet that severely blurs the lines between borders, with no clear “this company is supplying that company in the enemy country”.

    Governments can’t control it, and they really hate it.


  • The problem isn’t even where the software is officially based, it can become a problem for individual contributors too.

    PGP for example used to be problematic because US export controls on encryption used to forbid exporting systems capable of strong encryption, since the US wanted to be able to break it when used by others. Sending the tarball of the PGP software by an American to the Soviets at the time would have been considered treason against the US, let alone letting them contribute to it. Heck, sharing 3D-printable gun models with a foreign country can probably be considered supplying weapons as if they were real guns. So even if Linux were based in a more neutral country not subject to US sanctions, the sanctions would make it illegal to use or contribute to it anyway.

    As much as we’d love to believe in the FOSS utopia that transcends nationality, the reality is we all live in real countries with laws that restrict what we can do. Ultimately the Linux maintainers had to do what’s best for the majority of the community, which mostly lives in NATO countries honoring the sanctions against Russia and China.



  • Everyone’s approaching this from the privacy aspect, but the real reason isn’t that the cashier thought you were weird; they’re just underpaid and under a lot of pressure from management to try multiple times, and in some cases they even get written up for not doing it because it’s deemed part of their job. They hate it just as much as you do. Same when you try to cancel your cable subscription or whatever: the calls are recorded and their performance is monitored, and they make damn sure they try at least 3 times to upsell you, even when it’s painfully obvious you’re done with them.

    Just politely decline until they’ve asked however many times they’re required to ask, and move on.


  • The sandboxing is almost always better because it’s an extra layer.

    Even if you gain root inside the container, you’re not necessarily even root on the host. So you have to exploit some software that has a known vulnerable library, trigger that in that single application that uses this particular library version, root or escape the container, and then root the host too.
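    You can see that mapping directly: in a rootless container, /proc/self/uid_map shows what container UIDs actually map to on the host. A quick check you can run inside the container:

    ```python
    from pathlib import Path

    # Each line is: UID inside the namespace, UID it maps to outside, range length.
    # "0 100000 65536" means container root is an unprivileged host UID (100000),
    # while "0 0 4294967295" means container root really is host root.
    print(Path("/proc/self/uid_map").read_text())
    ```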

    The most likely outcome is it messes up your home folder and anything your user has access to, and probably even less than that.

    Also, something with a known vulnerability doesn’t mean it’s triggerable. If you use, say, a zip library and only use it to decompress your own assets, then it doesn’t matter what bugs it has; it will only ever decompress that one known-good zip file. It’s only a problem if untrusted files get involved, where you can trick the user into opening them and triggering the exploit.

    It’s not ideal to have outdated dependencies, but the sandboxing helps a lot, and the fact that only a few apps have known vulnerable libraries further reduces the attack surface. You start having to chain a lot of exploits to do anything meaningful, and at that point you’d aim that kind of effort at bigger, more valuable targets.