PC won't start? Our software fixes that!



  • I know people in marketing are known for exaggerating the truth slightly, but.. 

     

    • PC Errors - Random crashes, freezes and restarts
      Cleans & Defrags Registry, clears hard drive clutter.

    • Slow Computer - Sluggish programs, downloads & startup
      Accelerates Windows startup, unclogs Registry bottlenecks.

    • Dead System - PC won't start, system or drive is down
      Revives unstartable systems, repairs corrupt drives, rescues files.

    Boosts PC start time and web speed up to 300%

     

    • Expensive Repairs - Costly expert help from strangers to fix your PC
      Patent-pending [Censored] automatically keeps PCs running like new.
     

    I know software claiming to "fix" your registry isn't new, and neither is software claiming to speed up your internet by an arbitrary figure, but I'm interested in the claim of running software to fix a PC that is unstartable or has a dead drive.  

    And I'm not sure about the "Costly expert help from strangers", or the fact that I have to edit the HTML of this post just because I did a copy & paste and confused the editor.

    Oh, and I forgot - this is a web download (naturally)

    [Trademarks censored]


  •  What's so unbelievable about software that makes a system that is unstartable (e.g. Windows doesn't boot) work again? In fact, a Linux boot CD would make a system that doesn't start because Windows is effed up work again. The Windows recovery procedure might make the unstartable system startable again...

    Of course, it would have to be a boot CD or similar, as you can't run the application on a system that doesn't boot properly otherwise.



  • @Mole said:

    Expensive Repairs - Costly expert help from strangers to fix your PC
    Patent-pending [Censored] automatically keeps PCs running like new.

    Wait, they were actually advertising costly expert help from strangers?

     

    Also, I thought you found an ad saying "patent-pending shit" until I got to the bottom. "Redacted" or "removed" would have been better.



  • Reimage (http://www.reimage.com/) actually works quite well.



  • @Evo said:

    What's so unbelievable about software that makes a system that is unstartable (e.g. Windows doesn't boot) work again?
    If they shipped a CD out to you to use then fair enough, but it's a web download (as stated in the first post), and from their claims, I'd say their target audience wouldn't know how to burn a disk image (if it even contains one) to a CD, make it bootable, and then actually use it.



  • @Mole said:

    I'd say their target audience wouldn't know how to burn a disk image (if it even contains one) to a CD, make it bootable, and then actually use it.

    I'd even go one step further and say that those developers wouldn't even know how to write bootable software and make an image of it. Or know what a "CD" is.



  • To be fair, I remember back in the ISA bus days there were certain special "diagnostic" expansion cards with their own mini-display (usually an LED segment display) that claimed to display detailed error codes about why a certain PC would not start, but even those didn't claim to magically fix everything. Plus, they were marketed at technicians, not home users.

     Edit: they still make similar cards, apparently...

    These ads, just like those "driver wizards" that claim to be a one-stop solution for all drivers, or to eliminate driver hunting (ha! If only that were true...), or those "codecless" media players, etc., play on desperate Joe Consumers' hopes of an easy, no-strings-attached fix. And usually they are pure snake oil or malware.

    When working as a computer service station commander in the Army (and way before and way after that), the most common question I got was whether there was some sort of magical "universal" Windows installation CD that has ALL drivers in it and somehow tailors itself to whatever system you try installing it on. "All" could mean having the latest HP printer drivers, all the way down to some ESS Media ISA "SB compatible" soundcard, and that defunct mobo chipset nobody remembers.

    Linux distros may appear able to do just that, but actually they merely have more generic built-in support for certain chipsets (VIA, nVidia) that Windows does not have, plus they often download stuff as they install, which is not standard procedure on Windows. It may be possible to selectively slipstream some of the most common drivers that are usually missing from Windows (e.g. VIA chipsets, RAID controllers, generic ATI and nVidia drivers), but that's a far cry from a truly "universal" installer that includes even drivers for your crappy ISA FM tuner card.

    As for the rest of the claims... heh, the usual "TUNE UP YOU'RE WINDOS (sic) PERFORMANCE ADN AXXELR8 J00R 1NT3RN3TS K0NN3KTION 111!!1one11!!" snake oil. You'll be lucky if you get a half-assed "registry cleaner" (more like registry bogo-sorter), a tmp-file deleter, and one of those infamous "modem accelerators" that claimed up to 600% performance boosts on dial-up (usually via an extra layer of caching for HTTP requests, or by turning on modem compression in case it wasn't used for some reason (never happened to me, and I only used crappy softmodems)). Nothing new here, moving right along.



  • @C4I_Officer said:

    To be fair, I remember back in the ISA bus days there were certain special "diagnostic" expansion cards with their own mini-display (usually an LED segment display) that claimed to display detailed error codes about why a certain PC would not start, but even those didn't claim to magically fix everything. Plus, they were marketed at technicians, not home users.

     Edit: they still make similar cards, apparently...


    That looks vaguely similar to the port 80 display many modern motherboards have. A rather nice thing to have when overclocking.

    @C4I_Officer said:

    Linux distros may appear able to do just that, but actually they merely have more generic built-in support for certain chipsets (VIA, nVidia) that Windows does not have, plus they often download stuff as they install, which is not standard procedure on Windows. It may be possible to selectively slipstream some of the most common drivers that are usually missing from Windows (e.g. VIA chipsets, RAID controllers, generic ATI and nVidia drivers), but that's a far cry from a truly "universal" installer that includes even drivers for your crappy ISA FM tuner card.


    Linux also has a vastly different philosophy with drivers. The majority of all existing hardware drivers are actually in the kernel tree, making it very easy for distributors to create a kernel that has all the drivers compiled as modules. In many cases a single driver can handle a whole bunch of similar hardware, like multiple revisions of a SATA controller. And since the drivers are just drivers and not bundled with a metric fuckton of crapware, the resulting bundle has a manageable size.
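
    A quick way to see that module-based approach in practice, assuming a stock distro kernel built with modules (ahci is just one example of a single driver covering many controllers):

    # list the driver modules currently loaded
    lsmod | head
    # show what one SATA driver handles and which parameters it takes
    modinfo ahci
    # load a driver by hand if it wasn't autoloaded
    sudo modprobe ahci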

    Network install also makes sense given the robustness and flexibility of the various package managers available for Linux. Most distros do offer offline installation discs, but with a network install you can only download what you need and will get the latest versions of the software. Windows, on the other hand, is just a giant blob for the most part, with relatively few optional features. And if it's been more than a week since that version was released, there will likely be hundreds of patches waiting to be downloaded in Windows Update.



  • My previous motherboard had one of those LED displays built into it for the purpose of "If your system doesn't boot, check the value on the display and refer to manual". The only problem was that the values it normally stopped on were not documented... 



  • @tdb said:

    Network install also makes sense given the robustness and flexibility of the various package managers available for Linux.

    I LOL'd.

     

    @tdb said:

    And if it's been more than a week since that version was released, there will likely be hundreds of patches waiting to be downloaded in Windows Update.

    And if you have any quantity of desktop software installed in Linux, there will also be tons of patches.  The main difference is that Windows Update downloads all the hotfixes and applies them individually whereas Linux package managers usually just download a full binary of the latest version.
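
    On the Linux side, that "full binary" flow is just the normal upgrade path; a sketch for a Debian-style system (the package name is only an example):

    # refresh the package lists, then fetch complete new packages for anything outdated
    sudo apt-get update
    sudo apt-get upgrade
    # see which full version a single package would be upgraded to
    apt-cache policy firefox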



  • @morbiuswilters said:

    @tdb said:

    Network install also makes sense given the robustness and flexibility of the various package managers available for Linux.

    I LOL'd.

    Care to elaborate on exactly how Windows is superior in this aspect?


    @morbiuswilters said:

    @tdb said:

    And if it's been more than a week since that version was released, there will likely be hundreds of patches waiting to be downloaded in Windows Update.

    And if you have any quantity of desktop software installed in Linux, there will also be tons of patches.  The main difference is that Windows Update downloads all the hotfixes and applies them individually whereas Linux package managers usually just download a full binary of the latest version.

    Sure, Linux distros have updates too. My point with this was that Windows downloads stuff from the net too, not that Linux would somehow magically be bug-free and complete from day one. A plus side for downloading the entire binary is that you only need to download the latest version, not every one of the million patches between release and now.



  • @scgtrp said:

    @Mole said:

    Expensive Repairs - Costly expert help from strangers to fix your PC
    Patent-pending [Censored] automatically keeps PCs running like new.

    Wait, they were actually advertising costly expert help from strangers?

     

    That actually isn't part of the WTF; 4 out of the 5 first-half-bolded lines (including this one) are of the form "Bad Thing - Slightly wordier explanation of bad thing" (followed on the next line by "what our snake oil supposedly does to cure it").  The WTF is "Boosts PC start time and web speed up to 300%", which breaks this pattern.

     



  • @tdb said:

    @morbiuswilters said:

    @tdb said:

    Network install also makes sense given the robustness and flexibility of the various package managers available for Linux.

    I LOL'd.

    Care to elaborate on exactly how Windows is superior in this aspect?

    Typical Linux fanboi.  Somebody criticizes something about Linux and the first response is "Well Windoze isn't very good, either!"  Maybe, just maybe, it's possible Linux package managers suck independent of how good or bad Windows is.

     

    @tdb said:

    @morbiuswilters said:

    @tdb said:

    And if it's been more than a week since that version was released, there will likely be hundreds of patches waiting to be downloaded in Windows Update.

    And if you have any quantity of desktop software installed in Linux, there will also be tons of patches.  The main difference is that Windows Update downloads all the hotfixes and applies them individually whereas Linux package managers usually just download a full binary of the latest version.

    Sure, Linux distros have updates too. My point with this was that Windows downloads stuff from the net too, not that Linux would somehow magically be bug-free and complete from day one. A plus side for downloading the entire binary is that you only need to download the latest version, not every one of the million patches between release and now.

    I'm not sure how you think you made that point.   It was fairly boilerplate Windows bashing ("Oh, hey, Windoze has patches!") that ignored the identical reality of Linux.  Downloading the entire binary has benefits and downsides.  For one, a dozen small patches is a lot less to download than one big binary.  Also, most commercial software has not historically been available for full download due to piracy concerns (whether this is a valid strategy to combat piracy is up for debate), although that is changing.  The patch -> patch -> patch strategy is very well-established in commercial software and I guess it's nice FOSS has the ability to choose full binaries, but I'd hardly call it a major selling point.



  • @morbiuswilters said:

    @tdb said:

    @morbiuswilters said:

    @tdb said:

    Network install also makes sense given the robustness and flexibility of the various package managers available for Linux.

    I LOL'd.

    Care to elaborate on exactly how Windows is superior in this aspect?

    Typical Linux fanboi.  Somebody criticizes something about Linux and the first response is "Well Windoze isn't very good, either!"  Maybe, just maybe, it's possible Linux package managers suck independent of how good or bad Windows is.

    Well, Windows is the most common point of comparison. Pardon me for assuming that's what you meant if it wasn't. It's well within the realm of possibility that Linux package managers have shortcomings without being compared to anything. If you have some specific points in mind, I'd be interested to hear of them so that the software can be improved.

    @morbiuswilters said:

    @tdb said:
    @morbiuswilters said:

    @tdb said:

    And if it's been more than a week since that version was released, there will likely be hundreds of patches waiting to be downloaded in Windows Update.

    And if you have any quantity of desktop software installed in Linux, there will also be tons of patches.  The main difference is that Windows Update downloads all the hotfixes and applies them individually whereas Linux package managers usually just download a full binary of the latest version.

    Sure, Linux distros have updates too. My point with this was that Windows downloads stuff from the net too, not that Linux would somehow magically be bug-free and complete from day one. A plus side for downloading the entire binary is that you only need to download the latest version, not every one of the million patches between release and now.

    I'm not sure how you think you made that point.   It was fairly boilerplate Windows bashing ("Oh, hey, Windoze has patches!") that ignored the identical reality of Linux.  Downloading the entire binary has benefits and downsides.  For one, a dozen small patches is a lot less to download than one big binary.  Also, most commercial software has not historically been available for full download due to piracy concerns (whether this is a valid strategy to combat piracy is up for debate), although that is changing.  The patch -> patch -> patch strategy is very well-established in commercial software and I guess it's nice FOSS has the ability to choose full binaries, but I'd hardly call it a major selling point.

    It was a response to this (relevant part bolded):

    @C4I_Officer said:
    Linux distros may appear able to do just that, but actually they merely have more generic built-in support for certain chipsets (VIA, nVidia) that Windows does not have, plus they often download stuff as they install, which is not standard procedure on Windows.

    That sentence struck me as criticism of the common practice of Linux distribution vendors to provide netinstall disks as the favoured method of installing. I provided counterpoints (or at least tried to) by noting that (1) there are full offline install disks available for Linux as well (all right, the original post has the word "often" in it), and (2) unlike what the above sentence insinuates, downloading stuff is standard procedure on Windows (although it happens after the bulk of the installation is completed).

    In short: Yes, Linux netinstallers download stuff from the net. Windows downloads stuff from the net too. I don't see how downloading stuff from the net is a bad thing.



  • @tdb said:

    I don't see how downloading stuff from the net is a bad thing.

    It doesn't work well if the net is not available or if you pay for traffic. Duh.



  • @tdb said:

    Sure, Linux distros have updates too. My point with this was that Windows downloads stuff from the net too, not that Linux would somehow magically be bug-free and complete from day one. A plus side for downloading the entire binary is that you only need to download the latest version, not every one of the million patches between release and now.

    First it was hundreds in the first week, and now it's a million.  Whatever next?  An infinite number of patches within the first second?  Meanwhile I'm looking at my Windows 7 machine that has had 27 - count 'em - OS patches since RTM.  Surely something must be wrong, and it's either reality or the opinion of Weenix Lunies who can't be bothered even getting the facts right before spouting ridiculous hyperbole.



  • @mfah said:

    @tdb said:

    Sure, Linux distros have updates too. My point with this was that Windows downloads stuff from the net too, not that Linux would somehow magically be bug-free and complete from day one. A plus side for downloading the entire binary is that you only need to download the latest version, not every one of the million patches between release and now.

    First it was hundreds in the first week, and now it's a million.  Whatever next?  An infinite number of patches within the first second?  Meanwhile I'm looking at my Windows 7 machine that has had 27 - count 'em - OS patches since RTM.  Surely something must be wrong, and it's either reality or the opinion of Weenix Lunies who can't be bothered even getting the facts right before spouting ridiculous hyperbole.

    Judging from your heated reply, evidently my use of hyperbole was a success. Sure, Windows 7 has not accumulated all that many patches yet. And I don't really know exactly how many XP or Vista have either, since I rarely use the former and the latter not at all.



  • @Mole said:

    @Evo said:
    What's so unbelievable about software that makes a system that is unstartable (e.g. Windows doesn't boot) work again?
    If they shipped a CD out to you to use then fair enough, but it's a web download (as stated in the first post), and from their claims, I'd say their target audience wouldn't know how to burn a disk image (if it even contains one) to a CD, make it bootable, and then actually use it.

    But a computer that won't start (i.e., won't power on) is not the same as a computer that won't boot.


  • @tdb said:

    Well, Windows is the most common point of comparison. Pardon me for assuming that's what you meant if it wasn't. It's well within the realm of possibility that Linux package managers have shortcomings without being compared to anything. If you have some specific points in mind, I'd be interested to hear of them so that the software can be improved.
     

    My biggest gripe is how if you install anything *outside* the package manager, it's likely to either f-up the package manager's libraries, or get f-ed-up in turn as the package manager changes library versions without compensating for your existing installs.

    And of course it completely cuts out commercial software developers, although being Linux I'm sure that's half the point of the thing.



  • @El_Heffe said:

    @Mole said:

    @Evo said:
    What's so unbelievable about software that makes a system that is unstartable (e.g. Windows doesn't boot) work again?
    If they shipped a CD out to you to use then fair enough, but it's a web download (as stated in the first post), and from their claims, I'd say their target audience wouldn't know how to burn a disk image (if it even contains one) to a CD, make it bootable, and then actually use it.

    But a computer that won't start (i.e., won't power on) is not the same as a computer that won't boot.

    Try telling that to your average Joe User, who says there's "nothing" on the screen when it's in fact showing a Windows desktop without any applications, and sends you an email about his internet connection being broken when someone has hijacked his browser startup page.


  • @blakeyrat said:

    My biggest gripe is how if you install anything outside the package manager, it's likely to either f-up the package manager's libraries, or get f-ed-up in turn as the package manager changes library versions without compensating for your existing installs.
    Never had that problem myself on the few occasions I've had to do it. For example, some set up instructions I've got on our work wiki for fresh development-box installations for a package that isn't in the distro:

    <download the source>
    tar -xvf libsmi-0.4.8.tar.gz
    cd libsmi-0.4.8
    ./configure
    make
    make install
    cd ..
    rm -rf libsmi-0.4.8 libsmi-0.4.8.tar.gz
    
    Granted (in this particular case) it's a package that's unlikely to get borked when things around it change, but there are a couple of other packages we use that aren't in the distro repository. (And I've found the maintainers of said distro to be most uncooperative when it comes to fixing blatant bugs in updates to the distro, let alone when you attempt to encourage them to add more stuff.)


  • @blakeyrat said:

    @tdb said:

    Well, Windows is the most common point of comparison. Pardon me for assuming that's what you meant if it wasn't. It's well within the realm of possibility that Linux package managers have shortcomings without being compared to anything. If you have some specific points in mind, I'd be interested to hear of them so that the software can be improved.
     

    My biggest gripe is how if you install anything outside the package manager, it's likely to either f-up the package manager's libraries, or get f-ed-up in turn as the package manager changes library versions without compensating for your existing installs.

    That's certainly true, especially in the case of libraries and drivers. A common case I've had to deal with a few times is when someone installs Nvidia drivers outside the package manager and then upgrades the Mesa OpenGL package, which now overwrites Nvidia's OpenGL. I suppose a (partial) solution would be to make the package manager keep track of the checksums of files and abort the upgrade or ask the user for confirmation if it detects a file that has been tampered with.
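
    Some of that checksum tracking already exists, at least for after-the-fact verification; a sketch, assuming a Debian- or RPM-based system (debsums is a separate package on Debian):

    # Debian/Ubuntu: report installed files whose checksums no longer match what the package shipped
    debsums -c
    # Red Hat/Fedora: verify sizes, checksums and permissions against the RPM database
    rpm -Va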

    Preventing the package manager from upgrading libraries from under a self-compiled application is a tougher problem. Fortunately most well-known libraries have a policy about versioning and breaking binary compatibility and allow having multiple versions installed in parallel. A scan of the system binary directories might prevent some cases of the package manager removing a library package as "unused" when the libraries are in fact in use by some custom programs.

    Applications have an easier time staying out of the package manager's way, as they can install themselves to a directory not under the package manager's control, such as /usr/local. Offering the user a way to change the installation directory is preferable. I've installed some games this way and the only problem I've had was when I forgot that libstdc++5 was required by UT2004. It was quickly reinstalled when I discovered the game no longer worked.

    @blakeyrat said:

    And of course it completely cuts out commercial software developers, although being Linux I'm sure that's half the point of the thing.

    I'm not quite sure what makes you think that? Opera is at least proprietary if not commercial, and they offer packages for various Linux distributions, even going as far as providing an apt source for Debian and Ubuntu. I have a vague memory of spotting some other commercial programs as packages as well. Distribution maintainers have various degrees of bias against closed-source packages (and for a good reason - they can't very well fix bugs in packages they don't have sources for), but then again a vendor that requests money for their software is unlikely to want it included in a distribution anyway. The specs of the package formats are freely available though, so certainly there's nothing on the Linux side to prevent commercial vendors from entering that market.



  • @PJH said:

    @blakeyrat said:
    My biggest gripe is how if you install anything *outside* the package manager, it's likely to either f-up the package manager's libraries, or get f-ed-up in turn as the package manager changes library versions without compensating for your existing installs.
    Never had that problem myself on the few occasions I've had to do it. For example, some set up instructions I've got on our work wiki for fresh development-box installations for a package that isn't in the distro:
    <download the source>

    tar -xvf libsmi-0.4.8.tar.gz
    cd libsmi-0.4.8
    ./configure
    make
    make install
    cd ..
    rm -rf libsmi-0.4.8 libsmi-0.4.8.tar.gz
    

    Granted (in this particular case) it's a package that's unlikely to get borked when things around it change, but there are a couple of other packages we use that aren't in the distro repository. (And I've found the maintainers of said distro to be most uncooperative when it comes to fixing blatant bugs in updates to the distro, let alone when you attempt to encourage them to add more stuff.)

    Compiling from source generally doesn't have that problem (although it can have other problems) since you are linking to whatever version of the library is installed.  Now, if your package manager upgrades the system library and that breaks binary compatibility, you'll have to re-compile.
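
    A quick way to check what a self-compiled binary actually linked against, and to spot such a break after a library upgrade (the binary here is just the libsmi tool from the example above, assuming the default /usr/local prefix):

    # show which shared libraries the binary resolves at run time
    ldd /usr/local/bin/smidump
    # anything listed as "not found" after a package upgrade means it's time to rebuild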



  • @tdb said:

    It's well within the realm of possibility that Linux package managers have shortcomings without being compared to anything. If you have some specific points in mind, I'd be interested to hear of them so that the software can be improved.

    Linux package management is a fucking joke.  Seriously, if you can't see that you've already drank the kool-aid.

     

    Bullet points!

    • Binary compatibility on GNU/Linux sucks, big time.  This is partly deliberate, to allow for more flexible development and to "break" proprietary software.  This makes library management on Linux a nightmare.  Package managers get around this by keeping several versions of each library, which is a kludge, at best.  It undermines the point of shared libraries when you have a dozen versions of the same library for a dozen different applications.  Plus, package managers are really retarded about removing necessary libraries, especially if you're installing something outside the package manager.  Now, obviously, the package manager can't easily know there are external dependencies (and there usually is a way to inform it of these dependencies, although by that point you're talking some really obscure stuff most people will never use).  However, the mere fact that we have to rely on a mediocre package management utility in the first place is a crime.
    • There's no fucking consistency between package managers.  Paths, names of packages, configuration locations, linked libraries, features, etc..  It's just fucking random.  Each meta-distro just makes it up as it goes along.  Attempts to unify things have been largely unsuccessful.  djb can be a bit of a crank, but he's absolutely right that distro maintainers should not be arbitrarily enforcing their ridiculous standards on packages, having legions of programmers who spend all their time hacking some application apart and gluing it back together the "redhat/debian/suse/etc. way".
    • Everything is out-of-date.  FOSS development moves fast, especially for popular applications.  Some are better at keeping ahead of the curve than others (Ubuntu is far better than Debian, for example) but it's still ridiculous that I have to rely on some package maintainer to update a package just so I can get a new version.  And often the less-popular packages are barely even touched, meaning you have to "go off the rails" even more often, risking "DLL hell".  See, one great thing about Windows is that when someone creates a successful application, they control the releases.  Having one, single authoritative release that works anywhere is a lot better than having dozens of releases that vary in all sorts of details, including latest version.  Having all releases funneled through a central "package authority" is bullshit.
    • Some developers just say "fuck it" and release their own versions directly.  They might have a .deb, .rpm and a tar.gz, but the core release is identical and everything is statically linked so they don't have to rely on the shitty package manager for libraries.  Of course, this is absurd.  Now I can choose between an out-of-date, arbitrarily-modified (IceWeasel anyone?) "official" package from my distro or I can go "off the rails" and install the bulky, statically-linked binary directly from the developer, which nicely circumvents the benefits of using shared libraries.  The Windows or Mac methods are hardly perfect, but I'm pretty sure having stable core libraries that can be linked against safely, allowing developers to control their own releases and to know it will "just work" in virtually all cases is superior to the GNU/Linux way.
    • Every single goddamn distro has a hard-on for reinventing the wheel.  "Hey, let's write our own mediocre replacements for init and cron!  And while we're at it, let's abandon the 'standard' Linux network management model and implement our own bulky daemon that does things completely differently and has its own obscure bugs and quirks, because that will make it easier for newbies!  And we'll integrate it so tightly with Gnome that it's impossible to even get a network connection if X isn't running!  I'm a fucking genius!"
    • The stupid, infuriating bullshit that results from all of this.  For example, rubygems is managed by the package manager on Ubuntu.  However, you are often stuck with a woefully outdated version of rubygems unless you want to go whole-hog and upgrade the entire fucking OS.  Oh, and since rubygems is itself a type of package manager, it has a built-in functionality to upgrade itself.  However, the Debian maintainers had to remove this because if an unsuspecting user were to upgrade rubygems via rubygems rather than apt, apt would shit kittens, so it's not even left as an option.  So your only options are 1) take an entire day and upgrade the entire OS and pray the house of cards doesn't implode (hint: it will); or 2) install the rubygems package directly from the developers so you can actually have a reasonably up-to-date version of the package that can actually self-upgrade as the rubygems developers intended.  Of course, then you're "off the rails" and might find out in a couple of months that apt decided to take a shit all over some library your "contraband" version of rubygems relied on.  And the thing is, it's not even really apt's fault.  It's just the inherent shittiness of package management on Linux.

     

    Now, I understand that a lot of these complaints are just inherent to GNU/Linux.  It's not Windows or Mac OS (which, frankly, have their own problems); there is no central authority that decides which idioms will be used, which libraries will be standard, etc.  However, these problems are still very real.  The fact is, Windows and Mac OS do a lot of things far, far better than the various distros.  Sometimes a distro will even have a brilliant idea, or will do one small thing better than Windows or Mac OS.  Overall, though, Linux is a mess.  The fact that 90% of the time I find it easier to hunt down the source and compile myself than to fuck around with a cranky, shitty package manager is pretty telling.  The fact that I find it preferable to build the entire system, ground-up, from source is a depressing statement on the general crappiness of package managers.  I'm lucky that I know enough about Unix that I can get away with ignoring the package manager.  And when going "off the rails" breaks something I can work around the problem, which might be a tad annoying to me, but for somebody with less knowledge it's painful.

     

    For the most basic end-user, the package manager might be good enough.  And for me, it's usually just an annoyance I can brush aside and work around, although it makes me feel real pity for people who aren't able to do that.  For your moderately-skilled user who still isn't skilled enough to completely ignore the package manager, it's a nightmare, and the worst part is the general attitude of Linux folks, in addition to their wanton blindness to the failures of Linux package management, results in the user being made to feel like he's a fucking idiot for not knowing how to resolve complex dependencies or how to install something outside the package manager and configure things correctly so it doesn't get stomped on.  And that's just a massive fucking joke.



  • @tdb said:

    Well, Windows is the most common point of comparison. Pardon me for assuming that's what you meant if it wasn't. It's well within the realm of possibility that Linux package managers have shortcomings without being compared to anything. If you have some specific points in mind, I'd be interested to hear of them so that the software can be improved.
     

    My gripe is that if you want to install a device whose driver ISN'T in the kernel, you have to rebuild the kernel.

    So, you have to download the kernel source, download a compiler, download 27 different library sources (which takes a while, since you don't know which ones you need until you try to build it, so it's a recursive process), work out how to compile it, then compile the kernel, then hope that things work and you don't end up having to start from scratch again.
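
    For what it's worth, when a driver is shipped as an out-of-tree module with a kbuild Makefile, it can usually be built against the installed kernel headers instead of rebuilding the whole kernel; a rough sketch, assuming Debian-style header packages and a hypothetical module name:

    # install a compiler and the headers matching the running kernel
    sudo apt-get install build-essential linux-headers-$(uname -r)
    # build the module against the running kernel's build tree (run from the driver's source directory)
    make -C /lib/modules/$(uname -r)/build M=$PWD modules
    # install it under /lib/modules, refresh module dependencies, and load it
    sudo make -C /lib/modules/$(uname -r)/build M=$PWD modules_install
    sudo depmod -a
    sudo modprobe mydriver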

    If you try to ask anyone for help with this (because it's not intuitive for 99.9999% of the population) you get looked at (metaphorically, since you won't be able to actually FIND someone who knows how to do it, so you'll have to ask online) as if you're stupid, and told to RTFM - but there isn't one, or if there is, no one can tell you where it is.  If there are docs, they'll say something like 'download the driver source, then recompile the kernel, then do x, y, z'

    I've been programming computers for 30 years, and even I lost 2 or 3 days of my life when I wanted to install a video capture card in Linux a couple of years ago. I'm not sure what an average person on the street could have done. I recently wanted to install a telephone card, and that was quicker - just a day and a half, but it didn't work, so I had to remove it and do a fresh Linux install to get it working again.

    With Windows, you download a file/insert a disk, and hey presto.

     

    Yes, Linux has many advantages over Windows, but ease of use isn't one of them.

    If you want a plain server with standard apps, then you're fine, but if you want anything special, you really need to know what you're doing.

     

     



  • @morbiuswilters said:

    @tdb said:

    It's well within the realm of possibility that Linux package managers have shortcomings without being compared to anything. If you have some specific points in mind, I'd be interested to hear of them so that the software can be improved.

    Linux package management is a fucking joke.  [...]

    And then there is portage...



  • @tdb said:

    My point with this was that Windows downloads stuff from the net too, not that Linux would somehow magically be bug-free and complete from day one.

     

     Wait, what? ...you mean it is not actually true that Linux has no bugs, doesn't ever suffer from disk fragmentation or crashes and is always faster and better than Windows? Oh Woe! I bet it won't even make my penis larger like Jeff K had promised, then :-(

    @tdb said:

    @C4I_Officer said:
    Linux distros may appear able to do just that, but actually they merely have more generic built-in support for certain chipsets (VIA, nVidia) that Windows does not have, plus they often download stuff as they install, which is not standard procedure on Windows.

    That sentence struck me as criticism of the common practice of Linux distribution vendors to provide netinstall disks as the favoured method of installing. I provided counterpoints (or at least tried to) by noting that (1) there are full offline install disks available for Linux as well (all right, the original post has the word "often" in it), and (2) unlike what the above sentence insinuates, downloading stuff is standard procedure on Windows (although it happens after the bulk of the installation is completed).

    In short: Yes, Linux netinstallers download stuff from the net. Windows downloads stuff from the net too. I don't see how downloading stuff from the net is a bad thing



    It's not bad, but it's not a very good thing either when you need to have pure disk-based or LAN-based installations. And before suggesting using pre-made disk images, think "heterogeneous" here. Exactly: there's no magic preinstalled disk image that will work on anything from a Pentium II to a Core 2 with the Windows apps we need, so everything needs to be installed by hand - or at most by automated scripts - with full access to installation disks and no internet access. And no, I'm not setting up a repository server just to install software on a couple of "new" workstations.

     Yes, there are some downloader titles on Windows too (Google Chrome and MSN Live are the most notable), but I really try to avoid them and get offline, standalone installers when possible. I guess because of this whole "distributed computing" thing, you actually have to dig around to find those.



  • @El_Heffe said:

    But a computer that won't start (i.e., won't power on) is not the same as a computer that won't boot.
    Did you not notice that the same company sells software that automatically repairs your motherboard and power supply? It's completely automatic from a user-friendly GUI too. Just $999 for instant access via download. 

    As for Linux, I hate package managers. Every distro has its own, and even if you stay with the same one, give it 6 months and the default repo is no longer maintained, as they brought out a newer version of the distro where the repo is not backwards compatible. I've had this with Red Hat. Installed as a server: brilliant, no problems, no crashes. Come to upgrade MySQL and they only support the latest Red Hat, not the version installed on the server. Meaning you have to go back to .tar.gz files, as they are the only things generic enough to be compatible.

    So I scrap Linux and install Windows Server. Sure it costs more to install, but it costs less to maintain. 



  • @Nelle said:

    @morbiuswilters said:

    @tdb said:

    It's well within the realm of possibility that Linux package managers have shortcomings without being compared to anything. If you have some specific points in mind, I'd be interested to hear of them so that the software can be improved.

    Linux package management is a fucking joke.  [...]

    And then there is portage...

    Eh, sort of.  Portage gets around a lot of the little irritations of binary package managers but ends up being far less accessible and having a higher learning curve.  Don't get me wrong, portage is pretty awesome and I sometimes crib config options from its packages as a starting point for my own install scripts, but it's hardly an easy tool to use if you are not already familiar with GNU/Linux.  And, it still ends up with some of the same problems of shared library hell, although it's less likely to occur than with a binary package manager and oftentimes it can be fixed by just rebuilding the problematic libraries and any apps that use them.  Of course, that brings us to another problem: it's fucking slow.  Sure, I don't mind that since I'm compiling everything myself, but most users expect a Firefox upgrade to take less than an hour.

     

    Really, I think portage proves my original point perfectly: it's the best package manager on Linux (imho) and it does this by dispensing with binary packages almost completely.   It's basically a series of shortcuts (extremely short shortcuts, really) for building everything by-hand.  How fucked is package management on Linux when portage is actually the best tool for the job, and not just an uber-geek toy that's fun to play with?



  • @tdb said:

    Linux also has a vastly different philosophy with drivers. The majority of all existing hardware drivers are actually in the kernel tree, making it very easy for distributors to create a kernel that has all the drivers compiled as modules. In many cases a single driver can handle a whole bunch of similar hardware, like multiple revisions of a SATA controller. And since the drivers are just drivers and not bundled with a metric fuckton of crapware, the resulting bundle has a manageable size.

    Network install also makes sense given the robustness and flexibility of the various package managers available for Linux. Most distros do offer offline installation discs, but with a network install you can only download what you need and will get the latest versions of the software. Windows, on the other hand, is just a giant blob for the most part, with relatively few optional features. And if it's been more than a week since that version was released, there will likely be hundreds of patches waiting to be downloaded in Windows Update.

     I wholeheartedly agree with some of the rants on Linux package managers, especially all the variations, and with the fact that the interfaces to the kernel are a permanently moving target, which makes binary drivers so much fun. As for package managers, the Windows MSI stuff is just as big a mess.

    I have something else that I noticed makes Linux kernel drivers a lot simpler:

    Their level of hardware abstraction is lower compared to Windows

    My personal experience is with modems. Linux just implements a basic character device and you're on your own; you can pray that ATDT dials a number. In Windows you can tell the OS (or get the commands from the registry; the TAPI is a clusterfuck) to "call this number" and the driver will solve that for you. I've had to do exotic things with modems, and then Linux ends up being useless, as the information about the commands to set data/parity/stop bits, control connection speed, flow control, handshake order and such just isn't available.

    Often the kernel is simply a lot more primitive than the Windows APIs.
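
    To illustrate how low-level that character device is, this is roughly what driving a modem looks like from a shell (device path and settings are illustrative):

    # configure the serial line behind the modem: 57600 baud, 8 data bits, no parity, 1 stop bit, hardware flow control
    stty -F /dev/ttyS0 57600 cs8 -parenb -cstopb crtscts
    # after that you're on your own with raw Hayes AT commands
    printf 'ATDT5551234\r' > /dev/ttyS0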

     (I use a Mac, but I have no experience coding for it, so I can't compare there; they do make things easy for the users, though.)



  • @morbiuswilters said:

    Plus, package managers are really retarded about removing necessary libraries, especially if you're installing something outside the package manager.  Now, obviously, the package manager can't easily know there are external dependencies (and there usually is a way to inform it of these dependencies, although by that point you're talking some really obscure stuff most people will never use).
    You could always make your own package; of course, it doesn't help that Debian's package creation process is 17 points long, Arch's is 5 IIRC (edit PKGBUILD, run makepkg, pacman -U newpackage.pkg.tar.gz - OK, so only 3), and Red Hat's is similar to Arch's.
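
    For anyone who hasn't seen the Arch side of that: a minimal PKGBUILD sketch (every field value here is made up for illustration); you edit it, run makepkg, and install the result with pacman -U:

    # Minimal PKGBUILD for a hypothetical package
    pkgname=hello-custom
    pkgver=1.0
    pkgrel=1
    arch=('x86_64')
    source=("$pkgname-$pkgver.tar.gz")
    md5sums=('SKIP')
    build() {
      cd "$srcdir/$pkgname-$pkgver"
      ./configure --prefix=/usr
      make
    }
    package() {
      cd "$srcdir/$pkgname-$pkgver"
      make DESTDIR="$pkgdir" install
    }
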
    @morbiuswilters said:
    There's no fucking consistency between package managers.  Paths, names of packages, configuration locations, linked libraries, features, etc..  It's just fucking random.  Each meta-distro just makes it up as it goes along.  Attempts to unify things have been largely unsuccessful.  djb can be a bit of a crank, but he's absolutely right that distro maintainers should not be arbitrarily enforcing their ridiculous standards on packages, having legions of programmers who spend all their time hacking some application apart and gluing it back together the "redhat/debian/suse/etc. way".

    Of all your points I agree with this the most. Of course not every distro does this; AFAIK Arch, Gentoo, Lunar and Slackware are all examples of distros that avoid it (Arch does break qt into qt and qt-doc, but that's because qt-doc is 150MB).
    @morbiuswilters said:
    Everything is out-of-date.
    Depends on the distro; the mainstream ones do this for a reason, which is that they have an ass-fuck retarded backports scheme.
    @morbiuswilters said:
    Every single goddamn distro has a hard-on for reinventing the wheel.  "Hey, let's write our own mediocre replacements for init and cron!  And while we're at it, let's abandon the 'standard' Linux network management model and implement our own bulky daemon that does things completely differently and has its own obscure bugs and quirks, because that will make it easier for newbies!  And we'll integrate it so tightly with Gnome that it's impossible to even get a network connection if X isn't running!  I'm a fucking genius!"
    Amen. Also, this is a retardation that's mostly limited to the "mainstream" distros. Most of the more obscure distros have something simple (editing /etc/rc.local or similar).

    @morbiuswilters said:
    And the thing is, it's not even really apt's fault.  It's just the inherent shittiness of package management on Ubuntu/Debian.

    FTFY.



  • @Mole said:

    As for Linux, I hate package managers. Every distro has its own, and even if you stay with the same one, give it 6 months and the default repo is no longer maintained, as they brought out a newer version of the distro where the repo is not backwards compatible. I've had this with Red Hat. Installed as a server: brilliant, no problems, no crashes. Come to upgrade MySQL and they only support the latest Red Hat, not the version installed on the server. Meaning you have to go back to .tar.gz files, as they are the only things generic enough to be compatible.

    There are two main types of package management schemes. There's the retarded backports system that all the mainstream distros use (which has a name that I'm too lazy to look up), which is exactly what you're describing; then there's a rolling-release system where the distro has no version number, which sounds like what you want (Arch does this, I think Gentoo does too, maybe Lunar).



  • @morbiuswilters said:

    Really, I think portage proves my original point perfectly: it's the best package manager on Linux (imho) and it does this by dispensing with binary packages almost completely.   It's basically a series of shortcuts (extremely short shortcuts, really) for building everything by-hand.  How fucked is package management on Linux when portage is actually the best tool for the job, and not just an uber-geek toy that's fun to play with?
    There are other source-based package management schemes. IMO one of the best tools for this is actually yaourt, which is a front-end for both Arch and the AUR. It uses Arch's pacman for binary packages and the AUR (which isn't a package manager, it's just a repository of PKGBUILDs, the build scripts makepkg uses to create new packages) for source-based packages when there isn't a usable binary version.


  • @morbiuswilters said:

    The fact that 90% of the time I find it easier to hunt down the source and compile myself than to fuck around with a cranky, shitty package manager is pretty telling.  The fact that I find it preferable to build the entire system, ground-up, from source is a depressing statement on the general crappiness of package managers. 
    Isn't this, in essence, what all those repository maintainers do? Only ever so slowly, which is why the stuff in the repositories is so often out of date. Even 'critical' updates to things like Firefox take a while to wend their way through.



  • @morbiuswilters said:

    Every single goddamn distro has a hard-on for reinventing the wheel.  "Hey, let's write our own mediocre replacements for init and cron!
     

    Last time I used Linux, and given it's been awhile, cron was completely ignorant of the concept of power management. If your computer was suspended when the event ticked, it wouldn't happen even after your computer woke up. (Given: depending on the event, that might be what you wanted. But consider a cron job that makes backups, or runs a virus/disk scan... you want that to happen next time it can, not just to give up because the computer was asleep.)

    Point is: rewriting it isn't necessarily a bad idea, as long as the rewrite is better than the original.

    Or maybe they're just counting on the fact that Linux doesn't support suspend/hibernate on most hardware. :)

    It's one of those things that Linux fans insist works perfectly, so you grill them a little bit, and find out all they've ever tried it with is servers, or they've only tried the absolute most dead-simple use-case.

    I have similar complaints about multiple-monitor support: "Oh it works perfectly!" "Yeah, with monitors that are the same size. Now try one monitor at 1680x1050 and another at 1280x1024. Now it don't work worth shit."

    And it used to be the same with copy and paste: "Copy and paste works perfectly!" "Yeah, with text. Now try copying some spreadsheet cells and pasting them into a bitmap paint program." (I think copy and paste bugs are mostly ironed out.)

    And hardware support: "All my laptop hardware works perfectly!" "Yeah, on your Lenovo, which is the brand that 95% of Linux users own. Now try the OS on an HP laptop... oops suddenly nothing works!"

    Linux is basically what happens when you code an entire OS/desktop environment with absolutely no concept of QA. Even on servers now, I much prefer Windows, just because stuff works much more often. (And yes, I know that Linux servers stay working longer, but my servers are all Amazon AWS VMs, so I really don't care about that... if it stops working, I just spawn a new one.)


  • @blakeyrat said:

    Last time I used Linux, and given it's been awhile, cron was completely ignorant of the concept of power management. If your computer was suspended when the event ticked, it wouldn't happen even after your computer woke up. (Given: depending on the event, that might be what you wanted. But consider a cron job that makes backups, or runs a virus/disk scan... you want that to happen next time it can, not just to give up because the computer was asleep.)
    This problem was sorted a while back. It's called anacron. That was also sorted a while back. It's called fcron.
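
    For reference, anacron's whole configuration is a table of periods and jobs, and anything missed while the machine was off or asleep gets run at the next opportunity; a minimal /etc/anacrontab sketch (the job names and scripts are made up):

    # period(days)  delay(minutes)  job-identifier  command
    1               5               daily.backup    /usr/local/bin/nightly-backup.sh
    7               15              weekly.scan     /usr/local/bin/disk-scan.sh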



  • @PJH said:

    This problem was sorted a while back. It's called anacron. That was also sorted a while back. It's called fcron.
     

    See, this is one of the things I don't get about Linux. Why didn't they just fix cron instead of making two new probably-bad tools? How's a new user supposed to know which to use for their task?

    The Windows Scheduled Tasks tool, while it has an EVIIIL GUI!, manages all those issues all by its lonesome. Of course it probably stores its data in a more sophisticated way than cron's crudely-formatted text file.
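
    For comparison, the crudely-formatted text file in question; a couple of illustrative crontab entries (the scripts are made up):

    # min hour day-of-month month day-of-week  command
    30  2   *  *  *    /usr/local/bin/nightly-backup.sh
    0   3   *  *  sun  /usr/local/bin/disk-scan.sh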



  •  Sorry I'm grumpy this morning. Viewing that ass-tacular Java GUI in the other thread, and realizing that thousands of people are subjected to using it every day, ruined my morning.



  • @morbiuswilters said:

    For your moderately-skilled user who still isn't skilled enough to completely ignore the package manager, it's a nightmare...
     

    This is about my skill-level, and this is basically why I just don't end up using linux day-to-day. I don't have the interest (or time) to want to learn how to be a through-and-through linux admin, which is what is necessary to really have good control over things it seems. I do have a linux machine I use for basic, simple home-network stuff, but it's all pretty stock standard, and I still worry when I do an apt-get upgrade what it is going to fuck up. 

    I have often wondered why there aren't more standards for doing things in linux. Like, your distro can do it however you want, but if you want some sort of "seal of approval" you need to do things a particular way. Is it just because it's like trying to herd cats? It would be nice for one linux distro to have enough traction to be the "default" and that becomes the standard due to momentum.



  • @EJ_ said:

    I have often wondered why there aren't more standards for doing things in linux. Like, your distro can do it however you want, but if you want some sort of "seal of approval" you need to do things a particular way. Is it just because it's like trying to herd cats? It would be nice for one linux distro to have enough traction to be the "default" and that becomes the standard due to momentum.

    There's the LSB, but that's headed by people who think the /etc/init.d/ and /etc/rcX.d/ directories are a sane way to control startup, and that everyone should be using RPM. There's also the FHS, but that's only for filesystem layout. Sure some people will still try to do LSB, but that doesn't mean it's a decent standard.
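
    For context, the startup convention being criticized looks like this on a SysV-style box (the service name is just an example):

    # services are plain shell scripts...
    ls /etc/init.d/
    # ...and each runlevel is a directory of ordered S(tart)/K(ill) symlinks pointing back at them
    ls -l /etc/rc2.d/
    # e.g.  S20ssh -> ../init.d/ssh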



    LSB: Linux Standards Base

    FHS: Filesystem Hierarchy Standard



  • @blakeyrat said:

    @PJH said:

    This problem was sorted a while back. It's called anacron. That was also sorted a while back. It's called fcron.
     

    See, this is one of the things I don't get about Linux. Why didn't they just fix cron instead of making two new probably-bad tools? How's a new user supposed to know which to use for their task?


    @EJ_ said:

    I have often wondered why there aren't more standards for doing things in linux. Like, your distro can do it however you want, but if you want some sort of "seal of approval" you need to do things a particular way. Is it just because it's like trying to herd cats? It would be nice for one linux distro to have enough traction to be the "default" and that becomes the standard due to momentum.

    That's a curse of free software, unfortunately. Different people have different ideologies on how, say, a desktop environment should work. If you try to declare one alternative the standard, half of your user base will tell you to fuck off. If you're the first one to write a piece of software for some specific purpose, soon someone else will have an idea how to do that thing better. Perhaps they'll contact you and ask you to make the change. If you refuse, someone will either fork your codebase (if the license allows that) or write a new program from scratch. Then there are factors like the original developer losing interest in the project, NIH syndrome and quality of the codebase.

    There's also the fact that writing an operating system with even a basic application suite is a huge task. If it's to be done in a coordinated manner to produce a coherent result, it requires people to make decisions and oversee the development. And since hobbyist coders generally just want to code, you'll have to hire those managers and pay them money. It's a huge investment, and if you're going to give the product away for free, you won't ever get your money back. There aren't many people in the world with that kind of financial resources, and it seems like none of them has the motivation.

    PS. Sorry if this seems disorganized, I had a lot of thoughts about this in my head but it seems like I'm suffering from writer's block or something. 40 minutes of hard thinking and two short paragraphs is all I can get out...



  • Some free software projects make it work. Firefox, for example, has a strong and good community that actually is focused on customer experience and ease-of-use. There are also lots of free software projects on OS X that make it work, like Adium.

    Ironically, Firefox is technically behind other browsers but has good usability, while the average free software project is technically advanced but has ass-poor usability.

    The problem is more "Linux developers" and not "free software developers." I don't think there's any inherent problem with free software that makes integrated, usable software impossible.



  • @blakeyrat said:

    The problem is more "Linux developers" and not "free software developers." I don't think there's any inherent problem with free software that makes integrated, usable software impossible.

    I think you may be confusing something. First of all, a Linux developer implies someone who writes only for Linux, which is very rare (even in the FOSS community). Secondly, if you're thinking of KDE/Gnome/Firefox when you state that, then you have to realize that they are overseen by corporations, they have a professional development environment, and the devs get paid. The bulk of the projects are obscure (e.g. conky), but they provide information that isn't needed by an ordinary user; they are power-user tools, and that is the audience. The bulk of the GUI tools are outside the view of anyone who could oversee them and provide usability feedback, which is why open-usability exists.



  • @Lingerance said:

    @blakeyrat said:

    The problem is more "Linux developers" and not "free software developers." I don't think there's any inherent problem with free software that makes integrated, usable software impossible.

    I think you may be confusing something. First of all, "Linux developer" implies someone who writes only for Linux, which is very rare (even in the FOSS community). Secondly, if you're thinking of KDE/Gnome/Firefox when you state that, then you have to realize that those projects are overseen by corporations, they actually have a professional development environment, and the devs get paid. As for the bulk of the projects, many of them are obscure (eg: conky) and provide information that isn't needed by a regular user; they are power-user tools, and that is their audience. Most of the GUI tools are outside the view of anyone who could oversee them and provide usability feedback, which is why open-usability exists.

    I'd like to add that what I really meant was an entire F/OSS operating system, complete with a comprehensive suite of applications. Taken one at a time, many applications are good, and some even have a user-friendly UI ("for hackers by hackers" is another issue, but I won't get into it now). However, when you try to make an entire OS distribution, higher-level problems like the multiple choices of software for a single task become apparent.

    But isn't having choices a good thing, you say? It is, to an extent. On the level of end-user applications, choices allow users to choose an application that works best for them. However, when the choices penetrate all the way down to system utilities and even the OS kernel (Debian has a variant with FreeBSD kernel), it prevents the operating system from forming a clear identity. There's no such thing as "the Linux operating system"; hell, even "Debian GNU/Linux" doesn't really tell you anything about what's really on the system. Ubuntu is doing somewhat better, so perhaps there's hope for having an F/OSS operating system yet.



  • @blakeyrat said:

    Or maybe they're just counting on the fact that Linux doesn't support suspend/hibernate on most hardware. :)

    [citation needed]  I've used S3 on several laptops and it works really well.  It actually does resume better than the MBP I had to use for a few months, which would crash out every 2 dozen resumes or so.

     

    @blakeyrat said:

    I have similar complaints about multiple-monitor support: "Oh it works perfectly!" "Yeah, with monitors that are the same size. Now try one monitor at 1680x1050 and another at 1280x1024. Now it don't work worth shit."

    Yes and no.  Conceptually, multiple-monitor support on Linux is insane.  Basically, it creates a large "virtual monitor" and then renders sections of that to the actual screens.  With different-sized monitors it technically works fine, but behind the scenes it's actually rendering a fair amount of screen which is just ignored since it is "invisible".  Sorry, not explaining this well, but for example let's say you have a 1920x1080 and an 800x600.  You would get a virtual monitor of (1920 + 800)x1080.  The 1920x1080 screen would display the left part of that, the 800x600 screen would display the right part, but you still end up with an 800x480 section underneath that isn't displayed, even though it is rendered.  And, yes, it's possible to get windows "stuck" in that area.
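
    (Not from the original post, just an illustration: a rough Python sketch of the geometry described above. The sizes are the example numbers from the post, and it only models the simple side-by-side case; a real X setup can position the screens arbitrarily.)

        # Rough sketch of the "one big virtual screen" model described above,
        # assuming two monitors placed side by side (left edge to right edge).
        def virtual_screen(left, right):
            lw, lh = left
            rw, rh = right
            vw = lw + rw      # widths add up side by side
            vh = max(lh, rh)  # height is that of the taller monitor
            # The shorter monitor leaves an undisplayed strip below it: as wide
            # as that monitor, as tall as the height difference.  That strip is
            # still part of the virtual screen, so windows can end up there.
            shorter_w = rw if rh < lh else lw
            dead = (shorter_w, abs(lh - rh)) if lh != rh else (0, 0)
            return (vw, vh), dead

        (vw, vh), (dw, dh) = virtual_screen((1920, 1080), (800, 600))
        print(f"virtual screen: {vw}x{vh}")    # 2720x1080
        print(f"invisible area: {dw}x{dh}")    # 800x480

    With the example sizes this prints a 2720x1080 virtual screen and an 800x480 invisible strip, matching the numbers above.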

     

    Oh, and that's only one of several schemes for multiple monitors on X, although it is becoming the standard.

     

     

    @blakeyrat said:

    And hardware support: "All my laptop hardware works perfectly!" "Yeah, on your Lenovo, which is the brand that 95% of Linux users own. Now try the OS on an HP laptop... oops suddenly nothing works!"

    Never used Lenovo, never had a big problem with Dell or HP.



  • And as for choice...

    With Linux you have choice, you can use either Linux or Windows.

    But...

    Vit zer Vindows you haf not ze choice, only ze Vindows.

     

    And so...

     

    In order to preserve that choice we must force everyone to use Linux!



  • @morbiuswilters said:

    Never used Lenovo, never had a big problem with Dell or HP.

     

    I recall laptop support was horrible in the early (2001-2002) days: I still have a Compaq Presario 910EA which never worked with any of the distros of the day, because they all bombed when they tried loading the driver for the Texas Instruments CardBus controller. Apparently I had to manually intervene in the middle of the install, disable it, create a special installation floppy and then reboot... no thanks, I kept the (then new) XP.

    OTOH, on my Dell Inspiron 1720, Linux actually works better with the Intel 3945ABG wireless adapter than the Vista drivers do: neither Intel nor Microsoft bothered fixing the WPA disconnection bug, which makes connecting to WPA-secured networks with that adapter impossible under Vista (it works under XP; 7 must have the same problem though), and for some obscure reason they don't provide a wireless manager utility for Vista/7, only for XP.

    Ubuntu Linux 64-bit had no problem detecting and connecting to WPA2-secured access points with the above adapter, and even achieved much better speeds than Windows XP over encrypted connections (Windows XP couldn't reach the nominal bandwidth even when downloading Ubuntu torrents from 100 seeds).
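
    (Again not from the original post, just for illustration: distributions of that era hand Wi-Fi authentication to wpa_supplicant behind the scenes, and the WPA2 pre-shared key is derived from the passphrase with PBKDF2. The sketch below mimics what the real wpa_passphrase utility prints; the SSID and passphrase are made-up placeholders.)

        # Hedged sketch: derive a WPA2-PSK pre-shared key the way wpa_passphrase
        # does (PBKDF2-HMAC-SHA1, 4096 iterations, 256-bit key, SSID as salt)
        # and print a minimal wpa_supplicant.conf network block.
        import hashlib

        ssid = "ExampleNetwork"          # placeholder SSID
        passphrase = "correct horse"     # placeholder passphrase (8-63 chars)

        psk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                                  4096, 32).hex()

        print("network={")
        print(f'    ssid="{ssid}"')
        print(f"    psk={psk}")
        print("    key_mgmt=WPA-PSK")
        print("}")

    The output is the kind of block wpa_supplicant.conf expects; NetworkManager on Ubuntu drives wpa_supplicant to do essentially the same thing behind its GUI.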



  • @C4I_Officer said:

    I recall laptop support was horrible in the early (2001-2002) days:

    True.  And you also have to stick with solid hardware: Intel wifi, NVIDIA or Intel graphics, etc.  More obscure hardware will not have good drivers, so if you are even considering Linux, check hardware compatibility before you buy.



  • @tdb said:

    On the level of end-user applications, choices allow users to choose an application that works best for them. However, when the choices penetrate all the way down to system utilities and even the OS kernel (Debian has a variant with FreeBSD kernel), it prevents the operating system from forming a clear identity. There's no such thing as "the Linux operating system"; hell, even "Debian GNU/Linux" doesn't really tell you anything about what's really on the system. Ubuntu is doing somewhat better, so perhaps there's hope for having an F/OSS operating system yet.
     

    I wholly agree with this.

    It's the reason I still have Ubuntu as a toy OS on a toy box, and only VNC to it once in a while for fun, curiosity and/or system management.

    On the other hand, my computer-noob roommate's Windows machine broke, so I gave him my Ubuntu 9 live CD to bridge the gap until I can help him reinstall Windows, and he seems to be using it nicely. That's because all he does is browsin' and emailin', I think.



  • @blakeyrat said:

    And hardware support: "All my laptop hardware works perfectly!" "Yeah, on your Lenovo, which is the brand that 95% of Linux users own. Now try the OS on an HP laptop... oops suddenly nothing works!"
     

    For the last decade, I've used Linux on half a dozen notebooks from 4 different vendors (neither Lenovo nor HP though) and it always worked pretty well. The last one, a cheap Acer, caught me out, though: everything worked fine out of the box with Ubuntu 9.04, that is, everything but the LAN adapter. Since I use WLAN at home, I didn't notice until I was at a customer's site and couldn't connect... Nothing a little driver download and compile wouldn't solve, but that day was mostly ruined for me.

