WTF is happening with Windows 10? And nothing else



  • @JBert said:

    You seem to have never seen a user start panicking when an app goes fullscreen and their basic instinct of "close this, now!" kicks in. They quickly realize they can't spot the close button and get in a real frenzy.

    A frenzy?

    You know the thing I like best about where I live is people are all laid-back all the time. I can't imagine someone going into a "frenzy" over a window opening up on a computer.


  • ♿ (Parody)

    @blakeyrat said:

    I can't imagine someone going into a "frenzy" over a window opening up on a computer.

    Uh huh. What if it tried to use %HOME%?



  • The humanity!


  • kills Dumbledore

    @Mikael_Svahnberg said:

    The humanity!





  • @JBert said:

    (or is this even a Windows 8.1 feature, explaining outcry after 8's release?)

    It is, implemented because of the outcry. Also, Win 8 Beta had no way to close apps at all, as a test of their memory management system. The system passed, but the UX failed.



  • @TwelveBaud said:

    The system passed, but the UX failed.

    In 8, the UX for closing apps was downright insane on desktop (drag down from the top edge, WTF?!). Like with many other things, they refined it in 8.1, but by that time the damage was done.

    I've complained about Modern experience on desktops before. The worst thing they did is that default associations in Explorer pointed to Modern apps, which made the whole "ignore everything Modern on desktop" thing a bit more difficult to achieve, and made people hit Modern UI exactly when they wanted it the least.

    Also, zooming out a photo still brings you back to the library. WHAT WERE THEY THINKING.



  • @Maciejasjmj said:

    In 8, the UX for closing apps was downright insane on desktop

    Or alt+f4

    @Maciejasjmj said:

    (drag down from the top edge, WTF?!)

    Agreed.

    What was really annoying in 8.0 was the tutorial "helpers". "Just swipe from the edge" Goddamnit, I can't.



  • @Maciejasjmj said:

    The worst thing they did is that default associations in Explorer pointed to Modern apps

    It's worse than that. By having that be the default, you break every program that spawns and waits (via ShellExecute). Because you can't wait for a modern app.
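    The broken contract is spawn-and-wait: on Windows that's ShellExecuteEx with SEE_MASK_NOCLOSEPROCESS followed by WaitForSingleObject on the returned handle. A rough cross-platform sketch of the same pattern (Python's subprocess standing in for the Win32 calls; purely illustrative, not the actual API):

    ```python
    import subprocess
    import sys

    # Spawn a "viewer" and block until it exits -- the classic desktop
    # pattern. A Modern app launch hands the caller back a broker process
    # that exits immediately, so this wait returns long before the user
    # has actually finished with the document.
    viewer = subprocess.Popen([sys.executable, "-c", "print('viewing file...')"])
    viewer.wait()  # blocks until the spawned process exits
    print("viewer exit code:", viewer.returncode)
    ```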



  • @Maciejasjmj said:

    zooming out a photo still brings you back to the library. WHAT WERE THEY THINKING.

    Do either 8.1 or 10 still have the thing where if you go to change your avatar, you get a Metro screen full of utterly useless generic picture icons instead of thumbnails?


  • FoxDev

    I can't speak for Win10, but I know for a fact that Win8(.1) shows proper thumbnails



  • Never has for me on a fresh install, not even once.

    When you do an 8.x install, do you create/use a MS account or force the use of a local-Windows-only account? That's the only non-default setting I consistently pick.


  • FoxDev

    I use an MS account


  • kills Dumbledore

    8.1, no MS account (work PC though so it's an AD account. Don't know if that changes things)


  • FoxDev

    @flabdablet said:

    When you do an 8.x install, do you create/use a MS account or force the use of a local-Windows-only account? That's the only non-default setting I consistently pick.

    that is annoyingly obscure how to trigger in the 8.1/10 installer. (basically the only way i've found to do it consistently is to unplug the network while installing. you can also do it by trying to login to some @example.com email address and when that fails selecting local account, but that doesn't always work.)



  • I've always done it by telling the installer I don't already have an account, then clicking on the unobtrusive link on the new account creation page that lets you go without.


  • FoxDev

    @flabdablet said:

    then clicking on the unobtrusive link on the new account creation page that lets you go without.

    that's one hell of an unobtrusive link then... because i haven't found it yet.

    I'll be doing a couple of VM installs this weekend though and i'll look out for it.


  • Discourse touched me in a no-no place

    @accalia said:

    that is annoyingly obscure how to trigger in the 8.1/10 installer.

    It's not too hard, but it's not intuitive: choose a Microsoft account, and then you get an option to say "no, thanks, let me have a regular one".

    You should not be surprised that that comes from the company that has you click start to shut down.


  • Discourse touched me in a no-no place

    @flabdablet said:

    unobtrusive

    Congratulations, you won the understatement of the hour award.


  • FoxDev

    @FrostCat said:

    It's not too hard, but it's not intuitive:

    which is why i used the word "obscure" :-P


  • Discourse touched me in a no-no place

    @FrostCat said:

    not intuitive

    I think you won “understatement of the hour” back with that…



  • @accalia said:

    that is annoyingly obscure how to trigger in the 8.1/10 installer.

    The Introducing Windows 10 Preview pdf (at http://www.microsoftvirtualacademy.com/ebooks) talks about account types starting on page 34. I've been using my MS account, so I don't know how different 10 is from 8.1... (Also, who knows if that has changed in subsequent builds...)



  • Your just jealous that the visions chose me.



  • @Buddy said:

    Your just jealous that the visions chose me.

    *twitch*



  • @dcon said:

    @Buddy said:
    Your just jealous that the visions chose me.

    *twitch*

    This guy knows.



  • Network World Article on this here.

    I like this excerpt:

    So far, not bad, only minor issues requiring the occasional reinstall. Then I realized my PC was awfully quiet. There was a speaker icon in the system tray, but all the audio buttons were grayed out. On left click there is an option to troubleshoot audio problems. What came up? "An error occurred while troubleshooting. A problem is preventing the troubleshooter from starting."

    The problem isn't new, either. I found a blog post that shows Creative Labs problems have existed since the beginning of the year. "Just wanted to let you know, we are working on the Creative issues that popped up in 9926. There are several threads on this forum that mention them. Thanks!" wrote a Microsoft staffer in a January post. Six months later, sound drivers for the most widely-used brand of sound cards are still broken.

    Way to go, devs.



  • I did get one pleasant surprise: Wise Care 365 and Wise Memory Optimizer both work.

    Using a memory optimizer in 2015 automatically invalidates whatever this guy has to say.



  • @Maciejasjmj said:

    Using a memory optimizer in 2015 automatically invalidates whatever this guy has to say.

    So he likes to waste his time. We can "help" him more by recommending he defrag his SSD. :trollface:


    Filed under: Leaves him less time to make reviews



  • This third party disk defragmenter still produces noticeable performance gains for Windows installations on mechanical drives; it's pretty much the only performance optimizer I've ever seen that actually achieves what it claims to.



  • @Maciejasjmj said:

    Using a memory optimizer in 2015 automatically invalidates whatever this guy has to say

    Except this, about which he is 100% correct:

    Another thing, and I haven't seen this mentioned much, is that Windows 10 is aesthetically boring, spare and ugly. App windows look like something out of 1990s era X-Windows. This thing looks like a first revision by a first year UI designer. It's just plain ugly and it makes all of the apps look ugly.

    I didn't think it was possible for the 7 -> 8 uglification to get worse; 10 proved me conclusively wrong.


  • Something that actually does a little more than Defraggler. Nice!

    Return tip: If you ever have to recover a crashed mechanical drive where the controller and mechanics still work, your best chance is Spinrite. Yes, you have to pay; that said, I've recovered a number of systems from the dead with it.



  • @redwizard said:

    If you ever have to recover a crashed mechanical drive where the controller and mechanics still work, best chance is Spinrite.

    Personally I prefer GNU ddrescue, but I'm a CLI kind of guy.

    Edit: Jesus fn wept I have just sat through Steve Fn Gibson taking 12 f*n minutes to handwave (literally) his way around the fact that all his "magic" Spinrite actually does is force a drive to rewrite and/or reallocate unreadable sectors after hammering it with enough retries to generate some kind of semi-educated guess at their contents.

    And he kind of glosses over the fact that Spinrite rewrites data in place once it's guessed what's supposed to be there. So you take a drive that's growing defects fast enough to be troublesome, and then you run his magic thing on it in order to silently corrupt a bunch of sectors with data that fudges the same ECC ("interpolate", my huge wobbly arse!) and then you put it back in service? :wtf:

    If you really, really want to do this, of course you can do it with ddrescue as well. But surely the first instinct of any responsible technician or sysadmin when confronted with a failing drive is to copy as much of it as possible to something else.

    One of the nicest things about ddrescue is that it prioritizes reading what can be read over what can't; the first pass copies the entire drive using big reads with no error retries, and only once that's finished does it come back to places it couldn't read to have another go.



  • @flabdablet said:

    And he kind of glosses over the face that Spinrite rewrites data in place once it's guessed what's supposed to be there. So you take a drive that's growing defects fast enough to be troublesome, and then you run his magic thing on it in order to silently corrupt a bunch of sectors with data that fudges the same ECC ("interpolate", my huge wobbly arse!) and then you put it back in service?

    Either you didn't read that right, or the video does a poor job of outlining exactly how this works (don't have time to watch it right now).

    It first deactivates the drive's sector auto-replacement feature (I think it's the SMART feature of replacing bad sectors with good "spare" ones), then recovers the data, then reactivates the SMART feature and makes the drive realize the sector is bad by exercising that very sector; the sector gets replaced with a spare, the spare is tested and verified as good, and THEN the recovered data is written back to the now-good sector.

    I have taken a server's bad RAID 0 Novell Netware drive and recovered it with Spinrite (took 10 hours to read one sector at one point, but it did it). (Why anyone would put the OS on a RAID 0 is a :wtf: in its own right, but I didn't create the problem; I was just asked to fix it.) The fact that Spinrite recovers the bits and places them back in exactly the same LOGICAL locations it recovered them from (remember, it's a totally different location on the physical platter) is why it works regardless of OS or architecture.

    Oh, and the customer whose server I recovered? Still a client 14 years later. :)

    EDIT:
    Better explanation on these links:

    Relevant excerpt:

    THIS IS VERY IMPORTANT: While SpinRite is performing this extensive surface scrubbing -- with your data being held safely off the disk -- the drive's automatic defect relocation capabilities are fully active and SpinRite is working to show the drive its own defective sectors! This stimulates the drive to replace any that SpinRite can demonstrate are bad, with brand new spares!

    Return the User's Data
    After the region has passed all of SpinRite's tests, and the drive has relocated any sectors that "it" may have discovered to be bad, SpinRite replaces the user's data into the absolutely safe and freshly tested region, then moves on to the next region to test.

    TL;DR: Grab data off bad sector, get it replaced with spare sector, replace data onto safe spare sector.



  • @redwizard said:

    It first deactivates the drive's sector auto-replacement feature (I think it's the SMART feature of replacing bad sectors with good "spare" ones), then recovers the data, then reactivates the SMART feature and makes the drive realize the sector is bad by exercising that very sector; the sector gets replaced with a spare, the spare is tested and verified as good, and THEN the recovered data is written back to the now-good sector.

    First off, drives don't need you to fartarse about with their SMART settings in order to do any of that. Normal course of business for any modern drive (i.e. one with SMART) is that any sector that trips the drive's bad-ECC threshold on read will be marked as "pending reallocation"; the drive will then put it somewhere physically different when you next rewrite it.

    Spinrite's only real claim to fame appears to be this "DynaStat" foolery, which I'm assuming basically amounts to doing lots of long reads and trying to reconstruct the original data values based on guesswork and the ECC that a long read includes. Well, drives already do that internally. If a longer burst of bits is damaged than the ECC can fix, then your original data bits are gone. If you do a bunch of long reads and compare them, you might be able to find the location of the error burst, at which point you could then back-calculate values that fit the existing ECC; but this is equivalent to guessing a set of solutions for an under-determined set of simultaneous equations. You cannot "interpolate" (Steve Gibson's word) unreadable bits. Best you can do is read the same sector over and over and over in the hope that one of those reads is going to get you a correctable error, which it might do if the bad spot has not yet flaked completely off the medium.
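    The "compare a bunch of reads" idea is easy to illustrate. A minimal sketch (simulated data, a plain bitwise majority vote; this is the general statistical idea, not Spinrite's actual algorithm): with independently flipped bits, voting across several imperfect reads of a sector recovers the original more reliably than any single read.

    ```python
    import random

    def majority_vote(reads):
        """Bitwise majority vote across several imperfect reads of a sector."""
        n = len(reads)
        return bytes(
            # For each byte position, keep each bit that is set in more
            # than half of the reads.
            sum(1 << b for b in range(8)
                if sum((r[i] >> b) & 1 for r in reads) * 2 > n)
            for i in range(len(reads[0]))
        )

    def noisy_read(data, flip_prob=0.01):
        """Simulate a read where each bit independently flips with flip_prob."""
        return bytes(
            byte ^ sum(1 << b for b in range(8) if random.random() < flip_prob)
            for byte in data
        )

    random.seed(0)
    original = bytes(random.randrange(256) for _ in range(512))  # one 512-byte sector
    reads = [noisy_read(original) for _ in range(9)]
    recovered = majority_vote(reads)
    print("single read matches original:", reads[0] == original)
    print("majority vote matches original:", recovered == original)
    ```

    Of course, this only works when the errors really are independent between reads; a bit that reads back wrong the same way every time is simply gone, which is the under-determined-equations problem above.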

    @redwizard said:

    The fact that Spinrite recovers the bits and places them back in exactly the same LOGICAL locations it recovered them from (remember, it's a totally different location on the physical platter) is why it works regardless of OS or architecture.

    GNU ddrescue does exactly the same thing if you tell it to copy a drive to itself, including as many read retries as you care to ask it for, except without costing $90 and without a bunch of self-aggrandizing marketing fluff. But you wouldn't usually use it that way unless the SMART logs show less than a handful of sectors pending reallocation after doing a full read pass over the whole drive. Usually you'd use it to copy your failing drive to a new one of the same size, including zillions of retries if you think that will help, then toss the drive whose medium has started to flake off.

    You can stop a ddrescue session at any time, and it will remember which sectors it hasn't managed to copy yet. So you can do stuff like copy most of a drive until it's just stuck on retries for the failing sectors, then interrupt it, stick the source drive in the freezer for an hour and restart your ddrescue session; it will go straight back to trying to read the failed sectors without bothering to redo work that's already succeeded.


  • FoxDev

    @flabdablet said:

    onboard NIC after lightning took out an ADSL modem.

    That reminds me... I need to put my cat5 line in through my surge protector... not that i expect any issues but it's not currently protected and should be.



  • Meh. NICs are cheaper than surge protectors :-)


  • FoxDev

    Even those integrated with the motherboard? 😛
    INB4 'just get a discrete card you damn fool hedgy'


  • :belt_onion:

    @accalia said:

    @flabdablet said:
    When you do an 8.x install, do you create/use a MS account or force the use of a local-Windows-only account? That's the only non-default setting I consistently pick.

    that is annoyingly obscure how to trigger in the 8.1/10 installer. (basically the only way i've found to do it consistently is to unplug the network while installing. you can also do it by trying to login to some @example.com email address and when that fails selecting local account, but that doesn't always work.)

    Which is really obnoxious when you want to install a local admin account before setting up Active Directory and having your personal account connected to the live address...


  • FoxDev

    @flabdablet said:

    Meh. NICs are cheaper than surge protectors

    the surge protector is free. the last one i bought included RJ-45

    my UPS also has RJ-45 surge protection

    i just have to get off my lazy butt and make a 3' patch cable to take care of it.



  • @flabdablet said:

    First off, drives don't need you to fartarse about with their SMART settings in order to do any of that. Normal course of business for any modern drive (i.e. one with SMART) is that any sector that trips the drive's bad-ECC threshold on read will be marked as "pending reallocation"; the drive will then put it somewhere physically different when you next rewrite it.

    Actually, if you don't disable said feature on the drive, the act of trying to recover the data would cause the drive to replace the sector with a blank spare after the first unreadable pass. NOT GOOD if you're trying to recover data.

    @flabdablet said:

    Spinrite's only real claim to fame appears to be this "DynaStat" foolery, which I'm assuming basically amounts to doing lots of long reads and trying to reconstruct the original data values based on guesswork and the ECC that a long read includes.

    Of course. If the sector's already degraded, multiple reads to attempt to reconstruct the data is the thing to do, short of sending it to a dust free lab.

    @flabdablet said:

    Well, drives already do that internally.

    Only to a point. As grc.com points out, the drive gives up far too easily and replaces the sector. There is a lot that can be done (but most utilities don't bother to do) to recover data. Details here: https://www.grc.com/srrecovery.htm

    This part has saved several servers, including one whose Novell SYS partition was lost before employing Spinrite:

    Accept Partial Data
    If the Dynastat analysis is unable to perfectly reconstruct the sector's data, it will at least be able to identify the data bits that differed from one reading to the next. This allows it to greatly minimize the uncertainty within the sector's damaged area and to recover most of the sector's 4096 individual data bits.
    SpinRite will log the name of the file whose sector was not completely recovered and replace the file's completely unreadable sector (which any other software would have simply discarded) with this "mostly correct" now-readable sector so that all but a few data bits of the file can still be read and used.

    Since it does this at the sector level, this doesn't just reconstruct files. It reconstructs whatever was there, including partition markings, which is exactly what I needed that one time I was recovering a GroupWise server on the Novell SYS partition.

    While your utility seems to do more than many I've seen, I don't see your utility doing anything to that degree to recover information from an otherwise unreadable drive.

    @flabdablet said:

    You cannot "interpolate" (Steve Gibson's word) unreadable bits.

    He means this (from same link above):

    Dynastat Analysis
    If several thousand sector re-reads all fail to produce a single perfect reading, SpinRite next employs the database it has been building from each failed sector reading. By performing a statistical analysis of this data, SpinRite is frequently able to reconstruct all of the sector's data, even though no single reading was perfect.

    So yes you can - sometimes, if you're lucky. Better than "sorry, you're hosed, we won't try."

    @flabdablet said:

    Best you can do is read the same sector over and over and over in the hope that one of those reads is going to get you a correctable error, which it might do if the bad spot has not yet flaked completely off the medium.

    I think the above just disproved that.

    @redwizard said:

    The fact that Spinrite recovers the bits and places them back in exactly the same LOGICAL locations it recovered them from (remember, it's a totally different location on the physical platter) is why it works regardless of OS or architecture.

    @flabdablet said:

    GNU ddrescue does exactly the same thing if you tell it to copy a drive to itself, including as many read retries as you care to ask it for, except without costing $90 and without a bunch of self-aggrandizing marketing fluff.

    Please show me in the documentation of your utility where it covers this.

    @flabdablet said:

    Usually you'd use it to copy your failing drive to a new one of the same size, including zillions of retries if you think that will help, then toss the drive whose medium has started to flake off.

    In each case where I've used Spinrite to recover a drive, I would image the drive to another working drive afterwards (the one instance where a RAID 0 was involved was tricky as I had to find an older drive that matched specs to copy to). In this case, the fact that your utility can do this after recovery makes it better in that aspect.

    In summary:

    - Spinrite appears to me to be better at data recovery.
    - Your utility is a more "complete" solution for recovering data in terms of moving it to a fresh drive in one process.



  • @redwizard said:

    Actually, if you don't disable said feature on the drive, the action of trying to recover the data would cause the drive to replace the first unreadable pass with a blank spare.

    Never seen a drive do that, not even once. Sparing out happens on write, not read.

    @redwizard said:

    As grc.com points out, the drive gives up far too easily and replaces the sector.

    Steve Gibson has, over the years, pointed out many things that have at best a tangential relationship with the truth.

    In this instance, in particular, he glosses over the fact that drives have not supported Read Long (which used to return a whole sector including its ECC field) for many years now, which means that the only data his magical DynaStat process will ever have available is whatever turns up during a rare and wonderful successful standard read.

    @redwizard said:

    In each case where I've used Spinrite to recover a drive, I would image the drive to another working drive afterwards

    I would never even contemplate doing this with a failing drive, on the grounds that I want to get all the data off it while touching it as little as possible.

    There are many utilities that will rebuild busted partition tables or whatever other kinds of logical damage have occurred on a working drive. The responsible thing to do is run those against the second clone you've made of your busted drive.

    The reason you won't find explicit instructions in the ddrescue documentation for copying a failing drive onto itself (even though this is certainly within ddrescue's capabilities) is because doing so is almost always entirely the Wrong Thing.



  • @flabdablet said:

    In this instance, in particular, he glosses over the fact that drives have not supported Read Long (which used to return a whole sector including its ECC field) for many years now, which means that the only data his magical DynaStat process will ever have available is whatever turns up during a rare and wonderful successful standard read.

    Noted for future discussion. It's 1AM here, just finished an upgrade and have to be back at work in <7hrs to handle any fallout.

    @flabdablet said:

    The reason you won't find explicit instructions in the ddrescue documentation for copying a failing drive onto itself (even though this is certainly within ddrescue's capabilities) is because doing so is almost always entirely the Wrong Thing.

    I'll accept responsibility for the communication breakdown here. When I asked you to provide documentation, I was attempting not to have you show me the Wrong Thing [caveat: it's not wrong to take data from a failing sector and transfer recovered data to a good sector on a drive that is apparently failing because of bad sector(s)] as you pointed out - I wanted to see how your utility attempted to pull data from the failing drive. This information is apparently in an absent/hard-to-find/I'm-blind-and-cannot-see-it condition.

    Since you're on the other side of the planet, until I'm conscious again (at least), have a good day. 😄



  • @redwizard said:

    I wanted to see how your utility attempted to pull data from the failing drive.

    Ah, OK.



  • @redwizard said:

    it's not wrong to take data from a failing sector and transfer recovered data to a good sector on a drive that is apparently failing because of bad sector(s)

    That depends entirely on how many failed sectors there are on the drive. I always have a sniff at the SMART logs before deciding on a data recovery approach. If there are only a handful of already reallocated sectors plus sectors pending reallocation, then that's reflective of normal wear and tear, and rewriting failed sectors in place is acceptable.

    With ddrescue, I'd do that by initially copying the drive to /dev/null to generate a logfile identifying all the unreadable sectors; then running ddrescue again using the same logfile and a very high retry count to copy the drive to an ordinary file on some other disk, which will end up as a sparse file holding only the initially unreadable sectors; then running it again using the same logfile, to copy the recovered sectors from the sparse file back to a raw device mapped over the original drive (which will then spare them out). Various filesystem-aware utilities exist that let me find out which files (if any) those bad sectors are parts of, letting me check their integrity afterwards.
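    Sketched out, that procedure looks roughly like this (not runnable as-is: /dev/sdX and the file names are placeholders, the raw-device mapping is omitted, and option names vary between ddrescue versions):

    ```shell
    # Pass 1: map the damage. Read the whole drive, discarding the data,
    # while recording which sectors failed in the mapfile.
    ddrescue --force --no-scrape /dev/sdX /dev/null pass1.map
    cp pass1.map retry.map   # keep a copy that still marks the bad areas

    # Pass 2: hammer only the failed sectors with a high retry count into
    # a sparse image on another disk; already-read sectors are skipped.
    ddrescue --retry-passes=20 --sparse /dev/sdX rescued.img retry.map

    # Pass 3: using the pass-1 map (whose unfinished areas are the old bad
    # spots), copy just the recovered sectors from the image back onto the
    # original drive, which spares out the bad physical sectors on write.
    ddrescue --force rescued.img /dev/sdX pass1.map
    ```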

    If the initial SMART log shows more than about 20 bad sectors, or ddrescue's progress meter shows more errors during an initial copy pass to /dev/null than I'm expecting, I'll consider the original drive to be not worth persisting with and just ddrescue it to a second one; if that process ends with sectors still unread, I'll freeze the original drive for an hour and then try it again (this gets good results quite surprisingly often, as can altering its orientation or gently vibrating it).

    If I were happy to be as cavalier about data integrity as Steve Gibson, I could emulate Spinrite with ddrescue by mapping a raw device over the original disk device to avoid unintended reads caused by the kernel's block device cache, then ddrescuing that to itself with the sector size set to match the physical sector size on the drive and a very high maximum retry count.


  • ♿ (Parody)

    I drove 70 sound cards to a new USB port: Soundcards are bullshit



  • WTF? How are soundcards more off-topic than two guys talking about recovering Hard Disk Drives?


  • ♿ (Parody)

    That one got flagged and it was its own sub-flamewar. I don't mind a bit of work now and then, but let's not get crazy.


  • BINNED

    Curse you useless Jeff-ifications



  • @flabdablet said:

    In this instance, in particular, he glosses over the fact that drives have not supported Read Long (which used to return a whole sector including its ECC field) for many years now, which means that the only data his magical DynaStat process will ever have available is whatever turns up during a rare and wonderful successful standard read.

    Drives no longer supporting Read Long sounds like a :wtf: in its own right, borne out of ignorance of how well a process like Dynastat can work when available. Verifying this is problematic, as apparently no one has had a reason to write an article specifically about which hard drives do or don't have Read Long capability. Best I can come up with on Google searches are articles like this (oh look, Spinrite is recommended) and this (hmmm, Read Long doesn't even give the ECC, and the drive was made in 2003.)

    In an ideal world, we wouldn't worry about recovering data from drives, because people would:

    - Use RAID 1 minimum on servers (not RAID 0 like the one server I recovered, or just a single drive like a number of servers I've recovered)
    - Have BACKUPS of their data at least nightly
    - Actually test backups every so often with a restore, so they know their backups are actually working

    In each case where I've used Spinrite over the years, none of the above best practices were being used by any of those clients, of course. The last time I used Spinrite was about 2010 - recovering a voice mail "server" put in service 10 years earlier and still running Windows 2000 Professional (Comdial had gone out of business years earlier, so even their phone vendor could not rebuild the voice mail server any other way, even with a new drive). And it worked. As always, I imaged the drive over to a new drive for use and saved the image just in case. That client finally upgraded their phone system this year and retired that dinosaur.

    Forgive me for being averse to your criticisms of Mr. Gibson. Having used his product at least a dozen times over the years to save businesses considerable costs (mostly prior to 2010, but still), I wasn't about to just accept any seemingly off the cuff write-off of his technology. $89 is cheap compared to replacing/rebuilding a server.

    In my own estimation, the best approach in recovering a drive is now:

    1. Use ddrescue to get most, if not all of it. (Note: I really like that it will skip the bad parts and get all of the good data FIRST, then go back and try to read the problem areas. Long overdue common sense feature!)
    2. Conditional on critical data still missing that needs to be recovered: Use Spinrite if ddrescue is unable to get all of it, on the off chance that Spinrite can*, if the drive hasn't expired by then.

    Thank you for the recommendation.


    *Only because the Read Long issue is unverifiable; proving a negative ("no manufacturer does this anymore ever" type of thing) is nearly impossible.



  • @redwizard said:

    Drives no longer supporting Read Long sounds like a :wtf: in its own right

    You might find it on a SCSI drive if you're lucky. IIRC it stopped being part of the ATA spec in about 2010.

    Edit:

    @redwizard said:

    skip the bad parts and get all of the good data FIRST, then go back and try to read the problem areas. Long overdue common sense feature

    GNU ddrescue has worked this way since first released in 2004.

