Still seeing a whole lotta red here ...



  •  

    I started this going a few hours ago (yeah, it's a server) ... would you look at the blazing progress! See that blue band of contiguous files? Woohoo, we're cruising along.

    Sorry, not a WTF really. Except for the WTF that no one ever told me keeping server drive space freed up was suddenly in my job description ... *sigh* I'm a friggin' programmer/analyst, not a system admin.  



  • The real WTF is that the 41% free space doesn't show up in the display. What is up with that?




  • Isn't it fairly trivial to make Windows defrag itself every midnight? As for the massive amounts of red, Windows XP introduced a horrible new defragmenter that looks a lot prettier than the old one and is correspondingly less effective. Try this (save as a .bat, run at midnight):

    @ECHO OFF
    REM Rotate the previous night's log, then defragment each drive in turn.
    IF EXIST C:\defrag.log.0 del C:\defrag.log.0
    IF EXIST C:\defrag.log move C:\defrag.log C:\defrag.log.0
    defrag C: >>C:\defrag.log 2>&1
    defrag D: >>C:\defrag.log 2>&1
    defrag E: >>C:\defrag.log 2>&1
    defrag F: >>C:\defrag.log 2>&1
    
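    To actually make it run at midnight, point the Task Scheduler at the .bat. Something along these lines should do it -- the path is just an example, and the exact schtasks syntax varies a little between Windows versions, so check schtasks /? on your box:

    schtasks /create /tn "Nightly defrag" /tr "C:\scripts\defrag_all.bat" /sc daily /st 00:00:00 /ru SYSTEM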


  •  I'm surprised the default defragmenter hasn't committed seppuku. Use JKDefrag or something.



  • So why do you care about defragmenting NTFS disks at all? And why do you want to have them separate, instead of one large FS?

     My guess is that it's all red because it's just a few large files that have grown at the same time, interleaving each other. You won't gain anything by defragmenting that. You might even lose some performance.



  •  @Gieron said:

    The real WTF is that the 41% free space doesn't show up in the display. What is up with that?

    Because each pixel represents (hundreds/thousands/millions) of disk sectors. If just one of those contains a fragmented file, it's red. Obviously this is a busy disk, and the last time it was defragged, the SysAdmin was a wooly mammoth.



  •  @alegr said:

     My guess is that it's all red because it's just a few large files that have grown at the same time, interleaving each other. You won't gain anything by defragmenting that. You might even lose some performance.

    Huh Wha????????????????????????

    Step... away... from... the mushrooms... 

     



  • @alegr said:

     My guess is that it's all red because it's just a few large files that have grown at the same time, interleaving each other.

    My bet is that it's one big-arse paging file, scattered throughout all the free space gaps. 



  • @alegr said:

    So why do you care about defragmenting NTFS disks at all? And why do you want to have them separate, instead of one large FS?

     My guess is that it's all red because it's just a few large files that have grown at the same time, interleaving each other. You won't gain anything by defragmenting that. You might even lose some performance.

     

     

    Please explain (1) why NTFS is immune to the physical issue of performing extra seeks to retrieve all of the fragments of a file and (2) how defragmenting would, at best, have no effect, and, at worst, have a deleterious effect, on performance.

     



  • It's the IPMonitor data drive... if anyone has seen the files it generates daily/weekly/monthly, that explains a lot. And it got down to 180MB before someone's red light came on and they called me.

     



  • @Lingerance said:

    Isn't it fairly trivial to make Windows defrag itself every midnight? As for the massive amounts of red, Windows XP introduced a horrible new defragmenter that looks a lot prettier than the old one and is correspondingly less effective. Try this (save as a .bat, run at midnight):

    @ECHO OFF
    REM Rotate the previous night's log, then defragment each drive in turn.
    IF EXIST C:\defrag.log.0 del C:\defrag.log.0
    IF EXIST C:\defrag.log move C:\defrag.log C:\defrag.log.0
    defrag C: >>C:\defrag.log 2>&1
    defrag D: >>C:\defrag.log 2>&1
    defrag E: >>C:\defrag.log 2>&1
    defrag F: >>C:\defrag.log 2>&1
    

     

    I agree with you that it's fairly trivial; I just don't want to burn out my hard drive.

    The script you provided is nice for an overnight defrag, though. I didn't know defrag was command-line accessible. Learn something new every day, eh...
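    (Just ran defrag /? out of curiosity -- at least on the XP box here there's also an analyse-only mode, handy for getting a fragmentation report without committing to a full run:)

    defrag C: -a -v    (analyse only, verbose report, doesn't actually move anything)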

     



  • @dlikhten said:

    I didn't know defrag was command-line accessible.
     

    What program isn't command line accessible?

    The only way something would be less usable through the command line would be if it accepted no arguments... but it would still be accessible.



  • "We've certainly improved the probability of improving ..."
    - Oakland A's owner Lew Wolff (after the team's recent fire sale wherein all players over 12 years old were traded for younger talent)
     
    I thought that professional baseball players couldn't be less than 18 years old anyway.  How many 11-year-olds are there that can compete at the professional level?


  • @dlikhten said:

    The script you provided is nice for an overnight defrag, though. I didn't know defrag was command-line accessible. Learn something new every day, eh...

    Supposedly everything you can do through the Windows GUI is available through an obscure CLI; I've yet to find the commands for setting up a static IP, mounting a hard drive to a mount point or drive letter, and a few other things where I'd rather use the CLI than the ten-click GUI.


    Actually, in retrospect, a weekly defrag would probably be better.



  • @Lingerance said:

    setting up a static IP
     

    netsh interface ip set address name="Local Area Connection" static 192.168.0.100 255.255.255.0 192.168.0.1 1

    @Lingerance said:

    mounting a hard drive to a mount point or drive letter

    Never tried this, but: http://support.microsoft.com/kb/300415
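    In the same vein, I believe mountvol covers the drive-letter/mount-point case without the GUI. The paths and GUIDs below are placeholders and this is from memory, so check mountvol /? before trusting it:

    mountvol                                    (no arguments: list volume GUIDs and where they're currently mounted)
    mountvol X:\ \\?\Volume{...}\               (bind a volume to a drive letter)
    mountvol C:\mounts\data\ \\?\Volume{...}\   (mount it into an empty folder on an NTFS volume instead)
    mountvol X:\ /d                             (remove the mount point again)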

    @Lingerance said:

    Actually, in retrospect, a weekly defrag would probably be better.

    I agree, though probably with even more time between runs, depending on your filesystem usage.



  • @mrprogguy said:

    I thought that professional baseball players couldn't be less than 18 years old anyway.  How many 11-year-olds are there that can compete at the professional level?

    I'm more concerned that he considers those younger than 12 years to be better!



  • @Lingerance said:

    @dlikhten said:
    The script you provided is nice for an overnight defrag, though. I didn't know defrag was command-line accessible. Learn something new every day, eh...
    Supposedly everything you can do through the Windows GUI is available through an obscure CLI; I've yet to find the commands for setting up a static IP, mounting a hard drive to a mount point or drive letter, and a few other things where I'd rather use the CLI than the ten-click GUI.
    Actually, in retrospect, a weekly defrag would probably be better.

     Agreed to the weekly part.

    To be honest, I just wish the Windows command-line shell were even a fraction of what bash is. Then again, Cygwin to the rescue :P

    My favorite of all the things I've ever gotten for Windows is UnixUtils -- Win32-native ports of all your favorite core Linux commands :) Just put them in your path. The only problem is that cmd.exe does not have coloring like Linux shells do, so ls --color gives bad output :( Sometimes I wish I could use the PuTTY shell to access my local machine without hosting myself as an SSH host. Better for clipboard usage, window resizing, etc...



  • @dlikhten said:

    The only problem is that cmd.exe does not have coloring like Linux shells do, so ls --color gives bad output

    Why not use MSYS, then? Just tried ls --color, works fine. (Well, except it messes up Cyrillic filenames. Grrr.)



  • @mrprogguy said:

    "We've certainly improved the probability of improving ..."
    - Oakland A's owner Lew Wolff (after the team's recent fire sale wherein all players over 12 years old were traded for younger talent)
     
    I thought that professional baseball players couldn't be less than 18 years old anyway.  How many 11-year-olds are there that can compete at the professional level?

     

    Erm ... 



  • @MasterPlanSoftware said:

    What program isn't command line accessible?

    Windows update.

    (And anything else written in ActiveX crap) 



  • @Spectre said:

    Well, except it messes up Cyrillic filenames

    Non-ASCII filenames are a notoriously hard problem that nobody has ever managed to get working right. You're just seeing a symptom of the particular ways in which the Windows attempt is broken.



  • @asuffield said:

    @MasterPlanSoftware said:

    What program isn't command line accessible?

    Windows update.

    (And anything else written in ActiveX crap) 

     

    Try wuauclt.exe, but best of luck with finding any documentation on it.  (It's also scriptable (what part of Windows isn't?), but best of luck with finding any documentation on that too.)
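    For the record, the switches I've seen passed around for it (undocumented, so treat them as folklore and test on a non-critical box first):

    wuauclt /detectnow                        (ask the Automatic Updates client to check for updates immediately)
    wuauclt /resetauthorization /detectnow    (clear the client's WSUS targeting cookie, then re-detect)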



  • @asuffield said:

    @Spectre said:

    Well, except it messes up Cyrillic filenames

    Non-ASCII filenames are a notoriously hard problem that nobody has ever managed to get working right. You're just seeing a symptom of the particular ways in which the Windows attempt is broken.

    Why is it broken? The ANSI/OEM codepage duality is broken, but it is easily overcome by SetFileApisToOEM or the Unicode functions, and the filenames themselves work just fine. I think the MinGW team is at fault here.



  • @mfah said:

    Try wuauclt.exe, but best of luck with finding any documentation on it. 
     

    A quick Google search turns up:

    http://technet2.microsoft.com/windowsserver/en/library/26807cd7-72c0-44b1-80f4-a39793801c451033.mspx?mfr=true

    http://technet2.microsoft.com/windowsserver/en/library/fdee3ce6-9b4d-4d3d-9a5c-ef341faf507d1033.mspx?mfr=true

     @mfah said:

    It's also scriptable (what part of Windows isn't?), but best of luck with finding any documentation on that too.

    Another Google search:

    http://www.microsoft.com/technet/community/columns/scripts/default.mspx



  • @RayS said:

     @alegr said:

    My guess is that it's all red because it's just a few large files that have grown at the same time, interleaving each other. You won't gain anything by defragmenting that. You might even lose some performance.

    Huh Wha????????????????????????

    Step... away... from... the mushrooms... 

     

    I was thinking the same thing, but was afraid he might actually know something I don't.

    Slightly off-topic, but I've always wondered about how Linux filesystems don't need to be defragmented. I understand that they pick the best place to save files, but wouldn't it eventually get fragmented anyway? Does it do some minor defragmenting in the background when there aren't enough contiguous sectors to store the file?



  • @Cap'n Steve said:

    Slightly off-topic, but I've always wondered about how Linux filesystems don't need to be defragmented. I understand that they pick the best place to save files, but wouldn't it eventually get fragmented anyway? Does it do some minor defragmenting in the background when there aren't enough contiguous sectors to store the file?


    @Wikipedia said:
    "Modern Linux filesystem(s) keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system."



  • @Cap'n Steve said:

    @RayS said:

     @alegr said:

     My guess is that it's all red because it's just a few large files that have grown at the same time, interleaving each other. You won't gain anything by defragmenting that. You might even lose some performance.

    Huh Wha????????????????????????

    Step... away... from... the mushrooms... 

     

    I was thinking the same thing, but was afraid he might actually know something I don't.

    As did I. The problem caused by fragmentation is upped seek times to collect all the bits (pun not intended). I'm not familiar enough with NTFS to exclude the possibility of it having a better indexing... thing... way... method... hash... rainbow... magic... to alleviate seek  problems.

    So I'm going with the drugs theory.



  • @RayS said:

    Step... away... from... the mushrooms... 

     

     

    Wouldn't it be fairly quick to read out the sequence a1-a2-a3 and b1-b2-b3 from two interleaved files (a1-b1-a2-b2-a3-b3 etc)? It makes sense to me. I question whether reading consecutive blocks is much faster; the heads just have to move a distance of 2x between every read instead of x, and I doubt that that's much overhead considering that the acceleration and deceleration would probably take much more time, especially since x is close to the minimum distance that the arms can travel. Even if there is more separation on average, I would guess that there is not much use in defragging.



  • @Obfuscator said:

    I question whether reading consecutive blocks is much faster; the heads just have to move a distance of 2x between every read instead of x, ...

    Warning: some speculation follows.

    Remember that we're talking about a spinning disc here. Since it moves, and since each track can hold more than one block, the heads don't have to move for every read or write. Much of the time, you can hold still and just let the bits come to you as the disc rotates.

    When you move a head, you also have to find the right block on the track. This means waiting for it to come to you, since the heads can't move in the "spinning direction". Presumably, in order to speed up reading consecutive blocks, moving from a track to its neighbour is optimized with respect to waiting times, so that when you're done moving a head you only wait the absolute minimal time. (It also seems likely that you'd want to optimize the seeks themselves specifically for that case, but I don't know if that is possible.)

    So for reading consecutive blocks: move occasionally (which is slow), and let the blocks of data come to you as much as possible. For "perfectly" interleaved blocks: move twice as many times and spend half of what would be optimal reading time waiting for the blocks that are relevant to you. You might still be moving only track-to-neighbour (an x move), though.

    If you're not dealing with perfectly interleaved files, then you'll instead have occasional longer seeks (very slow - large movement and long waiting times) and reading larger chunks of consecutive blocks (fast).



  • @Lingerance said:

    @Cap'n Steve said:

    Slightly off-topic, but I've always wondered about how Linux filesystems don't need to be defragmented. I understand that they pick the best place to save files, but wouldn't it eventually get fragmented anyway? Does it do some minor defragmenting in the background when there aren't enough contiguous sectors to store the file?

    @Wikipedia said:
    "Modern Linux filesystem(s) keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system."

    Additionally, they spread files throughout the disk rather than packing them all at the start, so whenever something wants to append to a file, there's a very good chance that there will be free space after it.

    You remember that button in the Windows defrag that says "full optimization", meaning free space consolidation? Don't use it. It was created by a moron. It makes things worse, not better.



  • @dhromed said:

    I'm not familiar enough with NTFS to exclude the possibility of it having a better indexing... thing... way... method... hash... rainbow... magic... to alleviate seek  problems.

    I am. It doesn't. As filesystems go, NTFS is the second worst of the ones in common use, beaten only by FAT. 



  • @asuffield said:

    As filesystems go, NTFS is the second worst of the ones in common use, beaten only by FAT.

    Nonsense. It supports multiple forks like HFS+, and better still, sectors as small as 8 bytes for ultra-efficient space utilisation:



  • @asuffield said:

    @Lingerance said:

    @Cap'n Steve said:

    Slightly off-topic, but I've always wondered about how Linux filesystems don't need to be defragmented. I understand that they pick the best place to save files, but wouldn't it eventually get fragmented anyway? Does it do some minor defragmenting in the background when there aren't enough contiguous sectors to store the file?


    @Wikipedia said:
    "Modern Linux filesystem(s) keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system."

    Additionally, they spread files throughout the disk rather than packing them all at the start, so whenever something wants to append to a file, there's a very good chance that there will be free space after it.

    Good to know; it just bothers me when Linux fans say it's impossible for a file to get fragmented in Linux. I did notice that there is a defrag program for Linux, but I don't think it was included in any of the distros I've tried.



  • @Cap'n Steve said:

    Good to know; it just bothers me when Linux fans say it's impossible for a file to get fragmented in Linux.

    It's not impossible, it's just not a problem. Files get fragmented all the time, but the number of fragmented files on any filesystem, and the number of fragments in each file, are both so small that there is no real performance problem. You don't get measurable issues until files are split into hundreds or thousands of fragments - it's not necessary to completely eliminate it in order to solve the problem.

    @Cap'n Steve said:

    I did notice that there is a defrag program for Linux, but I don't think it was included in any of the distros I've tried.
     

    It's just:

    for i in `find .`; do cp -p "$i" /var/tmp/; mv "`basename \"$i\"`" "$i"; done

    Which isn't really worth including (and also not really worth running).



  • @asuffield said:

    for i in `find .`; do cp -p "$i" /var/tmp/; mv "`basename \"$i\"`" "$i"; done

    Please explain that to me, I'm stupid.



  • @derula said:

    @asuffield said:

    for i in `find .`; do cp -p "$i" /var/tmp/; mv "`basename \"$i\"`" "$i"; done

    Please explain that to me, I'm stupid.

     

    Assuming that the stuff on Linux file systems mentioned earlier in this thread is true (I suppose it is (*)), and your disk has contiguous free space, the script/command simply copies each file somewhere else, thereby (hopefully) using the free contiguous space on the disk (i.e. if the file was previously fragmented, there's a chance it no longer is afterwards), after which the mv puts the fresh copy back at the original path.

    Of course, if your disk does not have contiguous free space, running the command above is pretty much pointless. Also, the command above (AFAIK) messes up the 'mv' unless you're inside /var/tmp, and files already in /var/tmp ... are funny -- I guess it's good enough to demonstrate the general idea, though (and the fact that you should not run commands you don't understand fully).

    (*) I've previously heard that fragmentation on Linux file systems does not become a problem unless you have little free space left (<10% IIRC).


    (I know this subject has been previously covered, but I must have missed the answer if there is one: is there any semi-sane way to post replies in Firefox without editing the HTML code directly?)



  • @cvi said:

    Of course, if your disk does not have contiguous free space, running the command above is pretty much pointless.

    Nothing can defragment such a filesystem without directly mangling it on the disk. Buy a larger disk when that happens.

    @cvi said:

    Also, the command above (AFAIK) messes up the 'mv' unless you're inside /var/tmp, and files already in /var/tmp

    Sigh. Hard to see what you're typing in this font.

    for i in `find .`; do cp -p "$i" /var/tmp/; mv /var/tmp/"`basename \"$i\"`" "$i"; done, obviously

     



  • @asuffield said:

    Sigh. Hard to see what you're typing in this font.

    [correction]

     

    I have a habit of using my text editor first, then pasting the code here in the WTFISWYG editor. 



  • @asuffield said:

    Additionally, they spread files throughout the disk rather than packing them all at the start, so whenever something wants to append to a file, there's a very good chance that there will be free space after it.

    Win some, lose some. OTOH, there are performance benefits from packing everything in at the start of the disk:

    (a) All your files are within the first X% of the disk. Moving between two files should be faster since the head has less distance to travel. Becomes unimportant when the disk is close to full, though.
    (b) The start of the disk is on the faster, outside part of the disk. Putting your files where they will be read/written faster makes sense.

    Weighing up (a) and (b) vs. the increased fragmentation is something that could probably go either way depending on your usage pattern, performance needs, and the proportion of files that change vs. remain static. 

    @asuffield said:

    You remember that button in the Windows defrag that says "full optimization", meaning free space consolidation? Don't use it. It was created by a moron. It makes things worse, not better.

    For files that don't change, or do so only infrequently (e.g. system files), that makes sense. Space between them is wasted fragmentation-bait, and will lessen performance via (a) and (b) above. That 300GB disk with 150GB of 64KB files each 64KB apart isn't going to like having a 150GB data file saved to it, since it will end up saved in roughly two and a half million fragments. Ouch.

    Of course, for often-updated files (user data/logs/etc.) you are right. As ever, it's not a case of right vs. wrong, just right vs. wrong for a specific scenario.

    I'm sure that between the filesystem and OS, we could come up with something that takes the best aspect of both (and other methods) while minimising their downsides.

     

    Then again, most of the above is moot when you move to solid-state drives, which are becoming increasingly common.



  • @dlikhten said:

    Sometimes I wish I could use the PuTTY shell to access my local machine without hosting myself as an SSH host. Better for clipboard usage, window resizing, etc...

     

    If you use Cygwin, try Cygterm.

    It also works for some Windows programs, with the caveat that if a program tries to change the console mode using SetConsoleMode(), it will break. Most programs don't do this, so you're fine.



  • @RayS said:

    Moving between two files should be faster since the head has less distance to travel.

    Seek time is not significantly affected by travelling distance on modern drives. They can seek to any position in more or less the same amount of time. My understanding is that the delay mostly consists of getting the head properly aligned over the track, which requires some time to settle after moving. They don't use a stepper motor any more, which was sensitive to the distance moved.

    @RayS said:

    The start of the disk is on the faster, outside part of the disk.

    The sequence of bytes presented to the operating system stopped having anything to do with the physical layout of the disk about ten years ago. It's still true for CDs and DVDs, but for hard drives it's hopelessly model-specific which bytes will go where, and it changes over time because the drives rearrange themselves while in use for reliability or performance reasons. Drive manufacturers are notoriously cagey about exactly what the firmware does here, because they're paranoid about their opposition stealing their "advantage", so it's impossible to place files in the "faster" regions.



  • @asuffield said:

    The sequence of bytes presented to the operating system stopped having anything to do with the physical layout of the disk about ten years ago. It's still true for CDs and DVDs, but for hard drives it's hopelessly model-specific which bytes will go where, and it changes over time because the drives rearrange themselves while in use for reliability or performance reasons. Drive manufacturers are notoriously cagey about exactly what the firmware does here, because they're paranoid about their opposition stealing their "advantage", so it's impossible to place files in the "faster" regions.

    It's a miracle that defrag works at all, then, if the firmware is going to subvert it.



  • @GalacticCowboy said:

    It's a miracle that defrag works at all, then, if the firmware is going to subvert it.

    The authors of the firmware are presumed to design for expected usage patterns, so for the most part you can expect two contiguous logical blocks to be contiguous on the disk as well - but there are probably discontinuities in there somewhere (and nobody but the drive manufacturer knows exactly how many and where). Similarly to fragmentation, a handful of discontinuities do not create performance issues, because you don't have a real problem until you start racking up hundreds or thousands of extra seeks.

