Defragment my iPod?
Posted: Sat Jan 26, 2008 5:41 pm
Should I try it? I'd like to try it just to see if it speeds up at all, but if it's potentially harmful, I'll steer clear.
fliptw wrote:
you can't directly access the area that stores the music on the ipod

Huh? The music is in the (hidden) iPod_Control folder.
Krom wrote:
I believe iPods and most other MP3 players that show up as mass storage devices in Windows use the FAT32 file system and can be defragmented if you want. Odds are it isn't that badly fragmented though. Also, if it is a flash-based player then defragmenting is irrelevant; it only applies to the hard-disk-based players.

And trying to defragment flash-based players would really wear the chips down (wear levelling).
heftig wrote:
And trying to defragment flash-based players would really wear the chips down (wear levelling).
Munk wrote:
There is no "seeking time" on flash memory, so defragmentation doesn't make much sense.

So much knowledge yet lacking some very important "practical" understanding.
Anyway, the address space of the device controller does not necessarily reflect the physical ordering.
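To picture what "the address space does not reflect the physical ordering" means in practice: flash devices sit behind a translation layer in the controller that remaps the logical blocks the file system sees onto physical pages, so two logically adjacent clusters need not be physically adjacent at all. A minimal toy model of that idea (the mapping policy here is invented for illustration, not how any particular controller works):

Code:
import random

# Toy flash translation layer: logical block addresses (what the file
# system sees) are remapped to physical pages by the controller.
NUM_BLOCKS = 16

class ToyFTL:
    def __init__(self, num_blocks):
        # Invented policy: scatter logical blocks over physical pages,
        # roughly the way wear levelling tends to in practice.
        physical = list(range(num_blocks))
        random.shuffle(physical)
        self.map = dict(zip(range(num_blocks), physical))

    def physical_page(self, logical_block):
        return self.map[logical_block]

ftl = ToyFTL(NUM_BLOCKS)

# A "defragmented" file occupying logical blocks 0..3:
logical_extent = [0, 1, 2, 3]
print("logical :", logical_extent)
print("physical:", [ftl.physical_page(b) for b in logical_extent])
# The logical extent is contiguous, but the physical pages usually are
# not, so rearranging logical blocks says nothing about physical layout.

Which is one more reason defragmenting a flash player buys little: even a "perfectly" defragmented file is only contiguous in the controller's logical address space.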
TechPro wrote:
2) the programing for storing the file is specifically programmed to force the file to be stored in one block sectors without any fragmentation (very, very, very few programs if any are designed in this fashion).

Actually, pretty much all modern file systems do this. For example exFAT, HPFS and NTFS. They all try to store the files without fragmentation, unless that is not possible.
TechPro wrote:
Defragmenting can improve performance on some rather slow thumb drives, but not many.

More precisely, HDD thumb drives. But those things are rare.
TechPro wrote:
On any device fragmentation can lead to errors when the fragmentation reaches extreme levels. On flash memory devices the extreme level is much higher than disk based systems. That is another reason why defragging a flash memory device can seem irrelevant.

I fail to see how disk fragmentation could lead to errors.
TechPro wrote:
The all-time best and most effective defrag method is to completely remove the files from the devices and then copy the files back on. This method will give you the best defrag, but depending on how the interface of your device is designed you may or may not be able to do that or it may be totally impractical to do it. That is why programs to effect a defrag exist for your computers, because your OS usually can't be copied off and back on with any ease.

Actually, modern defragmentation programs can do this just fine, down to completely reordering the files on the disk for most efficient access (you won't be able to do this by simply copying). The only thing they can't move are locked files.
TechPro wrote:
As for "wearing the chips down" ... that isn't the case. Chips don't "wear down", they either work or don't work (like flipping a switch on and off). Use them more frequently and yes it will probably fail sooner. But "wear down"? Nope.

Each EEPROM and flash media segment can fail on its own, usually after 10,000 to 1,000,000 erase cycles. The whole chip will not simply stop working.
Go learn more about how electronics work.
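heftig's per-segment figure is the crux: flash wears out block by block, and the controller both spreads erases across the blocks and swaps in spares as individual blocks wear out. A toy model of that behaviour (block counts, cycle limit and policy are all invented for illustration, scaled down from the real 10,000 to 1,000,000 cycle ratings):

Code:
# Toy wear-levelling controller: erase counts per block, least-worn-first
# allocation, and remapping to spare blocks once a block is "worn out".
RATED_CYCLES = 1_000            # scaled down for the toy
NUM_BLOCKS, NUM_SPARES = 8, 2

erase_count = {b: 0 for b in range(NUM_BLOCKS)}
spares = list(range(NUM_BLOCKS, NUM_BLOCKS + NUM_SPARES))
retired = set()

def erase_some_block():
    """Erase the least-worn live block, retiring it if it wears out."""
    block = min((b for b in erase_count if b not in retired),
                key=lambda b: erase_count[b])
    erase_count[block] += 1
    if erase_count[block] >= RATED_CYCLES and spares:
        retired.add(block)                 # block is worn out
        replacement = spares.pop()         # swap in a spare
        erase_count[replacement] = 0
    return block

for _ in range(50_000):
    erase_some_block()

print("erase counts:", erase_count)
print("retired blocks:", retired)
# Individual blocks fail after enough erase cycles, but the device as a
# whole keeps working until the spares run out; it doesn't die all at once.

So "wear" is real, it just shows up as individual segments being retired rather than the chip flipping from working to dead.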
TechPro wrote:
Yes, there is "seek time" with flash devices, but it's extremely small.

I don't think flash devices have a "seek time". There might be a delay before data is returned, but it makes absolutely no difference how the data is distributed. Flash media hasn't got any moving parts like hard disks have. There are no heads that need to seek the data.
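Whether layout matters on a given device is easy to check for yourself: write a test file, then time reading its blocks in order versus in a shuffled order. On a hard disk the shuffled pass is typically far slower; on flash the two usually come out close, which is the whole point. A rough sketch (the file path and sizes are placeholders, and note the caveat about OS caching):

Code:
import os, random, time

# Placeholder path -- point this at the device you want to test.
TEST_FILE = "testfile.bin"
BLOCK = 4096
BLOCKS = 2048                       # 8 MiB test file

# Create a throwaway test file.
with open(TEST_FILE, "wb") as f:
    f.write(os.urandom(BLOCK * BLOCKS))

def read_blocks(order):
    with open(TEST_FILE, "rb", buffering=0) as f:
        start = time.perf_counter()
        for i in order:
            f.seek(i * BLOCK)
            f.read(BLOCK)
        return time.perf_counter() - start

sequential = list(range(BLOCKS))
shuffled = sequential[:]
random.shuffle(shuffled)

print("sequential:", read_blocks(sequential))
print("random    :", read_blocks(shuffled))
os.remove(TEST_FILE)
# Caveat: the OS cache will hide the difference unless the file is large
# or caching is bypassed, so treat the numbers as a rough indication only.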
heftig wrote:
Actually, pretty much all modern file systems do this. For example exFAT, HPFS and NTFS. They all try to store the files without fragmentation, unless that is not possible.

Not true. exFAT (designed for Windows CE, where NTFS is not feasible) does not prevent fragmentation. HPFS does lower the frequency of fragmentation, but it allows it to happen anyway. NTFS was claimed by Microsoft to minimize fragmentation, but any serious user of NTFS file systems knows that NTFS fragments (and does it a lot; that's why Microsoft included their defrag utility).
heftig wrote:
More precisely, HDD thumb drives. But those things are rare.

Nope, not limited to HDD thumb drives. I have five 1 GB and bigger thumb drives that can prove you wrong.
heftig wrote:
USB 1.1 thumb drives are slow. That doesn't mean they would benefit from defragmentation.

Doesn't mean they wouldn't either.
heftig wrote:
I fail to see how disk fragmentation could lead to errors.

I see you don't have much experience repairing end users' computers or you would have seen instances where high fragmentation leads to errors (not just delays)... so I'll let that slide.
Possibly if the maximum number of extents per file were to be exceeded. I could not find any information on this, so I assume this number is high enough not to be exceeded in any realistic scenario.

Even if it were, this would be completely independent of the memory device type. Fragmentation is a problem of the file system, not of the hardware itself.
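For reference, an "extent" here is just an unbroken run of clusters; on the FAT32 volume an iPod uses, fragmentation is nothing more than breaks in a file's cluster chain, and counting them is trivial. A toy illustration (the chains below are made up, not read from a real device):

Code:
# Toy FAT-style cluster chains: each file is a list of cluster numbers in
# the order the file system reads them.  A new "extent" (fragment) starts
# whenever the next cluster is not the one immediately following.
def count_extents(cluster_chain):
    if not cluster_chain:
        return 0
    extents = 1
    for prev, cur in zip(cluster_chain, cluster_chain[1:]):
        if cur != prev + 1:
            extents += 1
    return extents

# Made-up example chains:
contiguous_file = [100, 101, 102, 103, 104]
fragmented_file = [100, 101, 250, 251, 252, 400]

print(count_extents(contiguous_file))   # 1 extent  -> not fragmented
print(count_extents(fragmented_file))   # 3 extents -> fragmented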
heftig wrote:
Actually, modern defragmentation programs can do this just fine, down to completely reordering the files on the disk for most efficient access (you won't be able to do this by simply copying). The only thing they can't move are locked files.

You're right that modern defragmentation programs can do a good job ... but "locked" files also include files that are actively in use by the system (considered "locked" because they are "in use"), and those cannot be defragmented. That is why moving the files off (removing them) and then copying them back is extremely effective. You can only do that without the system loaded (usually you'd use a different drive to run the system from temporarily). That's when you can get the system, locked, and "in use" files defragmented.
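If anyone actually wants to try the copy-off-and-back method on a thumb drive or iPod, it is nothing more exotic than this rough sketch; the drive letter and staging folder are placeholders, and the wipe step is destructive, so back the device up first:

Code:
import shutil
from pathlib import Path

# Placeholder paths -- substitute your own.
DEVICE = Path("E:/")               # the iPod / thumb drive
STAGING = Path("C:/ipod_backup")   # temporary copy on the internal disk

# 1) Copy everything off the device (hidden folders such as iPod_Control
#    are included; protected system folders may need to be skipped).
shutil.copytree(DEVICE, STAGING, dirs_exist_ok=True)

# 2) Wipe the device (destructive!).
for entry in DEVICE.iterdir():
    if entry.is_dir():
        shutil.rmtree(entry)
    else:
        entry.unlink()

# 3) Copy the files back; each file is rewritten from scratch, so the
#    file system can lay it down in as few extents as it is able to.
for entry in STAGING.iterdir():
    if entry.is_dir():
        shutil.copytree(entry, DEVICE / entry.name)
    else:
        shutil.copy2(entry, DEVICE / entry.name)

On a system drive a dedicated defragmenter is still the better tool, precisely because of the in-use files discussed above.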
heftig wrote:
Each EEPROM and flash media segment can fail on its own, usually after 10,000 to 1,000,000 erase cycles. The whole chip will not simply stop working.

A flash media segment that fails would be just like getting bad sectors on a hard disk, and in the case of flash media would be comparable to chip failure. And yes, chips DO fail and simply stop working.
Go learn more about how electronics work.
Possibly the media has got spare segments that are used instead of failed segments (just like hard drives have got spare sectors).

heftig wrote:
I don't think flash devices have a "seek time". There might be a delay before data is returned, but it makes absolutely no difference how the data is distributed. Flash media hasn't got any moving parts like hard disks have. There are no heads that need to seek the data.

"Seek time" is simply the time it takes to seek the requested data. It does not matter if it's all electronic or a hard disk drive; there is still a measurable amount of "seek time". In hard disk drives the seek time is much, much longer because it includes the time to move the read/write heads in the drive. Since flash memory does not have any mechanical parts, the time to seek the data is very, very small, to the point that it seems not to exist. It still exists.
TechPro wrote:
Not true. exFAT (designed for Windows CE, where NTFS is not feasible) does not prevent fragmentation. HPFS does lower the frequency of fragmentation, but it allows it to happen anyway. NTFS was claimed by Microsoft to minimize fragmentation, but any serious user of NTFS file systems knows that NTFS fragments (and does it a lot; that's why Microsoft included their defrag utility).

They do try to store files as a single fragment. No "brain dead" writing to the first cluster found.
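The difference between the two allocation strategies being argued about looks roughly like this: a "brain dead" allocator drops the file into the first free clusters it finds, one by one, while a smarter one first looks for a single free run big enough to hold the whole file and only falls back to scattering when there isn't one. (Toy bitmap, invented cluster numbers.)

Code:
# Toy cluster bitmap: True = free, False = in use.
bitmap = [True] * 32
for used in (3, 4, 5, 10, 11, 20):      # some clusters already taken
    bitmap[used] = False

def alloc_first_free(bitmap, n):
    """'Brain dead': grab the first n free clusters wherever they are."""
    picked = [i for i, free in enumerate(bitmap) if free][:n]
    return picked if len(picked) == n else None

def alloc_contiguous(bitmap, n):
    """Smarter: prefer a single free run of length n, fall back otherwise."""
    run_start, run_len = None, 0
    for i, free in enumerate(bitmap):
        if free:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n:
                return list(range(run_start, run_start + n))
        else:
            run_len = 0
    return alloc_first_free(bitmap, n)   # no big enough run: fragment

print(alloc_first_free(bitmap, 6))   # [0, 1, 2, 6, 7, 8]        -> 2 extents
print(alloc_contiguous(bitmap, 6))   # [12, 13, 14, 15, 16, 17]  -> 1 extent

Even the smarter allocator fragments once the volume gets full or choppy enough, which is consistent with both positions above: the file system tries, and it still ends up needing a defragmenter eventually.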
TechPro wrote:
I see you don't have much experience repairing end users' computers or you would have seen instances where high fragmentation leads to errors (not just delays)... so I'll let that slide.

But that would imply the disk is damaged, wouldn't it? If not, where would the errors come from?
You're right, fragmentation can be avoided through the file system and is NOT a fault of the hardware. HOWEVER, anytime you have hardware motion involved (like in a hard disk drive), the frequency of error is exponentially higher. This is why flash memory is almost completely free of errors due to fragmentation.
TechPro wrote:
I see you don't have much experience repairing end users' computers or you would have seen instances where high fragmentation leads to errors (not just delays)... so I'll let that slide.

Short of a mechanism explaining how these errors arise, this isn't very convincing. What kind of errors are you talking about anyway? And on which file systems?