Korean Job Discussion Forums "The Internet's Meeting Place for ESL/EFL Teachers from Around the World!"
Poll: How often does your computer crash?
I use a Mac and it never crashes. - 25% [ 11 ]
I use a PC running Windows Vista and it crashes about once a day. - 2% [ 1 ]
I use a PC running Windows Vista and it crashes once a week or less. - 2% [ 1 ]
I use a PC running Windows Vista and I can't remember the last time it crashed. - 16% [ 7 ]
I use a PC running Windows XP and it crashes about once a day. - 0% [ 0 ]
I use a PC running Windows XP and it crashes once a week or less. - 2% [ 1 ]
I use a PC running Windows XP and I can't remember the last time it crashed. - 39% [ 17 ]
I use a PC running Linux and it crashes about once a day. - 0% [ 0 ]
I use a PC running Linux and it crashes once a week or less. - 0% [ 0 ]
I use a PC running Linux and I can't remember the last time it crashed. - 11% [ 5 ]
Total Votes : 43

idonojacs
Joined: 07 Jun 2007
Posted: Mon Oct 01, 2007 3:42 am
Since the subject of defragmenting a Windows hard drive came up earlier, and since I happened to look up my defrag program in response to another thread, I decided, what the heck, I might as well defrag my drive.
I used Auslogics freeware to defrag my C drive. I had not defragged in about 45 days. It took about four minutes. Here is the report:
Auslogics Disk Defrag Version 1.1.5.225 (diskdefrag.exe)
Disk Size 27.18 GB
Free Size 16.87 GB
Clusters 7124595
Sectors per cluster 8
Bytes per sector 512
Started defragmentation at 10/1/2007 8:50:34 PM
Completed defragmentation at 10/1/2007 8:54:54 PM
Elapsed time 00:04:20
Total Files 77275
Total Directories 10385
Fragmented File Count 1805
Defragmented File Count 1803
Skipped File Count 2
Fragmentation Before 1.20%
Fragmentation After 0.28%
The key piece of information is that the drive was 1.2 percent fragmented before the defrag. In other words, it did not need to be defragged. Why? Because I reserve my C drive for my OS and some basic programs. I put some other large programs, such as an encyclopedia, on the D drive, and data on the E drive. I might defrag D once a year. There is no particular need to defrag a data drive. Might do it in two years.
My Web browser caches are on the C drive, I believe. This would account for at least some of the fragmentation.
Frankly, the whole defrag issue is way overblown. It doesn't hurt to do it now and then, say once every month or two. Doing it every week is nuts. Once a year is probably enough for most people. Generally, you aren't going to notice much difference. But it supposedly makes life easier for your hard drive. I don't like to burn out hard drives.
Oh, and if your hard drive gets too full, you won't be able to defrag until you delete some files - and empty the trash. The system needs space to put your files temporarily while it reorganizes the disk.
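To make that last point concrete, here is a rough sketch (standard-library Python only) of the kind of free-space check you would want to pass before bothering with a defrag. The 15 percent threshold is a made-up placeholder for illustration, not a figure documented by any particular defragmenter.

Code:
import shutil

def free_space_report(path="C:\\", min_free_fraction=0.15):
    """Print how much free space a volume has and say whether it clears
    an assumed threshold a defragmenter would want for shuffling files.
    The 0.15 (15%) figure is a hypothetical placeholder, not a documented
    requirement of any particular tool."""
    usage = shutil.disk_usage(path)   # named tuple: total, used, free (bytes)
    free_fraction = usage.free / usage.total
    print(f"{path}: {usage.free / 2**30:.2f} GB free of "
          f"{usage.total / 2**30:.2f} GB ({free_fraction:.0%} free)")
    return free_fraction >= min_free_fraction

if free_space_report():
    print("Plenty of room for a defrag run.")
else:
    print("Delete some files and empty the trash first.")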

eamo
Joined: 08 Mar 2003, Location: Shepherd's Bush, 1964.
Posted: Mon Oct 01, 2007 4:43 am
I wouldn't have thought that a weekly defrag would have much, if any, effect on the health of a hard drive.
MS have set a once-weekly defrag as the default in Vista. You can turn that off or change it, of course, but it shows that Microsoft see a weekly defrag as being beneficial.
I remember reading an article on a tech site about a study Google did. They kept track of how much work each of the numerous HDDs they use in their servers did. Surprisingly, the HDDs which worked harder lasted longer. The highest failure rate was among the lightly used HDDs. Just an anecdote, but I think it shows that doing a defrag, which should only really take about an hour, once per week, isn't going to thrash your HDD to death.

rocklee
Joined: 04 Oct 2005, Location: Seoul
Posted: Mon Oct 01, 2007 5:35 am
You don't want to defrag too often; it will WEAR your hard disk out.

blackjack
Joined: 04 Jan 2006, Location: anyang
Posted: Mon Oct 01, 2007 5:36 am
I have an old laptop running Windows 95 with a whole 32 MB of RAM and a 1 GB hard drive (I use it as a second computer when I have to). It runs fine and has been doing so since I got it in '97. I have never changed or updated the operating system. The only problem I had with it was when I tried to run a 2002 version of SAS on it (a stats programme).
There is a reason why 95% of people use Windows. It is easy and simple; for 90% of users it is what they need. Sure, a European car might break down less, but a Japanese car is half the price, and if it does break down it is going to cost half as much to fix.

Demophobe
Joined: 17 May 2004
Posted: Mon Oct 01, 2007 11:07 pm
rocklee wrote:
You don't want to defrag too often; it will WEAR your hard disk out.

Not true.

Demophobe
Joined: 17 May 2004
Posted: Mon Oct 01, 2007 11:13 pm
idonojacs wrote:
Since the subject of defragmenting a Windows hard drive came up earlier, and since I happened to look up my defrag program in response to another thread, I decided, what the heck, I might as well defrag my drive. [...]

Sorry, but one of three things is going on here:
1. The freeware you use is absolute garbage.
2. You don't use your computer.
3. You are out of your mind.

idonojacs
Joined: 07 Jun 2007
Posted: Tue Oct 02, 2007 12:23 am
Demophobe wrote:
Sorry, but one of three things is going on here:
1. The freeware you use is absolute garbage.
2. You don't use your computer.
3. You are out of your mind.

Why?
_____________
Could you be more specific, please?

idonojacs
Joined: 07 Jun 2007
Posted: Tue Oct 02, 2007 2:05 am
Apparently there may still be some people who do not understand disk fragmentation.
I decided to do a bit more reading. I started with my defrag program and found this nice summary of the issue:
Quote:
Defragmentation Explained
Fragmentation is caused by creating and deleting files and folders, installing new software, and downloading files from the Internet. Computers do not necessarily save an entire file or folder in a single space on a disk; they're saved in the first available space. After a large portion of a disk has been used, most of the subsequent files and folders are saved in pieces across the volume.
When you delete files or folders, the empty spaces left behind are filled in randomly as you store new ones. This is how fragmentation occurs. The more fragmented the volume is, the slower the computer's file input and output performance will be.
Defragmentation is the process of rewriting non-contiguous parts of a file to contiguous sectors on a disk for the purpose of increasing data access and retrieval speeds. Because FAT and NTFS disks can deteriorate and become badly fragmented over time, defragmentation is vital for optimal system performance.
In June 1999 the ABR Corporation of Irvine, California performed a fragmentation analysis and found that, out of 100 corporate offices that were not using a defragmenter, 50 percent of the respondents had server files with 2,000 to 10,000 fragments. In all cases the results were the same: servers and workstations experienced a significant degradation in performance.
Auslogics Disk Defrag is a compact, manual defragmentation tool that supports FAT 16, FAT 32, and NTFS (with compressed and encrypted files). For the individual user, Disk Defrag is more than adequate for the job of maintaining high-level disk performance.
Then I turned to Wikipedia and found several articles. Let's start with the section titled "Myths."
http://en.wikipedia.org/wiki/Defragmentation
Quote:
Myths
Defragging the disk will not stop a system from malfunctioning or crashing because the filesystem is designed to work with fragmented files.[3] Since defrag cannot be run on a filesystem marked as dirty without first running chkdsk[4], a user who intends to run defrag "to fix a system acting strangely" often ends up running chkdsk, which repairs file system errors, the end result of which may mislead the user into thinking that defrag fixed the problem when it was actually fixed by chkdsk.
In fact, in a modern multi-user operating system, an ordinary user cannot defragment the system disks since superuser access is required to move system files. Additionally, file systems such as NTFS (and most Unix/Linux filesystems) are designed to decrease the likelihood of fragmentation.[5][6] Improvements in modern hard drives such as RAM cache, faster platter rotation speed, and greater data density reduce the negative impact of fragmentation on system performance to some degree, though increases in commonly used data quantities offset those benefits. However, modern systems profit enormously from the huge disk capacities currently available, since partially filled disks fragment much less than full disks.[7] In any case, these limitations of defragmentation have led to design decisions in modern operating systems like Windows Vista to automatically defragment in a background process but not to attempt to defragment a volume 100% because doing so would only produce negligible performance gains.[8]
It should be noted especially that if your drive is not crowded, you are not likely to get much fragmentation, since the files can easily be written contiguously.
Also, fragmentation of files occurs when other files are added or deleted, creating a Swiss cheese effect of holes amidst the used sectors. Squeeze new files into those holes and you get fragmented files.
But if you don't go adding and deleting files after you defrag, you could use your computer for a thousand years and the disk is not going to get fragmented, at least theoretically. Simply using your computer does create some temp files and other data. But it is not, in itself, going to fragment existing files.
So, to repeat, if you install your OS, install all your programs, do your updates, clean up the disk by emptying the trash, etc., and then defrag your disk,* those files will stay defragged. Once you put a program, such as Windows, on your drive, it does not move files around on its own (unless you have auto defrag running, which I do not).
Now, if you do not put data files on the same drive as your program files, the program files are not, all things being equal, going to get fragmented. If you install updates, some fragmentation will occur. Windows does updates. I installed updates in the past month. I had 1.2 percent fragmentation, which is insignificant.
As to data files, you put them on your drive. If there is plenty of room on the drive, they won't be fragmented, I don't think. Even if they are, who cares? Unless you load some mighty large files on a daily basis, I don't see that it would make any noticeable difference. Most data files are not opened more than once a week or month, or maybe year. BUT, I guess it would be good to defrag your data disk periodically to clear up contiguous space so that new files will be recorded contiguously. Get it? Not a big deal, though, with these modern, enormous hard drives. Which is another reason to subdivide them into smaller partitions. Who in their right mind wants to defrag a 500 GB drive once a month?
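If the Swiss cheese idea isn't clear, here is a toy sketch, purely illustrative and not taken from Auslogics or Windows, of a first-fit allocator on a pretend 30-block disk. The block counts are made up. Delete a couple of files, write a bigger one, and it comes out in pieces:

Code:
# Toy first-fit allocator: deleting files leaves holes ("Swiss cheese"),
# and a later, larger file gets squeezed into them in pieces.
def allocate(disk, name, size):
    """Write `size` blocks of `name` into free slots (None), first-fit.
    Returns how many separate pieces (extents) the file ended up in."""
    placed, extents, i = 0, 0, 0
    while placed < size and i < len(disk):
        if disk[i] is None:
            extents += 1
            while i < len(disk) and disk[i] is None and placed < size:
                disk[i] = name
                placed += 1
                i += 1
        else:
            i += 1
    return extents

def delete(disk, name):
    for i, block in enumerate(disk):
        if block == name:
            disk[i] = None

disk = [None] * 30                       # a pretend 30-block disk
for f in "ABCDE":
    allocate(disk, f, 5)                 # five contiguous 5-block files
delete(disk, "B")                        # punch holes: Swiss cheese begins
delete(disk, "D")
pieces = allocate(disk, "F", 12)         # F has to fill the holes plus the tail
print("".join(b or "." for b in disk))   # AAAAAFFFFFCCCCCFFFFFEEEEEFF...
print(f"F was written in {pieces} fragment(s)")   # 3

The emptier the pretend disk, the fewer pieces the new file lands in, which is the same reason a half-empty real drive barely fragments.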
So, just for the record:
1) I do use my computer. WTF do you think I am writing on right now, Western Union?
2) Here are reviews for Auslogics. Judge for yourself:
http://www.download.com/Auslogics-Disk-Defrag/3000-2086-10567503.html
3) As for being out of my mind, anyone who would spend their free time writing long treatises on disk fragmentation on a Web forum like Dave's where absolutely no one in their right mind actually cares must be significantly off kilter. So I must plead guilty.
However, I am not the only one out of my mind. So I would like to enter into the record documents from wikipedia:
http://en.wikipedia.org/wiki/Disk_fragmentation
Quote:
File system fragmentation
From Wikipedia, the free encyclopedia
(Redirected from Disk fragmentation)
In computing, file system fragmentation, sometimes called file system aging, is the inability of a file system to lay out related data sequentially (contiguously), an inherent phenomenon in storage-backed file systems that allow in-place modification of their contents. It is a special case of data fragmentation. File system fragmentation increases disk head movement or seeks, which are known to hinder throughput. The correction to existing fragmentation is to compress files and free space back into contiguous areas, a process called defragmentation.
Why fragmentation occurs
When a file system is first initialized on a partition (the partition is formatted for the file system), the entire space allotted is empty.[1] This means that the allocator algorithm is completely free to position newly created files anywhere on the disk. For some time after creation, files on the file system can be laid out near-optimally. When the operating system and applications are installed or other archives are unpacked, laying out separate files sequentially also means that related files are likely to be positioned close to each other.
However, as existing files are deleted or truncated, new regions of free space are created. When existing files are appended to, it is often impossible to resume the write exactly where the file used to end, as another file may already be allocated there; thus, a new fragment has to be allocated. As time goes on, and the same factors are continuously present, free space as well as frequently appended files tend to fragment more. Shorter regions of free space also mean that the allocator is no longer able to allocate new files contiguously, and has to break them into fragments. This is especially true when the file system is more full: longer contiguous regions of free space are less likely to occur.
Consider the following scenario, as shown by the image on the right:
A blank disk has 5 files, A, B, C, D and E each using 10 blocks of space (for this section, a block is an allocation unit of that system, it could be 1K, 100K or 1 megabyte and is not any specific size). On a blank disk, all of these files will be allocated one after the other. (Example (1) on the image.) If file B is deleted, there are two options, leave the space for B empty and use it again later, or compress all the files after B so that the empty space follows it. This could be time consuming if there were hundreds or thousands of files which needed to be moved, so in general the empty space is simply left there, marked in a table as available for later use, then used again as needed.[2] (Example (2) on the image.) Now, if a new file, F, is allocated 7 blocks of space, it can be placed into the first 7 blocks of the space formerly holding the file B and the 3 blocks following it will remain available. (Example (3) on the image.) If another new file, G is added, and needs only three blocks, it could then occupy the space after F and before C. (Example (4) on the image). Now, if subsequently F needs to be expanded, since the space immediately following it is no longer available, there are two options: (1) add a new block somewhere else and indicate that F has a second extent, or (2) move the file F to someplace else where it can be created as one contiguous file of the new, larger size. The latter operation may not be possible as the file may be larger than any one contiguous space available, or the file conceivably could be so large the operation would take an undesirably long period of time, thus the usual practice is simply to create an extent somewhere else and chain the new extent onto the old one. (Example (5) on the image.) Repeat this practice hundreds or thousands of times and eventually the file system has many free segments in many places and many files may be spread over many extents. If, as a result of free space fragmentation, a newly created file (or a file which has been extended) has to be placed in a large number of extents, access time for that file (or for all files) may become excessively long.
To summarize, factors that typically cause or facilitate fragmentation, include:
low free space.
frequent deletion, truncation or extension of files.
overuse of sparse files.
Performance implications
File system fragmentation is projected to become more problematic with newer hardware due to the increasing disparity between sequential access speed and rotational delay (and to a lesser extent seek time), of consumer-grade hard disks,[3] which file systems are usually placed on. Thus, fragmentation is an important problem in recent file system research and design. The containment of fragmentation not only depends on the on-disk format of the file system, but also heavily on its implementation.[4]
In simple file system benchmarks, the fragmentation factor is often omitted, as realistic aging and fragmentation is difficult to model. Rather, for simplicity of comparison, file system benchmarks are often run on empty file systems, and unsurprisingly, the results may vary heavily from real-life access patterns.[5]
Types of fragmentation
File system fragmentation may occur on several levels:
Fragmentation within individual files and their metadata.
Free space fragmentation, making it increasingly difficult to lay out new files contiguously.
The decrease of locality of reference between separate, but related files.
File fragmentation
Individual file fragmentation occurs when a single file has been broken into multiple pieces (called extents on extent-based file systems). While disk file systems attempt to keep individual files contiguous, this is not often possible without significant performance penalties. File system check and defragmentation tools typically only account for file fragmentation in their "fragmentation percentage" statistic.
Free space fragmentation
Free (unallocated) space fragmentation occurs when there are several unused areas of the file system where new files or metadata can be written to. Unwanted free space fragmentation is generally caused by deletion or truncation of files, but file systems may also intentionally insert fragments ("bubbles") of free space in order to facilitate extending nearby files (see proactive techniques below).
Related file fragmentation
Related file fragmentation, also called application-level (file) fragmentation, refers to the lack of locality of reference between related files. Unlike the previous two types of fragmentation, related file fragmentation is a much more vague concept, as it heavily depends on the access pattern of specific applications. This also makes objectively measuring or estimating it very difficult. However, arguably, it is the most critical type of fragmentation, as studies have found that the most frequently accessed files tend to be small compared to available disk throughput per second.[6]
To avoid related file fragmentation and improve locality of reference, assumptions about the operation of applications have to be made. A very frequent assumption made is that it is worthwhile to keep smaller files within a single directory together, and lay them out in the natural file system order. While it is often a reasonable assumption, it does not always hold. For example, an application might read several different files, perhaps in different directories, in the exact same order they were written. Thus, a file system that simply orders all writes successively, might work faster for the given application.
Techniques for mitigating fragmentation
Several techniques have been developed to fight fragmentation. They can usually be classified into two categories: proactive and retroactive. Due to the hard predictability of access patterns, these techniques are most often heuristic in nature, and may degrade performance under unexpected workloads.
Proactive techniques
Proactive techniques attempt to keep fragmentation at a minimum at the time data is being written on the disk. The simplest of such is, perhaps, appending data to an existing fragment in place where possible, instead of allocating new blocks to a new fragment.
Many of today's file systems attempt to preallocate longer chunks, or chunks from different free space fragments, called extents to files that are actively appended to. This mainly avoids file fragmentation when several files are concurrently being appended to, thus avoiding them from becoming excessively intertwined.[4]
A relatively recent technique is delayed allocation in XFS and ZFS; the same technique is also called allocate-on-flush in reiser4 and ext4. This means that when the file system is being written to, file system blocks are reserved, but the locations of specific files are not laid down yet. Later, when the file system is forced to flush changes as a result of memory pressure or a transaction commit, the allocator will have much better knowledge of the files' characteristics. Most file systems with this approach try to flush files in a single directory contiguously. Assuming that multiple reads from a single directory are common, locality of reference is improved.[7] Reiser4 also orders the layout of files according to the directory hash table, so that when files are being accessed in the natural file system order (as dictated by readdir), they are always read sequentially.[8]
Bittorrent and other peer-to-peer filesharing clients have an "Antifragmentation" feature that allocates the full space needed for a file when initiating downloads.
Retroactive techniques
Retroactive techniques attempt to reduce fragmentation, or the negative effects of fragmentation, after it has occurred. Many file systems provide defragmentation tools, which attempt to reorder fragments of files, and often also increase locality of reference by keeping smaller files in directories, or directory trees, close to each other on the disk.
The HFS Plus file system transparently defragments files that are less than 20 MiB in size and are broken into 8 or more fragments, when the file is being opened.[9]
See also
Fragmentation
Defragmentation
File system
Locality of reference
Notes and references
^ The partition is not completely empty: some internal file system structures are always created. However, these are typically contiguous, and their existence is negligible. Some file systems, such as NTFS and ext2+, might also preallocate empty contiguous regions for special purposes.
^ The practice of leaving the empty space behind after a file is deleted, marked in a table as available for later use, then used again as needed is why undelete programs were able to work, they simply recovered the file whose name had been deleted from the directory, but the contents were still on disk.
^ Dr. Mark H. Kryder (2006-04-03). "Future Storage Technologies: A Look Beyond the Horizon" (PDF). Storage Networking World conference, Seagate Technology. Retrieved on 2006-12-14.
^ a b L. W. McVoy, S. R. Kleiman (1991 winter). "Extent-like Performance from a UNIX File System" (PostScript). Proceedings of USENIX winter '91: pages 33-43, Dallas, Texas: Sun Microsystems, Inc. Retrieved on 2006-12-14.
^ Keith Arnold Smith (2001-01). "Workload-Specific File System Benchmarks" (PDF). Harvard University. Retrieved on 2006-12-14.
^ John R. Douceur, William J. Bolosky (1999-06). "A Large-Scale Study of File-System Contents" (PDF). ACM SIGMETRICS Performance Evaluation Review volume 27 (issue 1): pages 59-70. ISSN 0163-5999. Retrieved on 2006-12-14.
^ Adam Sweeney, Doug Doucette, Wei Hu, Curtis Anderson, Mike Nishimoto, Geoff Peck (1996-01). "Scalability in the XFS File System" (PDF). Proceedings of the USENIX 1996 Annual Technical Conference, San Diego, California: Silicon Graphics. Retrieved on 2006-12-14.
^ Hans Reiser (2006-02-06). The Reiser4 Filesystem (Google Video). A lecture given by the author, Hans Reiser. Retrieved on 2006-12-14.
^ Amit Singh (2006-06-19). "The HFS Plus File System", Mac OS X Internals: A Systems Approach. Addison Wesley.
http://en.wikipedia.org/wiki/Defragmentation
Quote:
Defragmentation
From Wikipedia, the free encyclopedia
"Defrag" redirects here. For the Quake 3 Arena modification, see DeFRaG.
In the context of administering computer systems, defragmentation is a process that reduces the amount of fragmentation in file systems. It does this by physically reorganizing the contents of the disk to store the pieces of each file close together and contiguously. It also attempts to create larger regions of free space using compaction to impede the return of fragmentation. Some defragmenters also try to keep smaller files within a single directory together, as they are often accessed in sequence. According to a survey, 42% of PC users fail to defrag their system regularly, which adversely affects system performance.[1]
Aims of defragmentation
Reading and writing data on a heavily fragmented file system is slowed down as the time needed for the disk heads to move between fragments and waiting for the disk platter to rotate into position is increased (see seek time and rotational delay). For many common operations, the performance bottleneck of the entire computer is the hard disk; thus the desire to process more efficiently encourages defragmentation. Operating system vendors often recommend periodic defragmentation to keep disk access speed from degrading over time.
Fragmented data also spreads over more of the disk than it needs to. Thus, one may defragment to gather data together in one area, before splitting a single partition into two or more partitions (for example, with GNU Parted, or PartitionMagic).
Defragmenting may help people to increase the life-span of the hard drive itself, by minimizing head movement and simplifying data access operations.[citation needed]
Causes and cures
Fragmentation occurs when the operating system cannot or will not allocate enough contiguous space to store a complete file as a unit, but instead puts parts of it in gaps between other files (usually those gaps exist because they formerly held a file that the operating system has subsequently deleted or because the operating system allocated excess space for the file in the first place). Larger files and greater numbers of files also contribute to fragmentation and consequent performance loss. Defragmentation attempts to alleviate these problems.
Consider the following scenario, as shown by the image on the right:
An otherwise blank disk has 5 files, A, B, C, D and E each using 10 blocks of space (for this section, a block is an allocation unit of that system, it could be 1K, 100K or 1 megabyte and is not any specific size). On a blank disk, all of these files will be allocated one after the other. (Example (1) on the image.) If file B is deleted, there are two options, leave the space for B empty and use it again later, or compress all the files after B so that the empty space follows it. This could be time consuming if there were hundreds or thousands of files which needed to be moved, so in general the empty space is simply left there, marked in a table as available for later use, then used again as needed.[2] (Example (2) on the image.) Now, if a new file, F, is allocated 7 blocks of space, it can be placed into the first 7 blocks of the space formerly holding the file B and the 3 blocks following it will remain available. (Example (3) on the image.) If another new file, G is added, and needs only three blocks, it could then occupy the space after F and before C. (Example (4) on the image). Now, if subsequently F needs to be expanded, since the space immediately following it is no longer available, there are two options: (1) add a new block somewhere else and indicate that F has a second extent, or (2) move the file F to someplace else where it can be created as one contiguous file of the new, larger size. The latter operation may not be possible as the file may be larger than any one contiguous space available, or the file conceivably could be so large the operation would take an undesirably long period of time, thus the usual practice is simply to create an extent somewhere else and chain the new extent onto the old one. (Example (5) on the image.) Repeat this practice hundreds or thousands of times and eventually the file system has many free segments in many places and many files may be spread over many extents. If, as a result of free space fragmentation, a newly created file (or a file which has been extended) has to be placed in a large number of extents, access time for that file (or for all files) may become excessively long.
The process of creating new files, and of deleting and expanding existing files, may sometimes be colloquially referred to as churn, and can occur at both the level of the general root file system, but in subdirectories as well. Fragmentation not only occurs at the level of individual files, but also when different files in a directory (and maybe its subdirectories), that are often read in a sequence, start to "drift apart" as a result of "churn".
A defragmentation program must move files around within the free space available to undo fragmentation. This is a memory intensive operation and cannot be performed on a file system with no free space. The reorganization involved in defragmentation does not change logical location of the files (defined as their location within the directory structure).
Another common strategy to optimize defragmentation and to reduce the impact of fragmentation is to partition the hard disk(s) in a way that separates partitions of the file system that experience many more reads than writes from the more volatile zones where files are created and deleted frequently. In Microsoft Windows, the contents of directories such as "\Program Files" or "\Windows" are modified far less frequently than they are read. The directories that contain the users' profiles are modified constantly (especially with the Temp directory and Internet Explorer cache creating thousands of files that are deleted in a few days). If files from user profiles are held on a dedicated partition (as is commonly done on UNIX systems), the defragmenter runs better since it does not need to deal with all the static files from other directories. For partitions with relatively little write activity, defragmentation performance greatly improves after the first defragmentation, since the defragmenter will need to defrag only a small number of new files in the future.
Defragmentation issues
The presence of immovable system files, especially a swap file, can impede defragmentation. These files can be safely moved when the operating system is not in use. For example, ntfsresize moves these files to resize an NTFS partition.
All files with read-only attributes are immovable if the defragger is not run with administrative rights. While the system files are correctly read-only, most computers today contain many inappropriately read-only files. When copying from a CD all the copied files retain the read-only attribute. These immovable files will interfere with defrag operations. Unsetting all the read only flags can be accomplished in MS-DOS and Windows with the command "attrib -R /S /D * " which will not impact files marked with the system attribute.
On systems without fragmentation resistance, fragmentation builds upon itself when left unhandled, so periodic defragmentation is necessary to keep the disk performance at peak and avoid the excess overhead of less frequent defragmentation.
Myths
Defragging the disk will not stop a system from malfunctioning or crashing because the filesystem is designed to work with fragmented files.[3] Since defrag cannot be run on a filesystem marked as dirty without first running chkdsk[4], a user who intends to run defrag "to fix a system acting strangely" often ends up running chkdsk, which repairs file system errors, the end result of which may mislead the user into thinking that defrag fixed the problem when it was actually fixed by chkdsk.
In fact, in a modern multi-user operating system, an ordinary user cannot defragment the system disks since superuser access is required to move system files. Additionally, file systems such as NTFS (and most Unix/Linux filesystems) are designed to decrease the likelihood of fragmentation.[5][6] Improvements in modern hard drives such as RAM cache, faster platter rotation speed, and greater data density reduce the negative impact of fragmentation on system performance to some degree, though increases in commonly used data quantities offset those benefits. However, modern systems profit enormously from the huge disk capacities currently available, since partially filled disks fragment much less than full disks.[7] In any case, these limitations of defragmentation have led to design decisions in modern operating systems like Windows Vista to automatically defragment in a background process but not to attempt to defragment a volume 100% because doing so would only produce negligible performance gains.[8]
Filesystems
FAT: DOS 6.x and Windows 9x-systems come with a defragmentation utility called Defrag. The DOS version is a limited version of Norton SpeedDisk[9], and the Windows version is licensed from Diskeeper.
NTFS: Windows 2000 and newer include a defragmentation tool based on Diskeeper. NT 4 and below do not have built-in defragmentation utilities. Unfortunately the integrated defragger does not consolidate free space. Thus a heavily fragmented drive with many small files may still have no large consecutive free space after defragmentation. So any new large file will instantly be split into small fragments with immediate impact on performance. This can happen even if the overall disk usage is less than 60%[10]
ext2 (Linux) uses an offline defragmenter called e2defrag, which does not work with its successor ext3, unless the ext3 filesystem is temporarily down-graded to ext2. Instead, a filesystem-independent defragmenter like Shake[1] may be used.
JFS has a defragfs utility on IBM operating systems.[citation needed]
HFS Plus in 1998 introduced a number of optimizations to the allocation algorithms in an attempt to defragment files while they're being accessed without a separate defragmenter.[citation needed]
XFS provides an online defragmentation utility called xfs_fsr[citation needed].
See also
Fragmentation
File system fragmentation
Optimization software
Commercial utilities for NTFS-based Windows systems:
Diskeeper
O&O Defrag
PerfectDisk
Vopt
Windows Disk Defragmenter
Freeware utilities for NTFS-based Windows systems:
Contig, a command line utility
JkDefrag, the first free and open source defrag utility for Windows (under GPL).
PageDefrag, a boot-time utility
UltraDefrag - free defrag utility for Windows (under GPL).
References
^ "42% of PC Users Fail to Defrag their Computers: Survey".
^ The practice of leaving the empty space behind after a file is deleted, marked in a table as available for later use, then used again as needed is why undelete programs were able to work, they simply recovered the file whose name had been deleted from the directory, but the contents were still on disk.
^ Defragmentation is not a solution to program or system crashes
^ Defrag cannot be run on file system marked as dirty until chkdsk is run
^ NTFS decreases the likelihood of fragmentation as compared to older file systems
^ UNIX filesystems tend to do a lot to prevent fragmentation
^ Modern hard drive improvements minimize negative impact of fragmentation
^ Windows Vista automatic defragmentation does not attempt to reach 100% defragmentation because that would not help system performance
^ Peter Norton's Complete Guide to DOS 6.22, page 521
^ See Windows XP Timesaving Techniques For Dummies, Second Edition page 456.
Third party products such as PerfectDisk offer free space consolidation
Sources
Norton, Peter (1994) Peter Norton's Complete Guide to DOS 6.22, page 521 - Sams (ISBN 067230614X)
Woody Leonhard, Justin Leonhard (2005) Windows XP Timesaving Techniques For Dummies, Second Edition page 456 - For Dummies (ISBN 0-764578-839).
Jensen, Craig (1994). Fragmentation: The Condition, the Cause, the Cure. Executive Software International (ISBN 0-9640049-0-9).
Dave Kleiman, Laura Hunter, Mahesh Satyanarayana, Kimon Andreou, Nancy G Altholz, Lawrence Abrams, Darren Windham, Tony Bradley and Brian Barber (2006) Winternals: Defragmentation, Recovery, and Administration Field Guide - Syngress (ISBN 1-597490-792)
Robb, Drew (2003) Server Disk Management in a Windows Environment, Chapter 7 - AUERBACH (ISBN 0849324327)
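One practical aside on the "Defragmentation issues" section quoted above: the read-only cleanup it does with attrib -R /S /D * can also be scripted. The sketch below is only an illustration (Python, files only, directories left alone), and the path at the bottom is a placeholder, so point it somewhere sensible before running anything like this.

Code:
import os
import stat

def clear_read_only(root):
    """Recursively clear the read-only attribute on files under `root`,
    roughly what `attrib -R /S` does (directories are left alone here).
    On Windows, os.chmod can only toggle the read-only flag, which is
    exactly the part we care about."""
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if not mode & stat.S_IWRITE:
                os.chmod(path, mode | stat.S_IWRITE)

# Placeholder path -- point it at the folder you copied from a CD:
# clear_read_only(r"D:\files_copied_from_cd")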
I hope this answers most people's questions, and that it will not be necessary to write another post on disk fragmentation, ever.
* Apparently, this is too terribly exhausting and complicated for Mac users to even read about. Just TOO MANY STEPS!
Last edited by idonojacs on Tue Oct 02, 2007 2:20 am; edited 1 time in total

rocklee
Joined: 04 Oct 2005, Location: Seoul
Posted: Tue Oct 02, 2007 2:19 am
idonojacs wrote:
My Web browser caches are on the C drive, I believe. This would account for at least some of the fragmentation.

It is always good practice to separate your system files from your personal stuff using partitions. Defragmenting would be a lot faster and easier too, as your log has indicated.
Quote:
Frankly, the whole defrag issue is way overblown. It doesn't hurt to do it now and then, say once every month or two. Doing it every week is nuts. Once a year is probably enough for most people. Generally, you aren't going to notice much difference. But it supposedly makes life easier for your hard drive. I don't like to burn out hard drives.

All true, and defragging is an intensive task since it involves moving files around on the hard disk to more convenient locations. Most modern hard disks can take the abuse, but like I said, doing it often is only going to reduce the life of the drives.

Demophobe
Joined: 17 May 2004
Posted: Tue Oct 02, 2007 5:08 pm
rocklee wrote:
I said doing it often is only going to reduce the life of the drives.

This is not true.

Henry VII
Joined: 14 Sep 2007, Location: Seoul
Posted: Wed Oct 03, 2007 6:09 pm
So, yes, PCs can get infected. But then if you go to Thailand and have unprotected promiscuous sex, how long will it be before you get infected? In the real world, we are expected to use common sense and act responsibly. Computers are part of the real world.
Wouldn't it be more responsible to get a computer with which you don't need to waste time downloading antivirus patches and the latest spyware updates? Everything is part of the real world. Unfortunately, too many people are unrealistic.

Thunndarr
Joined: 30 Sep 2003
Posted: Wed Oct 03, 2007 6:49 pm
Henry VII wrote:
So, yes, PCs can get infected. But then if you go to Thailand and have unprotected promiscuous sex, how long will it be before you get infected? In the real world, we are expected to use common sense and act responsibly. Computers are part of the real world.
Wouldn't it be more responsible to get a computer with which you don't need to waste time downloading antivirus patches and the latest spyware updates? Everything is part of the real world. Unfortunately, too many people are unrealistic.

Actually, I think it makes sense to get a computer that does what you need it to do. This can include such things as having the OS you prefer, the programs you want to run, and possibly compatibility with the other computers you might also have to work on. For most people, knowing one OS is plenty, if that OS is Windows. If you are a fan of Linux or OS X (or whatever), that's great, but the odds are that you still spend significant amounts of time using Windows computers (at work, in internet cafes, or wherever). To me, having to know more than one operating system would be a waste of my time. (And yes, I know it's not exactly rocket science to operate a Mac, but any time I have to spend fiddling around learning how to do something that I can easily do on my PC would be, to me, wasted time. And it would probably take up much more time than the time I spend updating my anti-virus, which is literally seconds a year.)

Thunndarr
Joined: 30 Sep 2003
Posted: Wed Oct 03, 2007 7:06 pm
Another thing (and I don't want to come off as anti-Mac; I honestly don't care either way): doesn't owning a Mac end up being more expensive over time?
I mean, let's say you want a Mac. You go out and buy one. At the time you get it, let's say you spend around $1500 for a machine with very good performance.
Now, me, being a cheap bastard and also a PC guy, will go out and get a moderately priced PC. Let's say I spend $600. Your Mac will most likely be quite a bit faster (at this point in time.)
After a year or so, however, I'm unsatisfied with my PC. So, I upgrade the CPU for $200. My PC is now as fast as your Mac after one year. The next year, I'm again unhappy with my computer, so I upgrade the video card for another $150. After two years, a Mac will be significantly slower than a new computer, whereas my budget PC will still be middle of the road and chugging along happily.
Now, I'm typing this all off the top of my head, and maybe I am completely off base, but I have always heard that Macs are more or less un-upgradeable. So, if my choice is making piecemeal upgrades of budget parts and keeping my computer fairly middle of the road on a constant basis, or paying a premium for a new model that will get relatively slower over the course of its lifetime, well, to me the choice is obvious.