The SSD megathread.
- Krom
Some of the regulars here have probably heard a few of us talking/bragging about Solid State Drives for the last year or so, and if you haven't heard it enough, or even if you have, this is the thread for you. I'm going to answer some common questions about SSDs, and explain some aspects of how hard drives work and why SSDs are often better.
First, let's talk about Hard Drives.
Hard drives are fairly consistent devices because of their mechanical nature. A common desktop hard drive has one or more disks that spin at a constant 7200 revolutions per minute (that's 120 revolutions per second). The disks are coated with a magnetic surface, and that surface is divided into small blocks called sectors. On modern drives sectors are 4096 bytes (4 KB); on older drives they were 512 bytes. A sector always occupies the same amount of physical space on the disk regardless of where on the disk it sits. Sectors are laid out along concentric tracks, starting at the outermost edge of the disk and working their way in toward the center (which is why hard drives are faster at the start than at the end: more sectors fit on the outer tracks than on the inner ones). The sector is the smallest unit a hard drive understands and can operate on, which means you cannot read or write individual bits; you can only read or write whole sectors. The drive keeps track of the sectors with a simple number, starting at 0 and counting up to as many sectors as the drive has. Floating above the disk is an actuator arm with the drive heads at its tip; the heads are responsible for reading from and writing to the magnetic coating on the disk. The heads require a brief but fairly predictable time to seek to the location of some data on the disk, and typically a bit more time after that for the disk to rotate until the correct data passes beneath them. This combined delay is called the access latency; on a typical desktop hard drive it is 12-14 milliseconds.
Once a hard drive has seeked to a location, it can easily average over 100 MB/sec reading or writing, since that is roughly how fast the data moves beneath the heads in a modern drive. This is great if you are reading or writing a big piece of data like 100+ MB, but it can be terrible if you only want a single sector. The reason is that the access latency is the same regardless of the size of the data to be read or written, and a single 4 KB sector only requires a tiny fraction of a millisecond to transfer. At 100 MB/sec, it requires only 0.0390625 milliseconds to completely read or write a single 4 KB sector. If you want the next 4 KB sector right behind it (a sequential operation), that costs only another 0.0390625 milliseconds; but if you want a sector somewhere else on the disk, it can cost the whole 12-14 ms access latency again before the transfer even starts. As a result, asking for one big 100 MB chunk may easily reach 100 MB/sec, while asking for hundreds of little 4 KB to 128 KB chunks can drop the effective rate to as little as 1 MB/sec. When you are reading or writing scattered individual sectors, the vast majority of the time is spent seeking around and waiting for the heads and disk to line up rather than actually doing the work of reading or writing the data.
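To make the arithmetic concrete, here is a back-of-envelope sketch in Python. The 13 ms latency and 100 MB/sec rate are the rough figures from above (not measurements of any particular drive), and 1 MB is counted as 1,048,576 bytes, which is what makes the 4 KB transfer come out to exactly 0.0390625 ms:

```python
SEEK_MS = 13.0      # assumed average access latency per seek
RATE_MBS = 100.0    # assumed sustained transfer rate once in position

def transfer_ms(kb):
    """Time to move `kb` kilobytes once the heads are lined up."""
    return kb / 1024 / RATE_MBS * 1000

# One big 100 MB sequential read: one seek, then pure transfer.
sequential = SEEK_MS + transfer_ms(100 * 1024)

# The same 100 MB as 25,600 scattered 4 KB reads: one full seek per read.
n = 100 * 1024 // 4
scattered = n * (SEEK_MS + transfer_ms(4))

print(f"sequential: {sequential:.0f} ms")      # ~1 second  -> ~99 MB/sec
print(f"scattered:  {scattered / 1000:.0f} s") # ~334 seconds -> ~0.3 MB/sec
```

The seek time utterly dominates the scattered case: 25,600 seeks at 13 ms apiece is over five minutes of doing nothing but waiting for the mechanics.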
Now we know that hard drives are efficient at handling large chunks of data, but inefficient at handling one or more small chunks of data. So what?
Each time your computer asks a drive to do something, it generates an event on the drive called an Input/Output Operation (an IOP), and a common method for benchmarking drives is the number of these operations the drive can perform in a second (IOPS). However, it's important to remember that an IOP isn't a fixed size (although 4 KB is the smallest practical size on modern file systems/drives, and benchmarks control the IOP sizes they generate). The system can request 1 byte from a drive, or it can request 1 million bytes; either way it's a single IOP. An IOP is simply a command like "go to sector X and read/write something" or "go to sector X and read/write all data until you reach sector Y". That is why it is also important to benchmark drives by their actual throughput in megabytes per second. Also keep in mind that an IOP does not equal reading or writing a file as a human would understand files: a single file larger than the 4 KB sector size can require more than one IOP to be read or written if it is fragmented. So when you want to read a file that is split into three fragments on a hard drive, the operating system will generate three IOPs to read it, which would come out like "read sector 123", "read sector 234", "read sector 345". Each IOP requires seeking the heads and waiting for the disk to rotate into the proper orientation, which brings us back to the inefficiency of small operations on a hard drive due to access latency (and that is why it's important to keep your hard drive defragmented).
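That three-fragment example can be sketched as a toy cost model. The 13 ms seek and 100 MB/sec rate are the same ballpark figures as before, and the fragment lists (sector numbers and run lengths) are invented purely for illustration:

```python
SEEK_MS = 13.0     # assumed average access latency paid per IOP
RATE_MBS = 100.0   # assumed sustained transfer rate
SECTOR_KB = 4

def read_cost_ms(fragments):
    """fragments: list of (start_sector, sector_count) runs on the disk.
    Each contiguous run costs one IOP, and each IOP pays a full seek."""
    sectors = sum(count for _, count in fragments)
    transfer = sectors * SECTOR_KB / 1024 / RATE_MBS * 1000
    return len(fragments) * SEEK_MS + transfer

contiguous = [(123, 300)]                           # one fragment -> one IOP
fragmented = [(123, 100), (234, 100), (345, 100)]   # three fragments -> three IOPs

print(f"contiguous: {read_cost_ms(contiguous):.1f} ms")
print(f"fragmented: {read_cost_ms(fragmented):.1f} ms")  # same data, two extra seeks
```

Same 1.2 MB of data either way; the fragmented layout simply pays the access latency two extra times, which is exactly what defragmenting avoids.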
This brings us to how your computer behaves. Like it or not, well over half the operations people do on their computers generate large numbers of those small and inefficient IOPs. Booting up your computer, loading most desktop applications, updating software, installing regular applications: the vast majority of these operations work in small IOPs, and there is nothing that can be done to change that. So, because hard drives are so inefficient at this, people started looking for a device that could handle those small IOPs faster, and the answer was the Solid State Drive.
Okay already, what is a SSD and why is it better than a hard drive?
Well, to put it simply, it is a drive made up of the same basic type of memory a USB thumb drive uses (although the memory in SSDs is manufactured to a much higher quality). SSDs store data on an array of flash memory chips; they have no spinning disks or heads on actuators, in fact they have no moving parts at all! This means that when you send an IOP to a SSD asking for a piece of data, it doesn't have to wait for any moving parts: the moment it looks up where the data is, it can retrieve it and send it to the system right then and there. SSDs still have an access latency, but it's usually measured in microseconds rather than milliseconds. That much shorter access latency means that even when dealing with a whole bunch of small and inefficient IOPs, SSDs can keep the actual data rate dramatically higher than a hard drive can. Basically they took the problem of access latency and disk "thrashing" and applied a simple but effective brute force solution. So with a SSD, when you boot up your computer, load an application, install or update something, or do anything else that causes those small inefficient IOPs, it happens MUCH faster because the rest of the system doesn't have to wait anywhere near as long for the drive to deliver. It isn't that SSDs don't thrash; they do thrash just like a hard drive, the difference is they thrash 300 times faster. The amount of time this can save is tremendous: you are probably so used to waiting for your PC as the hard drive thrashes away that you don't even realize that the accumulation of its 12-14 ms access latency is literally wasting hours of your time every week.
But SSDs are expensive...
Indeed, there is no way around this one: the flash memory used in SSDs is expensive compared to a hard drive. Fortunately the price has been steadily falling for the last few years; initially it was well over $5/GB, but now it's down to $0.80/GB, and mid-sized SSDs are starting to reach quite affordable levels. Sure, hard drives are still a lot cheaper at as little as $0.05/GB, but if you already have a well balanced modern computer or are planning on building one soon, it's absolutely worth every penny to include a SSD.
How reliable are SSDs?
People sometimes ask with some concern "Is a SSD as reliable as a hard drive?", and to that I usually reply "I certainly hope not!" For a technology that has been around for over 50 years, hard drives are surprisingly unreliable. If you were shopping for brand new cars and 1 in 20 would grind to a halt the moment you tried to drive it off the lot, you would be furious, but statistically speaking that is roughly the dead-on-arrival ratio for mechanical hard drives. Sure, the early SSDs from a couple years ago definitely had their issues, but like any good technology they have since largely grown out of them. (For the record, my brother has one of the first generation OCZ Vertex drives that still works and that he has been using every day for years. In the same time he has had to RMA a half dozen hard drives from the arrays he keeps around.) Giving up on moving parts has given SSDs a lot of room to push for higher reliability than mechanical hard drives, and as the industry has matured they have started capitalizing on that, so modern SSDs are very reliable.
I've heard that SSDs have a limited life span, wouldn't that be a problem for me?
It is true that the individual cells that make up the flash memory in SSDs wear out after they have been written (or "programmed") a certain number of times, at least 3000 times in a modern SSD; this is known as the Program/Erase (P/E) cycle endurance. 3000 might sound like a small number, but it is actually more than sufficient for any desktop PC user. SSDs use a feature called wear leveling to spread writes across the drive so that no cells wear out prematurely. Wear leveling works because SSDs actually have additional reserved capacity, called spare area, that you cannot directly access. Virtually all SSDs have spare area, typically around 6% for smaller drives (up to 200 GB) and 12% for larger drives (200 GB and over). SSDs use spare area as guaranteed free space for shuffling around writes to frequently updated files. Spare area also means SSDs behave vastly differently from hard drives on the inside, even though they present themselves to the operating system the same way. To your OS, a SSD is just like a hard drive: it starts at sector 0 and works its way up. Internally, however, a SSD can remap any "sector" to any physical location in its flash memory, and it can change these mappings as necessary.
Spare area is also important because of a difference in how flash memory is rewritten. Flash memory starts with the cell, the smallest part, which on consumer drives holds two bits of information. Cells are arranged into pages, which are typically 4 KB and work much like sectors on a hard drive; but unlike on hard drives, pages are further organized into blocks of typically 512 KB. When you update/overwrite a file on a mechanical hard drive, it can simply overwrite the old data in exactly the same location, but flash memory cannot do that. Flash memory has to be erased before it can be rewritten, and unfortunately individual pages cannot be erased, only whole blocks. So to update a single page in place, the controller would first have to read all the valid data in the block to a special cache, then erase the whole block, then write it all back with the updated data; the extra physical writing this causes is called "write amplification". Avoiding it is another use for spare area: when a file is updated, rather than updating it in place, the SSD simply writes the new data to a completely different page and marks the old page as invalid. Invalid pages eventually build up, so once a block is filled with mostly invalid data, it triggers the drive's aptly named "garbage collection" routine, which saves the valid data (sometimes to a different block entirely), erases the block, and adds it back to the spare area pool. Additionally, modern drives used with a supporting operating system are smart about using everything available as spare area: when you use Windows 7 or a recent Linux kernel on a TRIM-supporting SSD, anything that isn't filled with valid data gets applied to the spare area pool, including ALL the free space within the file system. (If you're wondering "Wouldn't that fragment the files and be bad?", the answer is "It would if SSDs were anything like a hard drive; fortunately they aren't." SSDs are so different from hard disks that they fragment themselves on purpose and actually work better that way!)
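Here is a deliberately tiny sketch of that out-of-place-write plus garbage-collection scheme. Everything about it is simplified: real drives use 128 or more pages per block and much smarter policies, and `TinyFTL` is an invented name, not any real controller's design:

```python
PAGES_PER_BLOCK = 4  # toy value; real blocks hold far more pages

class TinyFTL:
    """Toy flash translation layer: maps logical sectors to (block, page)."""
    def __init__(self, blocks):
        self.blocks = [[] for _ in range(blocks)]  # pages appended in write order
        self.mapping = {}                          # logical sector -> (block, page)

    def _block_with_room(self):
        return next(b for b, pages in enumerate(self.blocks)
                    if len(pages) < PAGES_PER_BLOCK)

    def write(self, sector, data):
        # Out-of-place write: put the data in any page with room and update
        # the map; any older copy is simply left behind as an invalid page.
        b = self._block_with_room()
        self.blocks[b].append((sector, data))
        self.mapping[sector] = (b, len(self.blocks[b]) - 1)

    def garbage_collect(self, b):
        # Keep only the pages the map still points at, erase the whole
        # block in one shot, then relocate the surviving data.
        valid = [entry for p, entry in enumerate(self.blocks[b])
                 if self.mapping.get(entry[0]) == (b, p)]
        self.blocks[b] = []
        for sector, data in valid:
            self.write(sector, data)

ftl = TinyFTL(blocks=2)
for i in range(4):
    ftl.write(0, f"rev{i}")   # overwrite "sector 0" four times
print(len(ftl.blocks[0]))     # 4 pages used, but 3 of them are stale
ftl.garbage_collect(0)
print(len(ftl.blocks[0]))     # 1 page: only the newest copy survived
```

Four overwrites of the same logical sector never erased anything; they just burned through free pages, and garbage collection later reclaimed the three stale ones in a single block erase.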
As for how long a SSD will actually last in practice... Fortunately for us, most recent SSDs keep track of their used lifespan. A couple months after I got mine, I decided to check how much endurance I had used up and how long it would take to use up what was left. I loaded up the SSD toolbox for my drive, checked the amount of data that had been written to it and the write amplification that had occurred, then calculated the remaining lifespan assuming it would last exactly 3000 P/E cycles. Once I arrived at a number, I promptly closed my calculator and the toolbox and never let a thought of endurance cross my mind again. The reason I gave up caring is that I simply couldn't imagine a scenario where I would still be using this drive in ~275 years (give or take a couple decades), when I would reach the projected 3000 P/E cycle endurance limit. Needless to say, many online discussions greatly inflate the importance of P/E cycle endurance. It is actually possible to wear out a SSD within a couple years if you dedicate a machine to constantly writing and rewriting garbage to the drive at its highest speed 24/7/365, but that is so astronomically beyond even the heaviest desktop workload that it instead becomes a good example of just how durable SSDs are.
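For anyone who wants to repeat that back-of-envelope estimate, the math is simple. All the inputs below are assumptions picked for illustration (a 120 GB drive, exactly 3000 P/E cycles, 10 GB of host writes per day, and a write amplification factor of 1.5), not my drive's actual numbers:

```python
capacity_gb = 120                 # assumed drive size
pe_cycles = 3000                  # assumed P/E endurance per cell
host_writes_gb_per_day = 10       # assumed (already heavy) daily workload
write_amplification = 1.5         # assumed ratio of NAND writes to host writes

# Total NAND writes the drive can absorb before the cells wear out,
# thanks to wear leveling spreading them evenly across all cells.
total_endurance_gb = capacity_gb * pe_cycles

nand_writes_per_day = host_writes_gb_per_day * write_amplification
years = total_endurance_gb / nand_writes_per_day / 365

print(f"projected lifespan: {years:.0f} years")  # ~66 years at this rate
```

Even with these pessimistic-for-a-desktop inputs the answer lands in the many-decades range, which is why the calculator got closed and never reopened.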
How much faster is a SSD compared to a hard drive?
A good modern desktop hard drive can usually thrash out a 1 MB/sec score in the 4 KB random access test, which is roughly 270 IOPS (I/O Operations Per Second). The high end SSDs these days can do the same task at over 350 MB/sec, or upwards of 85,000 IOPS. Even inexpensive entry level SSDs can reach 60 MB/sec, or around 15,000 IOPS, and they keep getting better all the time. So depending on the type of work the drive is doing, SSDs are anywhere from 5 times to 350 times faster than a common mechanical hard drive. Granted, it is necessary to keep this in perspective: not every time consuming or slow task your computer does is demanding on, or waiting for, the drive. SSDs will accelerate things like booting up, installing/uninstalling/updating/launching applications, virus scanning files, decompressing files, loading the next level in video games, and anything else that is I/O bound, typically right up until your system hits whatever other limiting factor it has (CPU, memory, etc). However, they will not accelerate things that aren't I/O bound, such as video game or movie playback frame rates (GPU/CPU bound), video/audio encoding and image editing/word processing/spreadsheets (CPU bound), or copying/moving stuff to or from non-SSD drives or other computers (limited by the other drive/computer or the connections between them). Your computer probably won't boot its operating system or load all applications instantly, but it probably will do the vast majority of it noticeably faster; how much depends on the computer and what it is loading. SSDs may not save much time during some common tasks, but they will save a lot of time between tasks, because they let you begin the next task much faster.
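The IOPS figures are just the throughput divided by the operation size. Here is the conversion, assuming 4096-byte operations and decimal megabytes; the exact results differ slightly from the rounded figures quoted above, which is normal since benchmark reports round and the MB convention varies:

```python
def iops(mb_per_sec, io_size=4096):
    """IOPS implied by a given throughput at a fixed operation size."""
    return mb_per_sec * 1_000_000 / io_size

print(f"hard drive (1 MB/s):    {iops(1):,.0f} IOPS")
print(f"high end SSD (350 MB/s): {iops(350):,.0f} IOPS")
print(f"entry SSD (60 MB/s):     {iops(60):,.0f} IOPS")
```

The ratio between the first and second lines is the "350 times faster" claim; divide any two throughputs at the same operation size and the IOPS ratio is identical.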
Why now?
At this point I wouldn't expect SSDs to get much faster for a while. The reason is the SATA connection that is most common: most drives are already reaching the interface limits of the SATA 6.0 Gbps channel and have no room left to grow. If you look at benchmarks of SSDs, you tend to notice that the performance of sometimes vastly different drives (different chipsets, firmware, capacity, etc) still tends to cluster in a few areas: mainly 550-560 MB/sec in sequential tests and 350 MB/sec in 4 KB random tests. The probable reason is that the SATA connection is the limit (4 KB tests generate more overhead than sequential tests, which explains why they are slower). Soon drives will no longer be able to differentiate themselves very well on performance, and since capacity is largely dependent on the price of the memory, which is pretty much the same between all vendors, there should be a lot more competition on reliability and other features as each manufacturer tries to distinguish itself, which means it's a good time to start watching the market.
How fast could SSDs really be without SATA 6.0 Gbps holding them back? Well, take my entry level SSD, which is actually one of the slower ones still on the market these days and is only a SATA 3.0 Gbps drive. Internally, each flash chip on my drive can deliver 160 MB/sec, which doesn't sound like *that* much considering hard drives are only a little slower and many newer SSDs can hit 500+ MB/sec; but like any other SSD, this one is an array of multiple chips, and my specific model is a 10 channel device with each channel capable of 160 MB/sec. So if you eliminated the controller-imposed and SATA 3.0 Gbps interface speed limitations, even my entry level drive could hit an absurd 1.6 gigabytes per second of sequential throughput (equal to an array of more than 15 conventional hard drives in RAID0). Perhaps not that surprisingly, even the first generation of SSDs had the potential to completely shatter the SATA 6.0 Gbps interface limits even though their flash was slower (although they didn't, because they were held back by comparatively slow controllers and other bottlenecks that hadn't been worked out yet).
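The aggregate-bandwidth claim is simple multiplication. The inputs here are the ones from the paragraph above plus one assumption of my own, a ~105 MB/sec sustained rate for a single desktop hard drive, and the RAID0 comparison naively assumes perfect scaling:

```python
channels = 10             # channels in the drive described above
mb_per_channel = 160      # MB/sec each channel can deliver
hdd_sustained = 105       # assumed MB/sec for one desktop hard drive

aggregate = channels * mb_per_channel   # raw internal bandwidth, MB/sec
hdds_equivalent = aggregate / hdd_sustained

print(f"{aggregate} MB/sec total")           # 1600 MB/sec = 1.6 GB/sec
print(f"~{hdds_equivalent:.0f} hard drives in RAID0 to match")
```

Against the roughly 600 MB/sec ceiling of SATA 6.0 Gbps, 1600 MB/sec of internal bandwidth makes it clear the interface, not the flash, is the wall.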
Is there anything else my computer would need before I add a SSD to it?
Potentially yes, if your computer has a slow CPU or doesn't have enough DRAM, you won't see the full benefits of using a SSD. You would have to fix the other bottlenecks of your system before you should invest in a SSD or you will just be waiting for a different component. It doesn't take a super fast or expensive system to extract the benefits of a SSD, but it does require a reasonably balanced one.
First, lets talk about Hard Drives.
Hard drives are fairly consistent devices because of their mechanical nature. A common desktop hard drive has one or more disks that spin at a constant speed of 7200 revolutions per minute (that's 120 revolutions per second), the disks are coated with a magnetic surface and the surface is divided into small blocks called sectors. On modern drives sectors are 4096 bytes (4 KB), on older drives they were 512 bytes and a sector always takes up the same amount of physical size on the disk regardless of where on the disk they are. Sectors are arranged in a spiral pattern, starting on the outer most edge of the disk and working their way in to the center (which is why hard drives are faster at the start than at the end, more sectors fit on the outer edge of the disk than on the inner edge). The sector is the smallest unit a hard drive understands and can operate on, which means you cannot read or write individual bits from/to the disk, you can only read/write whole sectors. The drive keeps track of the sectors with a simple number, starting at 0 and counting up to as many sectors as the drive has. Floating above the disk there is an actuator arm with the drive heads at the tip, the heads are responsible for writing to or reading from the magnetic coating on the disk. The heads require a brief but fairly predictable time to seek to a location of some data on the disk, and typically require a bit more time after that for the disk to rotate and the correct data to pass beneath them. This time is called the access latency, on a typical desktop hard drive this latency is 12-14 milliseconds.
Once a hard drive seeks to some location, it can easily average over 100 MB/sec reading/writing it since the data moves about that quickly beneath the heads in a modern drive; this is great if you are trying to read/write a big piece of data like 100+ MB, but it can be terrible if you are only trying to read/write a single bit. The reason is because the access latency is the same regardless of the size of data to be read/written and a single 4 KB sector only requires a tiny fraction of a millisecond to read or write. At 100 MB/sec, it requires only 0.0390625 milliseconds to completely read or write a single 4 KB sector, if you want to read or write the next 4 KB sector behind that one (a sequential operation) it only requires an additional 0.0390625 milliseconds, but if you want to read a different sector somewhere else on the disk, it could require up to the whole 12-14 MS access latency again to reach it and start over. As a result the data rate for asking for one big 100 MB chunk may easily reach 100 MB/sec, but asking for hundreds of little 4 KB to 128 KB chunks of data would result in a rate of as little as 1 MB/sec. So when you are reading or writing scattered individual sectors it is very inefficient because the vast majority of the time is spent seeking around and waiting for the heads and disk to line up rather than actually doing the work of reading or writing the data.
Now we know that hard drives are efficient at handling large chunks of data, but inefficient at handling one or more small chunks of data. So what?
Each time your computer requests a drive to do something it generates an event on the drive called an In/Out Operation, and a common method for benchmarking drives is the number of these operations the drive can perform in a second. However its important to remember that an IOP isn't a fixed size (although 4 KB is the smallest practical size on modern file systems/drives, and benchmarks control the IOP sizes they generate). The system can request 1 byte from a drive, or it can request 1 million bytes from a drive both with a single IOP. An IOP is simply a command like "go to sector X and read/write something" or "go to sector X and read/write all data until you reach sector Y". Which is why it is important to also benchmark drives in the actual throughput in megabytes per second. Also keep in mind that an IOP does not equal reading or writing a file to the system as a human would understand files, a single file that is larger than the 4 KB sector size can require more than one IOP to be read or write if it is fragmented. So when you want to read a file that is split into three fragments from a hard drive, the operating system will generate three IOPs to read it, which would come out like "read sector 123" "read sector 234" "read sector 345". And each IOP requires seeking the heads and waiting for the disk to rotate to the proper orientation which brings us back to that inefficiency of small operations on a hard drive because of the access latency (and that is why its important to keep your hard drive defragmented).
This brings us to how your computer behaves, like it or not, on average well over half the operations people do on their computers generate large numbers of those small and inefficient IOPs. Booting up your computer, loading most desktop applications, updating software, installing regular applications, the vast majority of these operations work in small IOPs and there is nothing that can be done to change that. So, because hard drives are so inefficient at doing this, people started looking for a device that could handle those compact IOPs faster, and the answer was the Solid State Drive.
Okay already, what is a SSD and why is it better than a hard drive?
Well to put it simply, it is a drive made up of the same basic type of memory that a USB thumb drive has in it (although the memory in SSDs is manufactured to a much higher quality). SSDs have an array of flash memory chips to store the data on, they have no spinning disks or heads on actuators, in fact they have no moving parts at all! This means that when you send an IOP to a SSD asking for a piece of data, it doesn't have to wait for any moving parts, it can simply immediately retrieve the data the moment it looks up where it is and send it to the system right then and there. SSDs still have an access latency, but its usually measured in microseconds or even nanoseconds. The much shorter access latency means that even when dealing with a whole bunch of small and inefficient IOPs, SSDs can still keep the actual data rate dramatically higher when compared to a hard drive. Basically they took the problem of access latency and disk "thrashing", and applied a simple but effective brute force solution to the problem. So with a SSD, when you boot up your computer, or load an application, or install or update something, or do anything that causes those small inefficient IOPs, it happens MUCH faster because the rest of the system doesn't have to wait anywhere near as long for the drive to deliver. It isn't that SSDs don't thrash, in fact they do thrash just like a hard drive, the difference is they thrash 300 times faster. The amount of time this can save can be tremendous, you are probably so used to waiting for your PC as the hard drive thrashes away that you don't even realize that the accumulation of its 12-14 MS access latency is literally wasting hours of your time every week.
But SSDs are expensive...
Indeed, there is no way around this one: The flash memory used in SSDs is expensive when compared to a hard drive. Fortunately the price has been steadily falling for the last few years, initially it was well over $5/GB but now its down to $0.80/GB, mid sized SSDs are starting to reach quite affordable levels. Sure hard drives are still a lot cheaper at as little as $0.05/GB, but if you have a well balanced modern computer already or are planning on building one soon, its absolutely worth every penny to include a SSD.
How reliable are SSDs?
People sometimes ask with some concern "Is a SSD as reliable as a hard drive?", and to that I usually reply "I certainly hope not!". For a technology that has been around for over 50 years, hard drives are surprisingly unreliable. If you were shopping for brand new cars and 1 in 20 would grind to a halt the moment you tried to move it off the lot, you would be furious, but statistically speaking that is the dead-on-arrival ratio for mechanical hard drives. Sure, the early SSDs from a couple years ago definitely had their issues, but like any good technology they have since largely grown out of it. (For the record, my brother has one of the early generation 1 OCZ vertex drives that still works and he has been using every day for years. And in the same time he has had to RMA a half dozen hard drives from the arrays he keeps around.) Giving up on moving parts has allowed the SSDs a lot of room to push for higher reliability than mechanical hard drives, and as the industry has matured they have started capitalizing on that so modern SSDs are very reliable.
I've heard that SSDs have a limited life span, wouldn't that be a problem for me?
It is true that the individual cells that make up flash memory in SSDs will wear out after it has been written (or programmed) a certain number of times, which is at least 3000 times in a modern SSD, this is known as the Program/Erase cycle endurance. 3000 might sound like a small number but it is actually more than sufficient for any desktop PC user. SSDs use a feature called wear leveling to spread out the writes on a drive in order to make sure no cells wear out prematurely. Wear leveling works because SSDs actually have additional capacity that is reserved and you cannot directly access called spare area. Virtually all SSDs have spare area, typically around 6% for smaller drives (up to 200 GB) and 12% for larger drives (200GB and over). SSDs use spare area as guaranteed free space for shuffling around writes to frequently updated files. Spare area also means SSDs behave vastly different from hard drives on the inside, even though they present themselves to the operating system as working the same. To your OS a SSD is just like a hard drive, it starts at sector 0 and works its way up, but internally SSDs can remap any "sector" to any physical location on its flash memory, and it can even change these mappings as necessary.
Spare area is also important because of a difference in how flash memory is rewritten. Flash memory starts out with the cell, which is the smallest part and on consumer drives can hold two bits of information, cells are arranged into pages, which are typically 4 KB and work pretty much like sectors on a hard drive, but unlike hard drives, pages are further organized into blocks of typically 512 KB. When you update/overwrite a file on a mechanical hard drive, it can simply overwrite it with the new data in exactly the same location, but flash memory cannot do that. Instead flash memory has to be erased before it can be rewritten, unfortunately individual pages cannot be erased, only whole blocks can. So in order to update a single page within a block, the controller must first read all the valid data in the block to a special cache, then erase the whole block, then write it all back with the updated data. We call this phenomenon "Write Amplification", which is another use for spare area. When a file is updated, rather than updating it in-place, SSDs will simply write the updated data to a completely different page and mark the old page as invalid, in doing so it avoids write amplification. This causes the invalid pages to eventually build up, so once a block is filled with mostly invalid data it triggers the drives aptly named "garbage collection" routine which will save the valid data (sometimes in a different block entirely) erase the block and then add it back to the spare area pool. Additionally modern drives when used with a supporting operating system are smart about using everything they have available for spare area, which means when you use Windows 7 or recent Linux kernels on a TRIM supporting SSD, anything that isn't filled with valid data gets applied to the spare area pool, including ALL the free space within the file system. (If you're wondering "Wouldn't that fragment the files and be bad?". 
The answer is "It would if SSDs were anything like a hard drive, fortunately they aren't.". SSDs are so different from from hard disks that they fragment themselves on purpose and actually work better that way!)
As for how long a SSD will actually last in practice... Fortunately for us most recent SSDs keep track of their used lifespan; a couple months after I got mine I decided to check and see how much endurance I had used up and how long it would take to use up what was left. I loaded up the SSD toolbox for my drive and checked the amount of data that had been written to it and the write amplification that had occurred, then calculated the remaining lifespan assuming it would last exactly 3000 P/E cycles. Once I arrived at a number, I promptly closed my calculator and the toolbox and never let a thought of endurance cross my mind again. The reason I gave up caring is because I simply couldn't imagine a scenario where I would still be using this drive in ~275 years (give or take a couple decades) when I would reach the projected 3000 P/E cycle endurance limit. Needless to say, many online discussions on SSDs greatly inflate the importance of the P/E cycle endurance. It is actually possible to wear out a SSD within a couple years if you dedicate a machine to constantly write and rewrite garbage to the drive at its highest speed 24/7/365, but that is so astronomically beyond even the heaviest desktop user workload that it instead becomes a good example of just how durable SSDs are.
How much faster is a SSD compared to a hard drive?
A good modern desktop hard drive can usually thrash out a 1 MB/sec score in the 4 KB random access test, which is roughly 270 IOPS (I/O Operations Per Second). The high end SSDs these days can do the same task at over 350 MB/sec, or upwards of 85,000 IOPS. Even the inexpensive entry level SSDs can reach 60 MB/sec or around 15,000 IOPS at this test, and they keep getting better all the time. So depending on what type of work the drive is doing, SSDs are anywhere from 5 times to 350 times faster than a common mechanical hard drive. Granted, it is necessary to keep it in perspective: not every time consuming or slow task your computer does is demanding on or waiting for the drive. SSDs will accelerate things like booting up, installing/uninstalling/updating/launching applications, virus scanning files, decompressing files, loading the next level in video games and anything else that is I/O bound, typically right up until your system finds whatever other limiting factor it has (CPU, memory, etc). However they will not accelerate things that aren't I/O bound, such as video game or movie playback frame rates (GPU/CPU bound), video/audio encoding, image editing/word processing/spreadsheets (CPU bound), or copying/moving stuff to or from other non-SSD drives or other computers (limited by the other drive/computer or the connections between them). Your computer probably won't boot its operating system or load all applications instantly, but it probably will do the vast majority of it noticeably faster; how much depends on the computer and what it is loading. SSDs may not save much time during some common tasks, but they will save a lot of time between tasks because they allow you to begin the next task much faster.
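The MB/sec-to-IOPS conversion behind those figures is just throughput divided by transfer size. A quick sketch (the small differences from the numbers quoted above come down to decimal vs binary megabytes and rounding in the benchmark tools):

```python
# Convert 4 KB random throughput to IOPS: bytes/sec / bytes per operation.

KB = 1024

def iops(mb_per_sec, transfer_kb=4):
    return mb_per_sec * 1024 * KB / (transfer_kb * KB)

print(iops(1))    # 256.0   -- good hard drive at 4 KB random
print(iops(350))  # 89600.0 -- high-end SSD at the same test
```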
Why now?
At this point I wouldn't expect SSDs to get much faster for a while, because of the SATA connection that is most common: most drives are already reaching the interface limits of the SATA 6.0 Gbps channel and have no room left to grow. If you look at benchmarks of SSDs, you tend to notice that the performance of sometimes vastly different drives (different chipsets, firmware, capacity, etc) still tends to cluster in a few areas, mainly 550-560 MB/sec in sequential tests and 350 MB/sec in 4 KB random tests. The probable reason is that the SATA connection is the limit (4 KB tests generate more overhead than sequential tests, which explains why they are slower). Soon drives will no longer be able to differentiate themselves very well on performance, and capacity is largely dependent on the price of the memory, which is pretty much the same between all vendors, so there should be a lot more competition on reliability and other features as each manufacturer tries to distinguish itself, which means it's a good time to start watching the market.
How fast could SSDs really be without SATA 6.0 Gbps holding them back? Well, take my entry level SSD, which is actually one of the slower ones still on the market these days and is only a SATA 3.0 Gbps drive. Internally each flash chip on my drive can deliver 160 MB/sec, which doesn't sound like *that* much considering hard drives are only a little slower and many newer SSDs can hit 500+ MB/sec, but like any other SSD this one is an array of multiple chips: my specific model is a 10 channel device and each channel is capable of 160 MB/sec. So if you eliminated the controller-imposed and SATA 3.0 Gbps interface speed limitations, even my entry level drive could hit an absurd 1.6 gigabytes per second sequential throughput (equal to an array of more than 15 conventional hard drives in RAID0). Perhaps not that surprisingly, even the first generation of SSDs had the potential to completely shatter the SATA 6.0 Gbps interface limits even though their flash was slower (although they didn't, because they were held back by their comparatively slow controllers and other bottlenecks that hadn't been worked out yet).
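The channel arithmetic above works out like this (the ~300 MB/sec usable figure for a SATA 3.0 Gbps link is an approximation after encoding overhead):

```python
# Aggregate internal bandwidth of a multi-channel SSD vs its interface.

channels = 10                 # channels in the example drive
mb_per_sec_per_channel = 160  # per-channel flash throughput
sata2_usable_mb_per_sec = 300 # approx. usable SATA 3.0 Gbps throughput

internal_mb_per_sec = channels * mb_per_sec_per_channel
print(internal_mb_per_sec)                          # 1600 MB/s internally
print(internal_mb_per_sec / sata2_usable_mb_per_sec)  # >5x what the link can carry
```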
Is there anything else my computer would need before I add a SSD to it?
Potentially yes: if your computer has a slow CPU or doesn't have enough DRAM, you won't see the full benefits of using a SSD. You would have to fix the other bottlenecks of your system before investing in a SSD, or you will just end up waiting on a different component. It doesn't take a super fast or expensive system to extract the benefits of a SSD, but it does require a reasonably balanced one.
- Krom
- DBB Database Master
- Posts: 16137
- Joined: Sun Nov 29, 1998 3:01 am
- Location: Camping the energy center. BTW, did you know you can have up to 100 characters in this location box?
- Contact:
Re: The SSD megathread.
I've got a small list of some noteworthy SSDs / brands:
The most heard about brand out there is undoubtedly OCZ and their Vertex series, although they have a somewhat poor reputation. My brother has been using their drives for a couple years; the most significant issue he has had is performance degradation that required a complete wipe/secure erase to restore full performance, which was fairly common among first generation drives like his. You are most likely to read about problems with OCZ drives because they are some of the most common drives in the wild, but your mileage may vary.
The Samsung 830 series is one of the best all around performers, people don't seem to have many complaints about them other than price. The 256 GB+ models are serious contenders for the fastest currently available SSD (the other major contender is OCZ's Vertex 4).
Intel branded SSDs are the powerhouses of reliability and compatibility; if you pick one you will definitely be happy with the drive. However they took a pretty conservative approach and traded away some performance to reach their reliability goals, so their performance drives are usually in the middle of the pack and their mainstream drives are some of the slower SSDs on the market (they still annihilate mechanical drives, though). Also a bit on the expensive side, but they remain the go-to drives for rock solid reliability.
Mushkin Enhanced Chronos and Crucial M4 are also well received drives in their 120+ GB capacities and should also be on the more affordable end of the spectrum, but will only score middle of the pack performance.
It is also worth mentioning that larger capacity SSDs are faster than smaller capacity SSDs of the same model, performance scales with capacity. The current sweet spot for most models is 256 GB, the 128 GB drives will be 15-25% slower in most cases. The 512 GB drives will be faster than the 256 GB models, but only just barely (5% at most). Don't aim for 512 GB unless you need the capacity because the performance isn't worth it.
-
- DBB Admiral
- Posts: 1113
- Joined: Sun Jan 02, 2000 3:01 am
Re: The SSD megathread.
I'd like to point out that clusters are a logical construct that organizes a group of contiguous sectors. A cluster is the smallest unit which can be allocated to hold a file, so if your drive is set up to use larger than 512-byte clusters, you are potentially wasting a lot of space. However, larger cluster sizes can improve access speed because they reduce the overhead involved in organizing the drive space at the logical level.
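The slack-space tradeoff can be illustrated with a quick sketch; the file and cluster sizes here are arbitrary examples:

```python
# Slack space from cluster allocation: a file always occupies a whole
# number of clusters, so on average about half a cluster per file is wasted.

import math

def allocated(file_size, cluster_size):
    # Round the file up to whole clusters.
    return math.ceil(file_size / cluster_size) * cluster_size

# A 1-byte file wastes nearly a full cluster either way:
print(allocated(1, 4096) - 1)  # 4095 bytes of slack at 4 KB clusters
print(allocated(1, 512) - 1)   # 511 bytes of slack at 512-byte clusters
```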
- Krom
- DBB Database Master
- Posts: 16137
- Joined: Sun Nov 29, 1998 3:01 am
- Location: Camping the energy center. BTW, did you know you can have up to 100 characters in this location box?
- Contact:
Re: The SSD megathread.
Most hard drive manufacturers are switching to 4 KB sector sizes because they carry less parity/ECC overhead, allowing for higher density disks. Especially if you have a newer drive, formatting to a 512 byte cluster size could result in an 87.5% reduction in available disk capacity, so it is best to leave the cluster size at the default 4 KB. The only thing to be concerned about is maintaining proper alignment (AKA: don't partition or format the drive with Windows XP).
It should also be noted that the average file size on my own drive, excluding the page file, hibernation file, and the Windows XP Mode virtual hard drive image, is 350 KB, so files that would benefit from less slack via a 512 byte cluster size are all but extinct these days.
Re: The SSD megathread.
Threads like this are why I still consider this place to be the best tech source I've ever seen. Thanks Krom.
Re: The SSD megathread.
Amazing post, Krom. For my personal computer, I spend probably 20-30 hours a week coding and/or browsing the web. If you can further explain the following sentence, you will likely have persuaded me to invest in an SSD even though I just bought a 150 GB Raptor to complement my existing Raptor for a RAID1 mirror. I'm all about performance but I also care about redundancy.
Krom wrote: The amount of time this can save can be tremendous; you are probably so used to waiting for your PC as the hard drive thrashes away that you don't even realize that the accumulation of its 12-14 ms access latency is literally wasting hours of your time every week.
- Krom
- DBB Database Master
- Posts: 16137
- Joined: Sun Nov 29, 1998 3:01 am
- Location: Camping the energy center. BTW, did you know you can have up to 100 characters in this location box?
- Contact:
Re: The SSD megathread.
Don't take this the wrong way Ryujin, but have you been purchasing and using WD Raptor drives without even understanding why they are superior to more common desktop hard drives? (They have a ~30% average improvement in access latency from their 10,000 RPM instead of 7200 RPM disk rotation speed.)
As for that sentence, basically what it means is that 12-14 ms access latency will nickel and dime you to death. In the process of using your computer it's easy to generate hundreds of thousands, even millions of accesses, and you can estimate the total time by multiplying the number of accesses by the average time of each access. If you take one week's worth of accesses, which for a moderate user would number about two million, and multiply it by a 13 ms average latency, you arrive at 26 million milliseconds spent just waiting. One millisecond is 1/1000th of a second, which means 26 million of them comes out to 7 hours, 13 minutes and 20 seconds. By comparison, a modern SSD has an average access latency of around 0.1 ms, which means it could accomplish the same two million accesses in only 3 minutes and 20 seconds of accumulated time.
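For anyone who wants to check that arithmetic, here it is as a short sketch:

```python
# Accumulated access latency: accesses x average latency, as h/m/s.

def accumulated(accesses, latency_ms):
    total_s = accesses * latency_ms / 1000
    h, rem = divmod(int(total_s), 3600)
    m, s = divmod(rem, 60)
    return h, m, s

print(accumulated(2_000_000, 13))   # (7, 13, 20) -- hard drive at 13 ms
print(accumulated(2_000_000, 0.1))  # (0, 3, 20)  -- SSD at 0.1 ms
```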
Re: The SSD megathread.
I bought them because they were the most inexpensive way to get performance at the time. I bought the first one years back when SSDs were bleeding edge and far too expensive. Very recently, as I grew to care more about redundancy and not losing ANY work, I bought a new one to complement the old one I had, since I read it's good to use the same drive for RAID setups. However, recently you told me that mirroring them for redundancy is probably hindering performance. Given my priority of not losing ANY work at any point, and my other perhaps equally important priority of performance, should I consider mirroring SSDs?
- Krom
- DBB Database Master
- Posts: 16137
- Joined: Sun Nov 29, 1998 3:01 am
- Location: Camping the energy center. BTW, did you know you can have up to 100 characters in this location box?
- Contact:
Re: The SSD megathread.
I remember a pretty good story about a computer security expert who went to considerable lengths to back up all his data: he regularly backed up his Mac to not just one but two external hard drives which he only powered up for backups and otherwise carefully maintained. He was pretty confident about it until one day his Mac was stolen from his second floor apartment, along with both of the external hard drives he kept in the closet. Several years later he was able to track down and recover his machine, but by then the vast majority of his data (family photos, wedding videos, etc) was lost forever; he never recovered the backup drives. Mirroring is only so effective at really protecting your data: if your computer is hit by lightning, or a power outage at the right moment, or a house fire, or a flood, or is stolen, or any number of other disasters that won't single out the drive they hit, you will still lose everything. Relying on a RAID1 setup is quick and easy, but it only returns results equal to the expenditure of effort; a truly effective backup solution is never that simple.
If you really want to protect your data, you need a copy that isn't even on the same computer or room as the one you use primarily for an on-site backup and you also need an off-site backup at some other location entirely. Anything less and you are still sitting on a single point of failure.
Re: The SSD megathread.
Yup, I already have other backups in addition. My main goal with mirroring is simply to avoid losing hours of work to an unexpected single HDD failure. I was merely asking about the performance of SSD with mirroring, perhaps as it compares to mirroring on mechanical. That is, my setup going from mirrored Raptors to mirrored SSD.
- Krom
- DBB Database Master
- Posts: 16137
- Joined: Sun Nov 29, 1998 3:01 am
- Location: Camping the energy center. BTW, did you know you can have up to 100 characters in this location box?
- Contact:
Re: The SSD megathread.
Yeah, mirroring is going to slow it down a little (but SSDs will still be way faster than mechanical drives). A good way to balance things out would be to mirror only the files you work on instead of the entire drive, having a monthly or so backup of the OS drive is generally sufficient. Windows 7 Professional can also do full disk and incremental backups of specific files on automated schedules, and can take things pretty far with shadow copy / previous versions functionality.
Re: The SSD megathread.
Why would mirroring be slower? Writes would be a little slower on average (the time of the drive that finishes *last*), but for large reads, you can read half of the read from one drive, the other half from the other, in parallel. And for small reads, it would take the time of whatever drive finishes *first*.
- Krom
- DBB Database Master
- Posts: 16137
- Joined: Sun Nov 29, 1998 3:01 am
- Location: Camping the energy center. BTW, did you know you can have up to 100 characters in this location box?
- Contact:
Re: The SSD megathread.
It depends on the implementation: if they treat the mirror like a stripe during sequential reads, you would get double the slowest drive's throughput, or they could go the other way and require each drive to read the whole file, in which case the speed would be equal to the slowest drive.
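The two strategies can be sketched with toy timing numbers (the drive speeds and read size are purely illustrative, not measurements of any particular hardware):

```python
# Read time for a mirrored pair under different strategies.

def split_read(size_mb, speed_a, speed_b):
    # Striped-style read: each drive reads half; wait for the slower half.
    return max((size_mb / 2) / speed_a, (size_mb / 2) / speed_b)

def duplicate_read_wait_both(size_mb, speed_a, speed_b):
    # Both drives read everything and the array waits for both.
    return size_mb / min(speed_a, speed_b)

def duplicate_read_first_wins(size_mb, speed_a, speed_b):
    # Both drives read everything; take the result of whichever finishes first.
    return size_mb / max(speed_a, speed_b)

# 1000 MB read, drives at 500 and 400 MB/s:
print(split_read(1000, 500, 400))             # 1.25 s
print(duplicate_read_wait_both(1000, 500, 400))   # 2.5 s
print(duplicate_read_first_wins(1000, 500, 400))  # 2.0 s
```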
There is also still a very good reason not to use any RAID mode on SSDs: TRIM cannot be used on a RAID array (although Intel is working on fixing that in their RST v11.5 drivers, so this won't always be true; even then it will only work for systems on Intel chipsets).
Re: The SSD megathread.
Krom wrote: they could go the other way and require each drive to read the whole file and the speed would be equal to the slowest drive
If you read from both drives, you can go with the result from the *fastest* drive to finish. (You wouldn't want to read from the same place on both drives except for small reads when nothing else is waiting to access the drive.)
- Krom
- DBB Database Master
- Posts: 16137
- Joined: Sun Nov 29, 1998 3:01 am
- Location: Camping the energy center. BTW, did you know you can have up to 100 characters in this location box?
- Contact:
Re: The SSD megathread.
My brother has that drive; it works well for him, but make sure you are using the latest firmware before you start using it (many firmware updates are destructive, meaning they erase the drive).
Re: The SSD megathread.
I would still go for a Samsung 830. There seems to be a problem with the OCZ drives' service life.
Re: The SSD megathread.
One of my drives took a dump (the one containing the /home directory), so to replace it I got both a 2 TB traditional drive and a Samsung 830 128 GB SSD.
I'll let you guys know how it goes!
Arch Linux x86-64, Openbox
"We'll just set a new course for that empty region over there, near that blackish, holeish thing. " Zapp Brannigan
- Aggressor Prime
- DBB Captain
- Posts: 763
- Joined: Wed Feb 05, 2003 3:01 am
- Location: USA
Re: The SSD megathread.
Great thread, although I would like to add a few notes on why it may be wise to wait. Ultimately, when to get an SSD depends on when your comfort zone of GBs matches your comfort zone of spending money. Two things will continue to improve, one in bursts, the other more steadily.
1. Speed in bursts. Right now, SSDs max out SATA 6Gbps and are currently working on increasing IOPS (which is what really matters in consistency of speed). If you plan on using an SSD with a SATA interface, aka current motherboard, and you want to wait for the faster SSDs, you already have them available. However, SATA Express (from SATA-IO) and OCuLink (from PCI-SIG) are emerging technologies that aim to replace SATA in 2013+. Both use PCIe 3.0 links, but SATA Express only uses up to 2 (or 16Gbps). The plus side is that it is aimed to be backwards compatible with SATA. OCuLink uses up to 4 PCIe 3.0 links, or 32Gbps as compared with SATA 3's 6Gbps. So if your main aim is speed, with the next connection technology that should come out next year, your SSD will be that much faster. Granted, that is just raw speed. The most important changes come in IOPS, and SSDs are already low latency.
2. Capacity increase is steady. Every new nm technology brings greater possibilities for more GB. With 25nm, we are seeing 64Gb dies, or 512Gb of 8 stacked dies. 20nm will bring 128Gb dies, or 1Tb of 8 stacked dies. The max 2.5" SSD we will see from 20nm will therefore be 2TB. Then there is DensBits' partnership with Seagate that will make MLC NAND as long lasting as SLC and TLC as long lasting as MLC, further dropping $/GB. Finally, there are Mosaid's efforts of building a 3.5" SSD with 4 sets of 16 HLNAND (8 dies/NAND chip) to allow for 8TB SSDs by next year.
So many exciting developments are still happening. However, prices are low enough to buy a 240GB SSD now for most people, considering you can get an Intel 330 Series 240GB for $200 at NewEgg. And if you need more storage, specifically for data and not programs, then you could always use the SSD as a boot drive and a HDD array for extra storage. As for a strictly SSD future, I still think we have a few more years to wait. Having ~1TB of storage for me is pretty important, and getting that in flash is just too expensive ATM.
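The die-stacking arithmetic above works out like this (Gb = gigabits, GB = gigabytes, 8 bits per byte; the 16-package drive at the end is my own assumption for illustration, not a figure from the post):

```python
# NAND package capacity from die size and stack height.

def package_gb(die_gbit, dies_per_package=8):
    # 8 stacked dies per package; convert gigabits to gigabytes.
    return die_gbit * dies_per_package // 8

print(package_gb(64))               # 64 GB per package at 25 nm
print(package_gb(128))              # 128 GB per package at 20 nm
print(package_gb(128) * 16 / 1024)  # 2.0 TB in a hypothetical 16-package drive
```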
- Krom
- DBB Database Master
- Posts: 16137
- Joined: Sun Nov 29, 1998 3:01 am
- Location: Camping the energy center. BTW, did you know you can have up to 100 characters in this location box?
- Contact:
Re: The SSD megathread.
SSDs are a lot more expensive per gigabyte than HDDs and no amount of die shrinking is going to change that any time soon. The important bit to remember about storage is the vast majority of actual programs out there don't take up much space. It is the data that takes up space, and the type of data determines where it is best stored. For most people the data that is consuming the most space on their system is pictures, music and video and none of those benefit from being on a SSD at all, it would be a complete waste of expensive real estate on a SSD.
Just take my system for example: I have a 160 GB SSD, and 6 TB of mechanical HDD capacity. The SSD isn't even half full (55 GB used / 93 GB free) and yet with the single exception of Steam (because games also don't really benefit from being on SSDs), I have every program on my system installed on the SSD. Getting a 1 TB SSD at the going rate for quality flash memory right now would set you back about $1000, for that price you could easily get a top of the line 256 GB SSD and 15 TB of mechanical HDD capacity.
Simply put, for storing big bulky data there is no substitute for a mechanical hard drive. And the best part is that the bulkier data is, the more likely it's sequentially accessed, which works perfectly because mechanical drives have very solid sequential performance.
The only bulky data that would benefit from being stored on a SSD would be a bulky database, but that isn't even something that happens in the consumer space.
Re: The SSD megathread.
OK, so a game in terms of something like BF3? Mechanical or SSD? The game stores everything except the config files in its own directory; the config files are in a My Documents folder of their own.
- Krom
- DBB Database Master
- Posts: 16137
- Joined: Sun Nov 29, 1998 3:01 am
- Location: Camping the energy center. BTW, did you know you can have up to 100 characters in this location box?
- Contact:
Re: The SSD megathread.
The general consensus seems to be that BF3 loads levels in about half the time on a high end SATA III SSD vs a 5400 RPM SATA II hard drive. A faster hard drive than a low power/eco friendly model would probably split the difference.
More RAM also stands a pretty good chance of nullifying the difference between HDDs and SSDs. I have a game that took about 5-10 seconds to load on my PC, at least until I figured out that, for whatever reason, if you disable desktop composition (in order to keep the second monitor from going black), Windows will cache the game's data to DRAM, reducing the load time to half a second regardless of the storage medium in use.
Re: The SSD megathread.
So 8 GB of DDR3 1600 on XMP profile 1, with BF3 loaded on a WD Black 750 w/ 64 MB cache, wouldn't make much difference vs an OCZ Vertex 4 SSD? We'll have to see; I just purchased an OCZ Vertex 4 128 GB SSD, the Max IOPS one (120K and 90K IOPS listed). I'm downloading ASSD or whatever that program is called to verify.
Re: The SSD megathread.
A little benchmarking:
Bootloader -> login: old: 28s, new: 13.5s
Opening Firefox: old: 12s, new: 3s
Opening LibreOffice: old: 13s, new: 1.4s
Nice...
Arch Linux x86-64, Openbox
"We'll just set a new course for that empty region over there, near that blackish, holeish thing. " Zapp Brannigan
Re: The SSD megathread.
I just purchased a SanDisk 240 GB Extreme SSD, which has performed very nicely with the games that I frequently play. Performance gains are mostly just loading times, but it's nice and zippy, and reboot times are very fast. I used to have a WD 500 GB 7200 RPM HD coupled with a 60 GB Crucial m4, which stopped working after a couple months. I went in the store planning to buy the OCZ but was told not to because of their poor performance, and to go with the SanDisk. Expensive, but worth it.
Re: The SSD megathread.
GOSH THIS THREAD MAKES ME SO HARD
- CDN_Merlin
- DBB_Master
- Posts: 9781
- Joined: Thu Nov 05, 1998 12:01 pm
- Location: Capital Of Canada
Re: The SSD megathread.
My next upgrade (just posted in Tech) will include a 250Gig SSD. Can't wait.
Corsair Vengeance 64GB 2x32 6000 DDR5, Asus PRIME B760-PLUS S1700 ATX, Corsair RM1000x 1000 Watt PS 80 Plus Gold,WD Black SN770 2TB NVMe M.2 SSD, WD Blue SN580 1TB M.2 NVMe SSD, Noctua NH-D15S Universal CPU Cooler, Intel Core i7-14700K 5.6GHz, Corsair 5000D AIRFLOW Tempered Glass Mid-Tower ATX, Asus GF RTX 4070 Ti Super ProArt OC 16GB Video, WD Black 6TB 7200RPM 256MB 3.5" SATA3, Windows 11
Re: The SSD megathread.
I have a 1 terabyte WD SSD replacing a 1 terabyte WD hard drive. Quick comparison:
1) In Eve the WD HD got about 60 fps; the SSD gets over 200 fps. The SSD only cost $79.00.
Liberal speak: "Convenience for you means control for him, free and the price is astronomical, you're the product for sale". Neil Oliver
Leftist are Evil, and Liberals keep voting for them. Dennis Prager
A mouse might be in a cookie jar.... but he is not a cookie" ... Casper Ten Boom
If your life revolves around the ability to have an abortion, what does that say about your life? Anonymous