The SSD megathread.
Posted: Sun Aug 05, 2012 4:06 pm
Some of the regulars here have probably heard a few of us talking/bragging about Solid State Drives for the last year or so, and if you haven't heard it enough, or even if you have, this is the thread for you. I'm going to answer some common questions about SSDs, and explain some aspects of how hard drives work and why SSDs are often better.
First, let's talk about hard drives.
Hard drives are fairly consistent devices because of their mechanical nature. A common desktop hard drive has one or more disks (platters) that spin at a constant 7200 revolutions per minute (that's 120 revolutions per second). The platters are coated with a magnetic surface, and that surface is divided into small blocks called sectors. On modern drives sectors are 4096 bytes (4 KB); on older drives they were 512 bytes. A sector always takes up roughly the same amount of physical space on the disk regardless of where it sits. Sectors are arranged in concentric tracks, numbered from the outermost edge of the disk inward toward the center (which is why hard drives are faster at the start than at the end: more sectors fit on an outer track than on an inner one). The sector is the smallest unit a hard drive understands and can operate on, which means you cannot read or write individual bits; you can only read or write whole sectors. The drive keeps track of sectors with a simple number, starting at 0 and counting up to as many sectors as the drive has. Floating above the disk is an actuator arm with the drive heads at its tip; the heads are responsible for reading from and writing to the magnetic coating. The heads need a brief but fairly predictable time to seek to the location of some data on the disk, and typically a bit more time after that for the disk to rotate until the correct data passes beneath them. This time is called the access latency; on a typical desktop hard drive it is 12-14 milliseconds.
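If you're wondering where that 12-14 ms figure comes from, here's a quick back-of-the-envelope sketch. The seek time I plug in is just a typical assumed value for a 7200 RPM desktop drive, not a spec from any particular model:

```python
# Rough sketch of where the 12-14 ms access latency figure comes from.
# The average seek time is an assumed typical value, not a measurement.

rpm = 7200
avg_seek_ms = 8.5                    # assumed typical desktop seek time

revs_per_sec = rpm / 60              # 120 revolutions per second
ms_per_rev = 1000 / revs_per_sec     # ~8.33 ms for one full rotation
avg_rotation_ms = ms_per_rev / 2     # on average the data is half a turn away

print(f"Average rotational latency: {avg_rotation_ms:.2f} ms")
print(f"Estimated access latency:   {avg_seek_ms + avg_rotation_ms:.2f} ms")
# -> about 12.7 ms, right in the 12-14 ms range quoted above
```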
Once a hard drive seeks to some location, it can easily average over 100 MB/sec reading or writing it, since that is about how fast the data moves beneath the heads in a modern drive. This is great if you are trying to read or write a big piece of data like 100+ MB, but it can be terrible if you are only trying to read or write something tiny. The reason is that the access latency is the same regardless of the size of the data, and a single 4 KB sector only takes a tiny fraction of a millisecond to read or write. At 100 MB/sec it takes only about 0.04 milliseconds to completely read or write a single 4 KB sector; if you want to read or write the next 4 KB sector right behind it (a sequential operation), it only costs another 0.04 milliseconds, but if you want a different sector somewhere else on the disk, it can cost the whole 12-14 ms access latency again before you can start over. As a result, asking for one big 100 MB chunk may easily reach 100 MB/sec, but asking for hundreds of little 4 KB to 128 KB chunks of data can drop the rate to as little as 1 MB/sec. Reading or writing scattered individual sectors is very inefficient because the vast majority of the time is spent seeking around and waiting for the heads and disk to line up rather than actually doing the work of reading or writing the data.
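Here's the same arithmetic written out as a small sketch, using the 100 MB/sec transfer rate and a 13 ms access latency from above as assumed round numbers:

```python
# Sequential versus scattered 4 KB accesses at an assumed 100 MB/sec
# transfer rate and 13 ms access latency.

transfer_mb_per_s = 100.0
access_latency_ms = 13.0
sector_kb = 4

# Time to move one 4 KB sector once the heads are already in position.
transfer_ms = sector_kb / 1024 / transfer_mb_per_s * 1000
print(f"Transfer time for one 4 KB sector: {transfer_ms:.4f} ms")   # ~0.039 ms

# Scattered reads pay the full access latency for every sector.
scattered_ms = access_latency_ms + transfer_ms
scattered_mb_per_s = (sector_kb / 1024) / (scattered_ms / 1000)
print(f"Effective rate for scattered 4 KB reads: {scattered_mb_per_s:.2f} MB/sec")
# -> roughly 0.3 MB/sec, versus ~100 MB/sec when reading sequentially
```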
Now we know that hard drives are efficient at handling large chunks of data, but inefficient at handling lots of small, scattered ones. So what?
Each time your computer asks a drive to do something, it generates an event on the drive called an Input/Output Operation (IOP), and a common way to benchmark drives is the number of these operations the drive can perform in a second (IOPS). However, it's important to remember that an IOP isn't a fixed size (although 4 KB is the smallest practical size on modern file systems and drives, and benchmarks control the IOP sizes they generate). The system can request 1 byte from a drive, or 1 million bytes, each with a single IOP. An IOP is simply a command like "go to sector X and read/write something" or "go to sector X and read/write all data until you reach sector Y". That is why it is also important to benchmark drives by their actual throughput in megabytes per second. Also keep in mind that an IOP does not equal reading or writing a file as a human would understand files: a single file larger than the 4 KB sector size can require more than one IOP to be read or written if it is fragmented. So when you want to read a file that is split into three fragments from a hard drive, the operating system will generate three IOPs to read it, something like "read sector 123", "read sector 234", "read sector 345". Each IOP requires seeking the heads and waiting for the disk to rotate into the proper orientation, which brings us back to the inefficiency of small operations on a hard drive because of the access latency (and that is why it's important to keep your hard drive defragmented).
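To make the IOPS-versus-throughput point concrete, here's a tiny sketch; the 250 IOPS figure is just an assumed example, not a benchmark of any real drive:

```python
# Same IOPS, very different throughput depending on the size of each
# operation. The 250 IOPS figure is an assumed example.

iops = 250

for io_size_kb in (4, 64, 1024):
    throughput_mb_s = iops * io_size_kb / 1024
    print(f"{iops} IOPS at {io_size_kb:>4} KB per op = {throughput_mb_s:7.1f} MB/sec")

# A fragmented file works the same way in reverse: a file split into three
# fragments costs three separate operations (three seeks on a hard drive),
# even though it is logically one file to you.
```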
This brings us to how your computer behaves: like it or not, a large share of the operations people do on their computers generate big batches of those small, inefficient IOPs. Booting up your computer, loading most desktop applications, updating software, installing regular applications: the vast majority of these work in small IOPs and there is nothing that can be done to change that. So, because hard drives are so inefficient at this, people started looking for a device that could handle those small IOPs faster, and the answer was the Solid State Drive.
Okay already, what is an SSD and why is it better than a hard drive?
To put it simply, it is a drive built from the same basic type of memory a USB thumb drive uses (although the memory in SSDs is manufactured to a much higher quality). An SSD stores data on an array of flash memory chips; it has no spinning disks or heads on actuators, in fact it has no moving parts at all! This means that when you send an IOP to an SSD asking for a piece of data, it doesn't have to wait for any moving parts: it simply looks up where the data lives and sends it to the system right then and there. SSDs still have an access latency, but it's measured in microseconds rather than milliseconds. That much shorter access latency means that even when dealing with a whole bunch of small, inefficient IOPs, an SSD can keep the actual data rate dramatically higher than a hard drive can. Basically they took the problem of access latency and disk "thrashing" and applied a simple but effective brute force solution to it. So with an SSD, when you boot up your computer, load an application, install or update something, or do anything else that causes those small inefficient IOPs, it happens MUCH faster because the rest of the system doesn't have to wait anywhere near as long for the drive to deliver. It isn't that SSDs don't thrash; they thrash just like a hard drive does, the difference is they thrash roughly 300 times faster. The amount of time this saves can be tremendous: you are probably so used to waiting for your PC as the hard drive thrashes away that you don't realize the accumulation of its 12-14 ms access latency is literally wasting hours of your time every week.
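If you want to see how that adds up, here's a rough sketch; the operation count and the 0.1 ms SSD latency are assumed round numbers purely for illustration:

```python
# Total time spent waiting on a pile of small random reads, hard drive
# versus SSD. The operation count and the SSD latency are assumptions.

ops = 50_000                   # small random reads (think a big app launch plus updates)
hdd_latency_ms = 13.0
ssd_latency_ms = 0.1

hdd_seconds = ops * hdd_latency_ms / 1000
ssd_seconds = ops * ssd_latency_ms / 1000
print(f"Hard drive: {hdd_seconds:6.1f} s of pure waiting")
print(f"SSD:        {ssd_seconds:6.1f} s of pure waiting")
print(f"Speedup:    {hdd_seconds / ssd_seconds:.0f}x")
# -> ~650 s versus ~5 s for this workload; repeat that every day and the
#    hours really do pile up
```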
But SSDs are expensive...
Indeed, there is no way around this one: the flash memory used in SSDs is expensive compared to a hard drive. Fortunately the price has been falling steadily for the last few years; it was initially well over $5/GB but is now down to around $0.80/GB, so mid-sized SSDs are starting to reach quite affordable levels. Sure, hard drives are still a lot cheaper at as little as $0.05/GB, but if you already have a well balanced modern computer, or are planning on building one soon, it's absolutely worth every penny to include an SSD.
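For a rough feel of what that means in dollars, here's the $/GB arithmetic with the figures quoted above (real street prices obviously vary):

```python
# Price-per-gigabyte arithmetic using the figures quoted above
# ($0.80/GB for flash, $0.05/GB for a hard drive).

ssd_price_per_gb = 0.80
hdd_price_per_gb = 0.05

for capacity_gb in (64, 128, 256):
    ssd_cost = capacity_gb * ssd_price_per_gb
    hdd_cost = capacity_gb * hdd_price_per_gb
    print(f"{capacity_gb:>3} GB: SSD ~ ${ssd_cost:6.2f}, same space on a hard drive ~ ${hdd_cost:5.2f}")
```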
How reliable are SSDs?
People sometimes ask with some concern "Is an SSD as reliable as a hard drive?", and to that I usually reply "I certainly hope not!". For a technology that has been around for over 50 years, hard drives are surprisingly unreliable. If you were shopping for brand new cars and 1 in 20 would grind to a halt the moment you tried to drive it off the lot, you would be furious, yet that is roughly the dead-on-arrival rate for mechanical hard drives. Sure, the early SSDs from a couple of years ago definitely had their issues, but like any good technology they have since largely grown out of them. (For the record, my brother has one of the first-generation OCZ Vertex drives that still works and that he has been using every day for years. In the same time he has had to RMA half a dozen hard drives from the arrays he keeps around.) Giving up moving parts has given SSDs a lot of room to push for higher reliability than mechanical hard drives, and as the industry has matured they have started capitalizing on that, so modern SSDs are very reliable.
I've heard that SSDs have a limited lifespan; wouldn't that be a problem for me?
It is true that the individual cells that make up the flash memory in an SSD wear out after they have been written (programmed) a certain number of times, at least 3000 times in a modern SSD; this is known as the Program/Erase (P/E) cycle endurance. 3000 might sound like a small number, but it is actually more than sufficient for any desktop PC user. SSDs use a feature called wear leveling to spread writes across the drive and make sure no cells wear out prematurely. Wear leveling works because SSDs actually have additional reserved capacity, called spare area, that you cannot directly access. Virtually all SSDs have spare area, typically around 6-7% on smaller drives (up to 200 GB) and around 12% on larger drives (200 GB and over). SSDs use spare area as guaranteed free space for shuffling around writes to frequently updated files. Spare area also means SSDs behave vastly differently from hard drives on the inside, even though they present themselves to the operating system the same way. To your OS an SSD is just like a hard drive: it starts at sector 0 and works its way up. Internally, though, an SSD can remap any "sector" to any physical location in its flash memory, and it can change these mappings as necessary.
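As a rough illustration of where that spare area typically comes from on a small consumer drive: flash chips come in binary sizes while drives are labeled in decimal sizes, and the difference (plus any extra reserve) becomes spare area. The 128 GiB / 128 GB split below is an assumed example, not a specific model:

```python
# Where spare area comes from on an assumed small consumer drive:
# 128 GiB of raw flash sold as a 128 GB drive.

raw_flash_bytes = 128 * 1024**3      # physical flash on the drive (binary GiB)
advertised_bytes = 128 * 1000**3     # what the label and the OS see (decimal GB)

spare_bytes = raw_flash_bytes - advertised_bytes
spare_pct = spare_bytes / raw_flash_bytes * 100
print(f"Spare area: {spare_bytes / 1000**3:.1f} GB ({spare_pct:.1f}% of the raw flash)")
# -> about 9.4 GB, or ~6.9%, in line with the figure for smaller drives
```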
Spare area is also important because of a difference in how flash memory is rewritten. Flash memory starts with the cell, the smallest part, which on consumer drives holds two bits of information. Cells are arranged into pages, which are typically 4 KB and work pretty much like sectors on a hard drive, but unlike hard drives, pages are further organized into blocks of typically 512 KB. When you update or overwrite a file on a mechanical hard drive, it can simply overwrite the old data in exactly the same location, but flash memory cannot do that. Flash memory has to be erased before it can be rewritten, and unfortunately individual pages cannot be erased, only whole blocks can. So to update a single page within a block the naive way, the controller must first read all the valid data in the block into a cache, erase the whole block, then write everything back along with the updated data. Writing far more to the flash than the host actually asked for is called "write amplification", and avoiding it is another use for spare area. When a file is updated, rather than updating it in place, an SSD will simply write the updated data to a completely different page and mark the old page as invalid, avoiding the read-erase-rewrite cycle. Invalid pages eventually build up, so once a block is filled with mostly invalid data it triggers the drive's aptly named "garbage collection" routine, which saves the remaining valid data (sometimes to a different block entirely), erases the block, and adds it back to the spare area pool. Additionally, modern drives used with a supporting operating system are smart about using everything available as spare area: when you run Windows 7 or a recent Linux kernel on a TRIM-supporting SSD, anything that isn't filled with valid data gets treated as spare area, including ALL the free space within the file system. (If you're wondering "Wouldn't that fragment the files and be bad?", the answer is "It would if SSDs were anything like a hard drive; fortunately they aren't." SSDs are so different from hard disks that they fragment themselves on purpose and actually work better that way!)
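Here's a toy sketch of the write amplification idea under deliberately simplified assumptions (4 KB pages, 128 pages per 512 KB block); real controllers are far more sophisticated, this just shows the ratio:

```python
# Toy model of write amplification: update one 4 KB page inside a 512 KB
# block (128 pages). These sizes come from the text above.

PAGE_KB = 4
PAGES_PER_BLOCK = 128            # 512 KB block / 4 KB page

def naive_update(pages_changed):
    """Read-modify-write the entire block just to change a few pages."""
    flash_written_kb = PAGES_PER_BLOCK * PAGE_KB     # every page gets rewritten
    host_written_kb = pages_changed * PAGE_KB
    return flash_written_kb / host_written_kb        # write amplification factor

def redirected_update(pages_changed):
    """Write the new data to fresh pages elsewhere and mark the old ones invalid."""
    return 1.0                                       # flash writes == host writes

print(f"Naive in-place update of 1 page: {naive_update(1):.0f}x write amplification")
print(f"Redirected update of 1 page:     {redirected_update(1):.0f}x write amplification")
# Garbage collection later adds back some amplification when it has to move
# the remaining valid pages out of a mostly-invalid block before erasing it.
```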
As for how long an SSD will actually last in practice... Fortunately for us, most recent SSDs keep track of their used lifespan. A couple of months after I got mine, I decided to check how much endurance I had used up and how long it would take to use up what was left. I loaded the SSD toolbox for my drive, checked the amount of data that had been written to it and the write amplification that had occurred, then calculated the remaining lifespan assuming it would last exactly 3000 P/E cycles. Once I arrived at a number, I promptly closed my calculator and the toolbox and never let a thought of endurance cross my mind again. The reason I gave up caring is that I simply couldn't imagine a scenario where I would still be using this drive in ~275 years (give or take a couple of decades) when I would reach the projected 3000 P/E cycle endurance limit. Needless to say, many online discussions greatly inflate the importance of P/E cycle endurance. It is possible to wear out an SSD within a couple of years if you dedicate a machine to constantly writing and rewriting garbage to the drive at its highest speed 24/7/365, but that is so astronomically beyond even the heaviest desktop workload that it instead becomes a good example of just how durable SSDs are.
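If you want to run the same kind of estimate for your own drive, here's the calculation as a sketch; every input below is an assumption you'd replace with the numbers from your drive's toolbox:

```python
# Endurance lifespan estimate. Every input here is an assumption you would
# replace with the real numbers from your own drive's toolbox/SMART data.

drive_gb = 120                  # usable capacity (assumed)
pe_cycles = 3000                # rated program/erase endurance
host_writes_gb_per_day = 10     # assumed daily writes for a desktop user
write_amplification = 1.5       # assumed; read yours from the toolbox

total_writable_gb = drive_gb * pe_cycles
flash_writes_gb_per_day = host_writes_gb_per_day * write_amplification
years = total_writable_gb / flash_writes_gb_per_day / 365
print(f"Estimated endurance lifespan: {years:.0f} years")
# -> about 66 years with these assumptions; lighter workloads or lower write
#    amplification stretch it out to centuries, as in my case
```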
How much faster is an SSD compared to a hard drive?
A good modern desktop hard drive can usually thrash out about 1 MB/sec in a 4 KB random access test, which is roughly 250 IOPS (I/O Operations Per Second). The high end SSDs these days can do the same task at over 350 MB/sec, or upwards of 85,000 IOPS. Even inexpensive entry level SSDs can reach 60 MB/sec, or around 15,000 IOPS, on this test, and they keep getting better all the time. So depending on the type of work the drive is doing, SSDs are anywhere from 5 times faster to 350 times faster than a common mechanical hard drive. Granted, it is necessary to keep this in perspective: not every time consuming or slow task your computer does is demanding on, or waiting for, the drive. SSDs will accelerate things like booting up, installing/uninstalling/updating/launching applications, virus scanning files, decompressing files, loading the next level in video games, and anything else that is I/O bound, typically right up until your system hits whatever other limiting factor it has (CPU, memory, etc.). However, they will not accelerate things that aren't I/O bound, such as video game or movie playback frame rates (GPU/CPU bound), video/audio encoding and image editing/word processing/spreadsheets (CPU bound), or copying/moving stuff to or from other non-SSD drives or other computers (limited by the other drive/computer or the connection between them). Your computer probably won't boot its operating system or load every application instantly, but it will do the vast majority of it noticeably faster; how much depends on the computer and what it is loading. SSDs may not save much time during some common tasks, but they will save a lot of time between tasks because they let you begin the next task much faster.
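For reference, the conversion behind those numbers is just throughput divided by operation size; IOPS and MB/sec are two views of the same measurement:

```python
# Converting between throughput and IOPS at a 4 KB operation size.

def iops_from_throughput(mb_per_s, io_size_kb=4):
    return mb_per_s * 1024 / io_size_kb

for label, mb_s in [("hard drive, 4 KB random", 1),
                    ("entry level SSD, 4 KB random", 60),
                    ("high end SSD, 4 KB random", 350)]:
    print(f"{label:30s} {mb_s:4d} MB/sec ~= {iops_from_throughput(mb_s):8,.0f} IOPS")
```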
Why now?
At this point I wouldn't expect SSDs to get much faster for a while, and the reason is the SATA connection most of them use: most drives are already reaching the interface limit of the SATA 6.0 Gbps channel and have no room left to grow. If you look at benchmarks of SSDs, you'll notice that the performance of sometimes vastly different drives (different chipsets, firmware, capacity, etc.) still tends to cluster in a few places, mainly 550-560 MB/sec in sequential tests and around 350 MB/sec in 4 KB random tests. The probable reason is that the SATA connection is the limit (4 KB tests generate more overhead than sequential tests, which explains why they come in lower). Soon drives will no longer be able to differentiate themselves very well on performance, and capacity is largely dependent on the price of the memory, which is pretty much the same between all vendors, so there should be a lot more competition on reliability and other features as each manufacturer tries to distinguish itself. That means it's a good time to start watching the market.
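Here's roughly why ~550-560 MB/sec is the ceiling: SATA uses 8b/10b encoding (10 bits on the wire for every byte of data), and protocol overhead eats a little more. The exact overhead percentage below is an assumption for illustration:

```python
# Why SATA 6.0 Gbps tops out around 550-560 MB/sec: 8b/10b encoding means
# only 8 of every 10 bits on the wire carry data, and protocol overhead
# takes a bit more. The overhead fraction here is an assumption.

line_rate_bits_per_s = 6.0e9
data_fraction = 8 / 10               # 8b/10b encoding
protocol_overhead = 0.07             # assumed framing/command overhead

payload_mb_per_s = line_rate_bits_per_s * data_fraction / 8 / 1e6   # 600 MB/sec
usable_mb_per_s = payload_mb_per_s * (1 - protocol_overhead)
print(f"Theoretical payload rate: {payload_mb_per_s:.0f} MB/sec")
print(f"Realistic ceiling:        {usable_mb_per_s:.0f} MB/sec")
# -> right about where the fastest SATA SSDs cluster in sequential benchmarks
```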
How fast could SSDs really be without SATA 6.0 Gbps holding them back? Take my entry level SSD, which is actually one of the slower ones still on the market these days and is only a SATA 3.0 Gbps drive. Internally each flash chip on my drive can deliver 160 MB/sec, which doesn't sound like *that* much considering hard drives are only a little slower and many newer SSDs can hit 500+ MB/sec, but like any other SSD this one is an array of multiple chips: my specific model is a 10 channel device, and each channel is capable of 160 MB/sec. So if you eliminated the controller-imposed and SATA 3.0 Gbps interface speed limits, even my entry level drive could in theory hit an absurd 1.6 gigabytes per second of sequential throughput (equal to an array of more than 15 conventional hard drives in RAID 0). Perhaps not that surprisingly, even the first generation of SSDs had the potential to completely shatter the SATA 6.0 Gbps interface limit even though their flash was slower (although they didn't, because they were held back by their comparatively slow controllers and other bottlenecks that hadn't been worked out yet).
Is there anything else my computer would need before I add an SSD to it?
Potentially, yes: if your computer has a slow CPU or doesn't have enough DRAM, you won't see the full benefit of an SSD. You should fix your system's other bottlenecks before investing in an SSD, or you will just end up waiting on a different component. It doesn't take a super fast or expensive system to extract the benefits of an SSD, but it does take a reasonably balanced one.