There are two main classes of cache: the read cache and the write cache.
However, data from one cache may be cached again further down in the next memory tier.
From fastest to slowest, the tiers are CPU cache/registers, system RAM, storage drives, and archival storage.
Each step down the hierarchy offers increased capacity but reduced access speed.
In the real world, most home users only have the first three tiers.
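The speed/capacity tradeoff between these tiers can be illustrated with rough, order-of-magnitude latency figures. The numbers below are assumptions for illustration only and vary widely between real devices:

```python
# Rough, order-of-magnitude access latencies per tier; these figures are
# illustrative assumptions, not measurements of any specific hardware.
tiers = [
    ("CPU cache/registers", 1e-9),   # around a nanosecond
    ("System RAM",          1e-7),   # around 100 nanoseconds
    ("Storage drive (SSD)", 1e-4),   # around 100 microseconds
    ("Archival (tape)",     1e+1),   # seconds or more just to seek
]

for name, latency in tiers:
    print(f"{name:22} ~{latency:g} s")
```

Each tier is several orders of magnitude slower than the one above it, which is exactly the gap that caching tries to hide.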
Archival storage generally refers to tape storage intended for long-term, offline storage.
Note: To some degree, cloud storage could be considered a variant of archival storage.
It is very much online but not necessarily immediately accessible and is generally slow to access.
Removable media such as USB memory also somewhat straddle the boundary between storage drive and archival storage.
These examples are significantly more likely to be found in the home than tape but are still not that common.
There are three types of disk cache.
The read cache involves temporarily copying some data from archival storage to make access quicker while it's needed.
A write cache could take the form of an SLC cache on an SSD.
An I/O cache would generally be some flash memory or DRAM used to cache both read and write operations.
The defining feature of all these is that the cache is on the disk itself.
Archival storage, by its very definition, is rarely needed.
Data can also be read directly from archival media.
The issue is speed.
Read speeds depend on the archival medium and will generally be enough for most cases, but they may not be ideal for high-bandwidth requirements such as high-definition video viewing.
Write Disk Cache
Modern SSDs are blazingly fast, offering incredible read and write speeds.
What you might not realize is that this isn't technically true.
Most SSDs on the market are TLC, aka Triple-Level Cell drives.
This means that each memory cell can store three bits of data.
Tip: TLC flash is still fast.
It's many times faster than the peak bandwidth of the SATA 3 bus used by HDDs and early SSDs.
QLC flash, or Quad-Level Cell, is even slower, in some tests actually performing slower than HDDs.
The SLC cache was invented to hide the slow write speeds from the user.
The SLC cache simply treats a portion of the TLC flash as SLC flash, storing one bit per cell, allowing it to operate at increased speeds.
This technique works excellently, offering increased speeds that have necessitated the development of new, faster standards.
SLC caches, however, have some caveats.
The size of the SLC cache is 1/3 of the remaining free space of the SSD.
As the SSD fills up, the SLC cache size decreases, and sustained write performance drops once the cache is exhausted.
Most users would not assume this behavior, though.
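The 1/3 relationship can be sketched as simple arithmetic. The helper name slc_cache_bytes is hypothetical, used only for illustration:

```python
def slc_cache_bytes(free_bytes):
    # Each TLC cell holds 3 bits; run in SLC mode it holds only 1,
    # so a given amount of free TLC space yields 1/3 as much SLC cache.
    return free_bytes // 3

GB = 10**9
print(slc_cache_bytes(900 * GB) // GB)  # 900 GB free -> 300 GB of SLC cache
print(slc_cache_bytes(90 * GB) // GB)   # nearly full drive -> only 30 GB of cache
```

This is why a nearly full SSD can feel dramatically slower during large writes than the same drive did when empty.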
I/O Disk Cache
HDDs are generally pretty slow, even in their optimum workloads.
To help hide this from the user as much as possible, an I/O cache can be used.
An I/O cache caches both read and write operations as needed.
This cache is typically made up of either Flash memory or DRAM in the drive itself.
Caching reads means that the HDD doesn't have to find and then read the data.
This can offer excellent performance benefits, but only on subsequent read operations.
The first read is always slow.
Caching writes means data can be committed to the fast cache first and flushed to the slower platters later.
This offers faster speeds but sees a big performance dip if the cache is ever exhausted.
Note: SSDs can technically also use their onboard DRAM as an I/O cache.
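The read and write sides of an I/O cache can be sketched together. This is a minimal write-back sketch under assumed names (cache, backing_store) and a deliberately tiny capacity, not a real drive firmware design:

```python
CACHE_CAPACITY = 4      # assumed tiny cache so the "exhausted" case is easy to hit
cache = {}              # fast on-drive DRAM/flash
backing_store = {}      # slow platters

def flush():
    backing_store.update(cache)  # commit cached data to the slow medium
    cache.clear()

def write(key, value):
    if len(cache) >= CACHE_CAPACITY and key not in cache:
        flush()                  # cache exhausted: pay the slow-path cost now
    cache[key] = value           # otherwise the write completes at cache speed

def read(key):
    if key in cache:             # subsequent reads of cached data are fast
        return cache[key]
    value = backing_store[key]   # the first read is always slow
    if len(cache) < CACHE_CAPACITY:
        cache[key] = value       # keep a copy for later reads
    return value
```

Writes complete at cache speed until the cache fills, at which point the drive must stall and flush to the slow medium, which is the performance dip described above.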
Conclusion
A disk cache is a cache that exists directly on a storage drive.
It can take the form of a read cache, a write cache, or an I/O cache.
Read caches typically cache data from slower, archival storage.
Write caches hide the slow write speeds of storage disks from the user.
I/O caches hide both slow read and slow write speeds from the user.
Caches are excellent usability tools but can cause some headaches for users when depleted.
This is especially true for dynamic write caches such as the SLC cache.