Systems Recipes - Issue #7
Welcome to Issue #7 of Systems Recipes. We’re going down a persistent route today to talk about the art of storing data reliably and durably. Specifically, let’s talk about databases!
Instead of giving three links, we’re going to switch it up a little and walk through the things we find fascinating in three categories of databases, with links to papers and articles along the way. We hope you enjoy it 🙏
Relational Databases
Relational databases have been around since the 1970s, and entire companies and industries have been built on the relational model of storing data.
Many concepts originally proposed by Edgar Codd, the IBM researcher who published “A Relational Model of Data for Large Shared Data Banks”, remain relevant to how we think about data today.
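To make the relational model concrete, here’s a minimal sketch using Python’s built-in sqlite3 module (the tables and data are made up for illustration): facts live in relations of tuples, rows are linked by values rather than pointers, and a declarative query joins them without us specifying how.

```python
import sqlite3

# Two relations: every fact is a row, and rows reference each other
# by value -- the core idea of Codd's relational model.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id),
                         amount REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO orders VALUES (10, 1, 9.99), (11, 1, 5.00), (12, 2, 42.00);
""")

# A declarative join: we say *what* we want, the database decides *how*.
for name, total in con.execute("""
    SELECT u.name, SUM(o.amount)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
"""):
    print(name, total)  # ada 14.99 / grace 42.0
```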
Databases are complex systems under the hood, involving many components that work together to make sure your data is durably stored. “Architecture of a Database System” takes you through all the components involved and how they fit into the puzzle.
Sometimes, hardware or low-level kernel implementations can have drastic (and sometimes very negative) consequences for durability. Postgres had a fascinating bug where a long-standing expectation about the fsync system call turned out to be wrong: retrying a failed fsync does not reliably make the data durable, because the kernel may already have marked the dirty pages clean. A really interesting paper on exactly this question was presented at USENIX ATC 2020, titled “Can Applications Recover from fsync Failures?”.
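To make the failure mode concrete, here’s a minimal sketch of the write-then-fsync pattern in Python (the helper function and path handling are ours, purely for illustration):

```python
import os

def durable_append(path: str, data: bytes) -> None:
    """Append data and only return once it should be on disk."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        try:
            os.fsync(fd)  # flush the kernel page cache to the device
        except OSError:
            # The dangerous path: on Linux, the failed dirty pages may now
            # be marked clean, so calling os.fsync(fd) again here can
            # "succeed" WITHOUT the data ever reaching disk. Safer options
            # are to crash and recover from a log, or rewrite the data.
            raise
    finally:
        os.close(fd)
```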
NoSQL and Key/Value Stores
NoSQL databases and key/value stores form a huge domain and have become synonymous with making data storage more scalable and highly available. A paper from Amazon titled “Dynamo: Amazon’s Highly Available Key-value Store” was the catalyst for this massively growing area of research, influencing key systems like Cassandra and DynamoDB.
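One of Dynamo’s core building blocks is consistent hashing, which spreads keys across nodes so that adding or removing a node only moves a small fraction of the keys. Here’s a minimal sketch (it omits the virtual nodes and preference lists the real paper adds):

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

class ConsistentHashRing:
    """Each node owns the arc of the hash ring up to its position."""
    def __init__(self, nodes):
        self._ring = sorted((_hash(n), n) for n in nodes)
        self._points = [h for h, _ in self._ring]

    def node_for(self, key: str) -> str:
        # Walk clockwise to the first node at or after the key's hash.
        i = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))  # deterministic owner for this key
```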
Many folks saw NoSQL as an answer to all their scaling woes. However, you can’t just take a relational schema and expect it to work and scale in a NoSQL system. This excellent talk by Rick Houlihan covers DynamoDB advanced design patterns, and it is well worth a watch even if you don’t use DynamoDB.
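One of the talk’s central ideas is single-table design: rather than normalizing into many tables, you overload a partition key and a sort key so that one query satisfies a whole access pattern. A rough sketch of the item layout (the key names and entities are made up for illustration):

```python
# One "table" holds several entity types, distinguished by key shape.
# PK groups related items together; SK orders them within the partition.
items = [
    {"PK": "USER#42", "SK": "PROFILE",        "name": "ada"},
    {"PK": "USER#42", "SK": "ORDER#2024-001", "amount": 9.99},
    {"PK": "USER#42", "SK": "ORDER#2024-002", "amount": 5.00},
]

def query(pk: str, sk_prefix: str = ""):
    """Mimics a key/value store query: one partition, a sorted range."""
    return sorted(
        (i for i in items
         if i["PK"] == pk and i["SK"].startswith(sk_prefix)),
        key=lambda i: i["SK"],
    )

# One round trip answers "user 42 and all their orders":
print(query("USER#42"))
# ...or just the orders:
print(query("USER#42", sk_prefix="ORDER#"))
```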
Google published a paper (and a cloud service of the same name) on “Spanner: Google’s Globally-Distributed Database”. It leverages key infrastructure components like Google’s network and storage layers, as well as a time API called TrueTime, to support externally-consistent distributed transactions.
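The trick behind external consistency is that TrueTime exposes clock *uncertainty* rather than a single timestamp, and Spanner waits out that uncertainty before acknowledging a commit. A toy sketch of the idea (the epsilon value is invented; the real bound comes from GPS and atomic clocks):

```python
import time

EPSILON = 0.005  # assumed clock uncertainty in seconds (made up)

def tt_now() -> tuple[float, float]:
    """TrueTime-style interval: real time lies within [earliest, latest]."""
    t = time.time()
    return (t - EPSILON, t + EPSILON)

def commit(apply_write) -> float:
    _, latest = tt_now()
    commit_ts = latest  # pick a timestamp past the current uncertainty
    apply_write()
    # Commit wait: don't acknowledge until commit_ts is definitely in the
    # past everywhere, so later transactions get strictly larger stamps.
    while tt_now()[0] <= commit_ts:
        time.sleep(EPSILON / 2)
    return commit_ts
```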
Analytical Processing
When designing a performant data store, you want to minimize things like disk seeks, and thus you want to lay your data out accordingly. If your workload often involves aggregating across columns (for example, in analytics workloads where you want to find an average over a column), it makes more sense to store your data on disk column by column.
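The difference is easiest to see in terms of what you actually have to read. A toy sketch contrasting the two layouts (the data is made up):

```python
# Row-oriented: the values of one record sit together. To average
# "amount" we still touch every field of every row.
rows = [
    {"id": 1, "country": "IS", "amount": 10.0},
    {"id": 2, "country": "NO", "amount": 20.0},
    {"id": 3, "country": "IS", "amount": 30.0},
]
avg_row = sum(r["amount"] for r in rows) / len(rows)

# Column-oriented: each column is one contiguous array. The same average
# scans a single array and never touches "id" or "country" at all.
columns = {
    "id":      [1, 2, 3],
    "country": ["IS", "NO", "IS"],
    "amount":  [10.0, 20.0, 30.0],
}
avg_col = sum(columns["amount"]) / len(columns["amount"])

assert avg_row == avg_col == 20.0
```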
These systems are often touted as data warehouses, or systems geared towards analytical processing (in contrast to transactional, real-time processing).
“C-Store: A Column-oriented DBMS” was very influential in this field; it proposed the idea of storing data in a columnar format. This has significant advantages: the data compresses much more efficiently, and queries become cheaper because only the columns needed for each query are read (as opposed to reading every column in a row-oriented data store).
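Sorted, low-cardinality columns in particular compress extremely well with simple schemes like run-length encoding, one of the techniques columnar stores in the C-Store lineage rely on. A minimal sketch:

```python
from itertools import groupby

def rle_encode(column):
    """Collapse runs of repeated values into (value, run_length) pairs."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(column)]

def rle_decode(pairs):
    return [value for value, n in pairs for _ in range(n)]

# A sorted, low-cardinality column -- common in columnar stores.
country = ["IS"] * 4 + ["NO"] * 3 + ["SE"] * 2
encoded = rle_encode(country)
print(encoded)                 # [('IS', 4), ('NO', 3), ('SE', 2)]
assert rle_decode(encoded) == country
```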
This paper by Google on “Dremel: Interactive Analysis of Web-Scale Datasets” describes Google’s massively parallel database, which processes multiple terabytes of data in seconds by farming out queries to thousands of workers.
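The essence of that model is that an aggregate decomposes into partial results which workers compute independently and a coordinator merges. A toy sketch of the fan-out using a local process pool (the shards and data are made up):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum_count(shard):
    """Each worker aggregates only its own shard of the column."""
    return sum(shard), len(shard)

if __name__ == "__main__":
    # Pretend each list is a shard of one column on a different machine.
    shards = [[1.0, 2.0], [3.0, 4.0, 5.0], [6.0]]

    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(partial_sum_count, shards))

    # The coordinator merges partial aggregates into the final answer.
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    print(total / count)  # 3.5
```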