- that can be distributed (available, partition tolerant, eventually consistent).
- that can be dynamically extended by the applications operating on it.
- where the cardinality of a relation is decided at query time.
- where relations are dynamically typed and queries are statically typed.
- where all past values of relations are kept in the data model, to make it possible to revisit them at a later time.
- where hashes are used to reference values, which makes them location independent and makes it possible to store values redundantly.
- that is built on simple principles and easy to implement using conventional technology.
Relations
In a relational data model, it is the associations between the entities (objects, things) that we are interested in. They are collected in relations. This representation is complementary to the object-oriented (hierarchical, document) data model, where objects (records, documents) contain references to other objects. One advantage of the relational data model is that it doesn't impose any particular access pattern, as relations can be navigated in any direction.
The entities in the relational data model we are about to construct are represented by unique identifiers, which are used to track the entities across multiple relations. An entity is defined by the relations it is part of. It has no other attributes or properties in addition to this.
To illustrate this, imagine that we want to keep track of the persons who appear in the photos in our digital photo collection. For this we need the relation appears_in20.
CREATE TABLE appears_in20 (a uuid, b uuid);
The relation is a binary relation with 2 entities and 0 values, indicated by the digits '20' at the end of the name. As we can see, there is nothing in the relation that mentions persons or photos. Any type of entities can be related in this relation. This could cause us trouble further on when constructing queries that should only give us persons and photos. Therefore, we will also create unary identity relations for each type of entity.
CREATE TABLE person10 (a uuid);
CREATE TABLE photo10 (a uuid);
So far, we have shown how to relate entities to form a graph. To go beyond the pure graph and fill the model with some content, we need to be able to relate entities with values (attributes) as well. A value is a sequence of bytes which is given meaning through its type. A sequence of bytes can be hashed to give us a uniform reference to any value.
We go back to the persons and photos, and decide that a person should have a name encoded in UTF-8 (text/plain) and that a photo should be associated with a JPEG file (image/jpeg). A SHA-256 hash in hex format is 64 characters long.
CREATE TABLE name11 (a uuid, h char(64));
CREATE TABLE image11 (a uuid, h char(64));
CREATE TABLE text_plain01 (h char(64));
CREATE TABLE image_jpeg01 (h char(64));
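To make the reference scheme concrete, here is a minimal Python sketch (the helper name value_ref is my own) of how a value maps to the 64-character hex digest stored in the h columns:

```python
import hashlib

def value_ref(data: bytes) -> str:
    """Return the SHA-256 hex digest used as a location-independent value reference."""
    return hashlib.sha256(data).hexdigest()

ref = value_ref("Per".encode("utf-8"))  # a text/plain value
print(ref)  # 64 hex characters, matching the char(64) columns above
```

Because the reference is derived from the bytes alone, any node can compute it independently and use it to look the value up wherever it happens to be stored.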
Still, we have no actual values in the model, just references to them. This is solved by adding a relation between the hash and the value.
CREATE TABLE text_value (h char(64), v text);
CREATE TABLE binary_value (h char(64), v bytea);
There will be one value relation for each native database type we want to use.
Large binary data may not be suitable to store in the database. It can also be more practical to fetch an image or document from a simple web server instead of the database. It depends on the system architecture. For these values, we introduce a relation to associate the value with a URL. The URL is itself a value with type text/plain.
CREATE TABLE location02 (h char(64), i char(64));
The mapping of mime types to database types, and whether values of a certain type are stored in the database or not, is knowledge that needs to be shared among the various parts of the system that interact with the relational model.
The arrangement to use hashes as references to values allows us to store values redundantly, duplicated at several locations. Typically, the location02 relation is populated by crawlers that find and track values stored in archives, on the web or locally.
A legitimate concern with the proposed data model is the many joins required to put together a record with all values and associations for an entity. The overall performance of the data model may not be as good as a more complex relational model, but we are willing to trade performance for simplicity in this case. The goal is to allow the data model to grow dynamically, using just a few core concepts. To simplify the construction of queries, the plan is to design a specialized query language that can take advantage of the symmetries in the data model.
We observe that the relation appears_in20 has the cardinality many-to-many. How other cardinalities should be represented will be addressed later. First, we will take a step back and see how we can create and update relations.
A Distributed Event Log
Using the log at LinkedIn as a source of inspiration, we will use a log of events as the original source of information flowing into the database. The log is kept in a directory in the filesystem. A log entry is one file, containing a JSON array of objects. Each event can either associate or dissociate entities and values using named relations.
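As a sketch of what such a log entry might look like (the uuids and the timestamp are made up for illustration; the attribute layout follows the conventions described next):

```python
import hashlib
import json
import uuid

person, photo = str(uuid.uuid4()), str(uuid.uuid4())

# One log entry: a JSON array of event objects.
entry = [
    {"o": "associate", "r": "appears_in20",
     "a": person, "b": photo, "t": "2015-06-01T12:00:00Z"},
    # Values are logged as immutable facts, without operation and time.
    {"r": "text_plain01", "h": hashlib.sha256(b"Per").hexdigest()},
]

payload = json.dumps(entry, sort_keys=True).encode("utf-8")
# A content-derived file name keeps log entry names unique across nodes.
filename = hashlib.sha256(payload).hexdigest() + ".json"
print(filename)
```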
With a modest amount of mental effort, it is possible to map this notation to the relations defined earlier. JSON attributes 'a' - 'g' are entities and 'h' - 'n' are values. The first log entry that introduces a new relation will effectively create the corresponding table, indexes and views, allowing the schema to grow dynamically. The JSON attribute 'r' names the relation in question. The JSON attribute 'o' names the operation to perform on the relation, where 'associate' adds the association to the relation. The counteraction is 'dissociate', which removes the association from the relation. The log entry is fixed in time with the JSON attribute 't'. (We can also see that values are logged using a simpler relation without time and operation. The hash of a value simply exists as an immutable fact.) All of these attributes come together to create an append-only relation in the database.
CREATE TABLE appears_in20all (a uuid, b uuid, o text, t timestamp);
From this, we derive the appears_in20 relation. The plan is to include a pair (a', b') only if the number of rows with o='associate' for the pair is greater than the number of rows with o='dissociate'. Expressed differently, there must be more log entries that want to associate a' and b' than there are log entries that want to dissociate the two.
CREATE VIEW appears_in20 AS (
SELECT a, b FROM (
SELECT DISTINCT ON (a, b) a, b, o FROM appears_in20all
GROUP BY o, a, b
ORDER BY a, b, COUNT(o) DESC, o DESC) AS c
WHERE o = 'associate');
The query groups the related entity pair with the operation in the inner SELECT and counts how many times each combination occurs. It then sorts the combinations in descending order, so that the most frequently occurring one ends up at the top. When the number of 'associate' and 'dissociate' operations is equal for a pair, 'dissociate' will end up on top, thanks to the descending sort on o. It then picks the top row for each pair with DISTINCT ON (a, b). As a final touch, the outer SELECT only keeps the rows where the operation is 'associate'.
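The same derivation can be sketched outside the database. The following Python sketch (my own illustration, not part of the model) replays associate/dissociate events and keeps a pair only when its 'associate' count exceeds its 'dissociate' count, with ties resolved as dissociated, matching the o DESC tie-break in the view:

```python
from collections import Counter

def current_pairs(events):
    """events: iterable of (a, b, o) rows from appears_in20all.
    A pair is present when it has strictly more 'associate' than
    'dissociate' rows; a tie counts as dissociated."""
    votes = Counter()
    for a, b, o in events:
        votes[(a, b)] += 1 if o == "associate" else -1
    return {pair for pair, n in votes.items() if n > 0}

log = [("per", "photo1", "associate"),
       ("per", "photo1", "dissociate"),   # tie: the pair drops out
       ("eva", "photo1", "associate")]
print(current_pairs(log))                  # {('eva', 'photo1')}
```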
The reason for this arrangement is to let the database be distributable while trying to reach eventual consistency. We want to distribute the database to get data redundancy rather than load balancing. By synchronizing the event log directory between the participating nodes on the file level, changes from different nodes are merged. Just make sure the file names of the log entries are unique, for example by using a hash of the contents. I know I am out on thin ice here, but I believe this corresponds to an Available and Partition tolerant system in the context of the CAP theorem. The distributed system as a whole does not guarantee Consistency, even though it will eventually (sometimes) be consistent. The limitation of this arrangement is that the nodes must be cooperating and can't compete for limited resources. You need distributed transactions, or transactions on a common write node, to be able to do that.
Another benefit is the ability to go back in time, by creating a view like appears_in20 with a time constraint. Yet another benefit is the possibility for an application to read the entire history of how a relation has evolved to become what it is today, and make it possible to undo changes.
The data doesn't have to be merged into a single log. It is possible to partition the log locally and let the data be merged in the database. You can partition on where the data comes from, who is the author, what domain it belongs to or any other reason. This makes it easy to incorporate, discard and recombine information from different sources in the local database.
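A sketch of such local partitioning, assuming one directory per source (the helper name and layout are my own): read every entry file from every partition and feed the union of events to the database. Since the derived views only count operations and compare timestamps, the merge order does not matter.

```python
import json
import tempfile
from pathlib import Path

def read_events(log_dirs):
    """Yield every event from every log partition."""
    for d in log_dirs:
        for f in sorted(Path(d).glob("*.json")):
            yield from json.loads(f.read_text())

# Two partitions, e.g. one per author.
root = Path(tempfile.mkdtemp())
for part, events in [("alice", [{"o": "associate", "r": "appears_in20"}]),
                     ("bob",   [{"o": "associate", "r": "person10"}])]:
    d = root / part
    d.mkdir()
    (d / "entry1.json").write_text(json.dumps(events))

merged = list(read_events([root / "alice", root / "bob"]))
print(len(merged))   # 2
```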
Cardinality
So far we have defined a many-to-many relation. We would also like to define one-to-many and one-to-one relations. This can be done using the recorded timestamp to select only the most recent association. We will begin with a redefinition of the many-to-many relation.
CREATE VIEW appears_in20cnn AS (
SELECT a, b, t FROM (
SELECT DISTINCT ON (a, b) a, b, o, max(t) AS t FROM appears_in20all
GROUP BY o, a, b
ORDER BY a, b, COUNT(o) DESC, o DESC) AS c
WHERE o = 'associate');
CREATE VIEW appears_in20c1n AS (
SELECT DISTINCT ON (b) a, b, t FROM appears_in20cnn
ORDER BY b, t DESC);
CREATE VIEW appears_in20cn1 AS (
SELECT DISTINCT ON (a) a, b, t FROM appears_in20cnn
ORDER BY a, t DESC);
CREATE VIEW appears_in20c11 AS (
SELECT a, b, t FROM appears_in20c1n
INTERSECT
SELECT a, b, t FROM appears_in20cn1);
Worth mentioning here is that if a relation is used in different contexts, for example the relation portrait_of20, which relates both families to photos and persons to photos, the relations with lower cardinality are type agnostic. This means that a photo can only be associated one-to-one with either a person or a family, not both.
Another observation is that for one-to-many relations, when we use the 'associate' operation to overwrite the side with cardinality one, the previous association is pushed onto a stack of old associations. Later, when the current association is dissociated, the association on top of the stack reappears. An example would be a person associated with the name 'Per' and later corrected with a new association with the name 'Pär'. When 'Pär' is dissociated, 'Per' reappears as the associated name. To avoid this, the application has to dissociate the old name when the name changes. This procedure also makes the many-to-many relation identical to the one-to-many relation.
To undo the effects of a log entry, just change 'associate' to 'dissociate' and vice versa to form the inverse log entry.
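This inversion can be sketched in a few lines of Python (the function name is my own; value facts, which have no operation, are left out since they are immutable):

```python
def invert(entry):
    """Return the inverse log entry: swap associate and dissociate."""
    flip = {"associate": "dissociate", "dissociate": "associate"}
    return [dict(e, o=flip[e["o"]]) for e in entry if "o" in e]

entry = [{"o": "associate", "r": "appears_in20",
          "a": "p1", "b": "ph1", "t": "2015-06-01T12:00:00Z"}]
print(invert(entry))
```

Appending the inverse entry to the log undoes the original; inverting twice gives back the original events.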
Case Study: the Family Album
Below is the auto-generated schema for a relational model representing a family album.
Wrapping Up
We have designed a distributed relational data model, dynamically extensible by users through a replicated event log. Relations associate entities, identified by UUID, with other entities and/or values, identified by their hash. Values are stored in remote or local archives and are accessed using URLs. Values can also be stored in the event log and be cached in the data model. Users can access historical values of a relation, alongside many-to-many, one-to-many and one-to-one interpretations of the current value of the relation.
The architecture supports a number of variations:
- Log Reader and File Crawler can work over the web.
- Log synchronization does not have to be bidirectional.
- Data sets from different domains can easily be merged.
- It is always possible to drop and recreate the relational database from the Event Log. This may take a long time though, if there are many events.
A number of open questions remain:
- How do we merge different uuids that represent the same thing? (The aka20 relation!)
- How far will it scale? Are there any bottlenecks?
- What would a domain specific query language look like? Compiled to SQL.
- Can the Web Application itself be stored in the Archive? Can it be modified in the GUI? Can it compile replacement parts of itself from source at runtime? Integrated make. Can the compiler and the build system also be stored in the archive and be modifiable? Can we undo if the ability to change the application breaks due to changes to its development environment? What if undo also breaks? Which interpreter do we target?
References
Out of the Tar Pit, Ben Moseley and Peter Marks
A text in line with my own thoughts on how to design information systems. I found section 7 particularly familiar, where they describe how to turn informal requirements into formal requirements, which are then executed. I have written about this earlier here and here.
The Mess We're In, Joe Armstrong
36 minutes into the video, Joe introduces the idea to use the hashes as references to values.
The Log: What every software engineer should know about real-time data's unifying abstraction, Jay Kreps
Inspiring article about the usefulness of the log.
Turning the Database Inside-out with Apache Samza, Martin Kleppmann
More thoughts around the log and materialized views.
File replication software.