Performance Statistics

The questions will inevitably arise: How does Fossil perform? Does it use a lot of disk space or bandwidth? Is it scalable? In an attempt to answer these questions, this report looks at several projects that use Fossil for configuration management and examines how well they are working. The following table is a summary of the results. (Last updated on 2012-02-26.) Explanation and analysis follow the table.
Project statistics (durations as of 2009-08-23):

[http://www.sqlite.org/src/timeline | SQLite]
    Artifacts: 41113    Check-ins: 9943    Duration: 4290 days (11.75 yrs)
    Avg check-ins/day: 2.32    Uncompressed size: 2.09 GB    Repository size: 33.2 MB
    Compression ratio: 63:1    Clone bandwidth: 23.2 MB

[http://core.tcl.tk/tcl/timeline | TCL]
    Artifacts: 74806    Check-ins: 13541    Duration: 5085 days (13.92 yrs)
    Avg check-ins/day: 2.66    Uncompressed size: 5.2 GB    Repository size: 86 MB
    Compression ratio: 60:1    Clone bandwidth: 67.0 MB

[/timeline | Fossil]
    Artifacts: 15561    Check-ins: 3764    Duration: 1681 days (4.6 yrs)
    Avg check-ins/day: 2.24    Uncompressed size: 721 MB    Repository size: 18.8 MB
    Compression ratio: 38:1    Clone bandwidth: 12.0 MB

[http://www.sqlite.org/slt/timeline | SLT]
    Artifacts: 2174    Check-ins: 100    Duration: 1183 days (3.24 yrs)
    Avg check-ins/day: 0.08    Uncompressed size: 1.94 GB    Repository size: 143 MB
    Compression ratio: 12:1    Clone bandwidth: 141 MB

[http://www.sqlite.org/th3.html | TH3]
    Artifacts: 5624    Check-ins: 1472    Duration: 1248 days (3.42 yrs)
    Avg check-ins/day: 1.78    Uncompressed size: 252 MB    Repository size: 12.5 MB
    Compression ratio: 20:1    Clone bandwidth: 12.2 MB

[http://www.sqlite.org/docsrc/timeline | SQLite Docs]
    Artifacts: 3664    Check-ins: 1003    Duration: 1567 days (4.29 yrs)
    Avg check-ins/day: 0.64    Uncompressed size: 108 MB    Repository size: 6.6 MB
    Compression ratio: 16:1    Clone bandwidth: 5.71 MB
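The compression ratios above are simply the uncompressed size divided by the repository size. A short sketch that recomputes them from the table's figures (sizes are decimal, so 1 GB = 10^9 bytes; most ratios match the table's rounded values, though SLT comes out slightly higher than 12:1, presumably because the table's sizes are themselves rounded):

```python
# Recompute the "Compression ratio" column from the table's
# uncompressed and repository sizes (decimal units: MB = 10**6,
# GB = 10**9 bytes, as noted later in this article).
SIZES = {
    # project: (uncompressed bytes, repository bytes)
    "SQLite":      (2.09e9, 33.2e6),
    "TCL":         (5.2e9,  86e6),
    "Fossil":      (721e6,  18.8e6),
    "SLT":         (1.94e9, 143e6),
    "TH3":         (252e6,  12.5e6),
    "SQLite Docs": (108e6,  6.6e6),
}

for project, (uncompressed, repo) in SIZES.items():
    print(f"{project:12s} {uncompressed / repo:5.1f}:1")
```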

Measured Attributes

In Fossil, every version of every file, every wiki page, every change to every ticket, and every check-in is a separate "artifact". One way to think of a Fossil project is as a bag of artifacts. Of course, there is a lot more than this going on in Fossil. Many of the artifacts have meaning and are related to other artifacts. But at a low level (for example, when synchronizing two instances of the same project) the only thing that matters is the unordered collection of artifacts. In fact, one of the key characteristics of Fossil is that the entire project history can be reconstructed simply by scanning the artifacts in an arbitrary order.

The number of check-ins is the number of times that the "commit" command has been run. A single check-in might change 3 or 4 files, or it might change dozens or hundreds of files. Regardless of the number of files changed, it still counts as only one check-in.

The "Uncompressed Size" is the total size of all the artifacts within the repository assuming they were all uncompressed and stored separately on disk. Fossil makes use of delta compression between related versions of the same file, and then uses zlib compression on the resulting deltas. The resulting repository size is shown in the next column. For this chart, "fossil rebuild --compress" was run on each repository prior to measuring its compressed size. Repository sizes would typically be about 20% larger without that rebuild.

At the right end of the table is the "Clone Bandwidth": the total number of bytes sent from the server back to the client during a clone. The number of bytes sent from client to server is negligible in comparison. These byte counts include HTTP protocol overhead.

In the table and throughout this article, "GB" means gigabytes (10^9 bytes), not gibibytes (2^30 bytes). Similarly, "MB" and "KB" mean megabytes and kilobytes, not mebibytes and kibibytes.
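The delta-then-zlib idea can be illustrated with a toy sketch. Fossil uses its own binary delta format, not unified diffs; the code below only demonstrates the principle of storing the first version whole and each later version as a compressed delta against its predecessor:

```python
import difflib
import zlib

def store_delta_chain(versions):
    """Toy delta storage: keep the first version whole (zlib-compressed),
    then store each later version as a zlib-compressed unified diff
    against its predecessor.  (Fossil's real delta format is a custom
    binary encoding; this is only an illustration of the idea.)"""
    blobs = [zlib.compress(versions[0].encode())]
    for prev, cur in zip(versions, versions[1:]):
        delta = "".join(difflib.unified_diff(
            prev.splitlines(keepends=True),
            cur.splitlines(keepends=True)))
        blobs.append(zlib.compress(delta.encode()))
    return blobs

# 50 revisions of a file, each revision changing only one line --
# the pattern that gives long chains their high compression ratios.
versions = ["revision marker: %d\n" % i + "unchanged body line\n" * 200
            for i in range(50)]

blobs = store_delta_chain(versions)
uncompressed = sum(len(v.encode()) for v in versions)
stored = sum(len(b) for b in blobs)
print(f"uncompressed: {uncompressed} bytes, stored: {stored} bytes, "
      f"about {uncompressed // stored}:1")
```

Because each delta records only the one changed line, the stored size grows far more slowly than the sum of the full versions, which is why long revision chains compress so well.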

Analysis And Supplemental Data

Perhaps the two most interesting data points in the above table are SQLite and SLT. SQLite is a long-running project with long revision chains. Some of the files in SQLite have been edited over a thousand times. Each of these edits is stored as a delta, and hence the SQLite project gets an excellent 63:1 compression ratio. SLT, on the other hand, consists of many large (megabyte-sized) SQL scripts that have one or maybe two edits each. There is very little delta compression occurring, and so the overall repository compression ratio is much lower. Note also that quite a bit more bandwidth is required to clone SLT than SQLite.

For the first nine years of its development, SQLite was versioned by CVS. The resulting CVS repository measured over 320 MB in size. So the developers were surprised to see that the entire project could be cloned in Fossil using only about 23.2 MB of network traffic. (This 23.2 MB includes all the changes to SQLite that have been made since the conversion from CVS. If those changes are omitted, the clone bandwidth drops to 13 MB.)

The "sync" protocol used by Fossil has turned out to be surprisingly efficient. A typical check-in on SQLite might use 3 or 4 KB of network bandwidth total, hardly worth measuring. The sync protocol is efficient enough that, once cloned, Fossil could easily be used over a dial-up connection.
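A back-of-envelope calculation puts these bandwidth figures in perspective. The 56 kbit/s modem rate below is an assumed typical dial-up speed, not a figure from this report:

```python
# Average clone cost per check-in, from the table's SQLite figures.
clone_bytes = 23.2e6          # SQLite clone bandwidth (from the table)
checkins = 9943               # SQLite check-ins (from the table)
per_checkin_kb = clone_bytes / checkins / 1e3
print(f"clone cost per check-in: {per_checkin_kb:.1f} KB")

# Time to sync one typical check-in over dial-up.  The 56 kbit/s
# rate is an assumption, not a measurement from this article.
sync_bytes = 4e3              # ~4 KB for a typical incremental sync
modem_bytes_per_sec = 56e3 / 8
print(f"one check-in sync over dial-up: "
      f"{sync_bytes / modem_bytes_per_sec:.1f} s")
```

At a couple of KB per check-in, even a full clone amortizes to very little traffic per unit of project history, and an incremental sync fits comfortably within a dial-up connection's capacity.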