<title>Fossil Performance</title>
<h1 align="center">Performance Statistics</h1>

The questions will inevitably arise:  How does Fossil perform? 
Does it use a lot of disk space or bandwidth?  Is it scalable?

In an attempt to answer these questions, this report looks at several
projects that use fossil for configuration management and examines how
well they are working.  The following table is a summary of the results.
(Last updated on 2012-02-26.)
Explanation and analysis follow the table.

<table border=1>
<tr>
<th>Project</th>
<th>Number Of Artifacts</th>
<th>Number Of Check-ins</th>
<th>Project&nbsp;Duration<br>(as of 2012-02-26)</th>
<th>Average Check-ins Per Day</th>
<th>Uncompressed Size</th>
<th>Repository Size</th>
<th>Compression Ratio</th>
<th>Clone Bandwidth</th>
</tr>

<tr align="center">
<td>[http://www.sqlite.org/src/timeline | SQLite]
<td>41113
<td>9943
<td>4290&nbsp;days<br>11.75&nbsp;yrs
<td>2.32
<td>2.09&nbsp;GB
<td>33.2&nbsp;MB
<td>63:1
<td>23.2&nbsp;MB
</tr>

<tr align="center">
<td>[http://core.tcl.tk/tcl/timeline | TCL]
<td>74806
<td>13541
<td>5085&nbsp;days<br>13.92&nbsp;yrs
<td>2.66
<td>5.2&nbsp;GB
<td>86&nbsp;MB
<td>60:1
<td>67.0&nbsp;MB
</tr>

<tr align="center">
<td>[/timeline | Fossil]
<td>15561
<td>3764
<td>1681&nbsp;days<br>4.6&nbsp;yrs
<td>2.24
<td>721&nbsp;MB
<td>18.8&nbsp;MB
<td>38:1
<td>12.0&nbsp;MB
</tr>

<tr align="center">
<td>[http://www.sqlite.org/slt/timeline | SLT]
<td>2174
<td>100
<td>1183&nbsp;days<br>3.24&nbsp;yrs
<td>0.08
<td>1.94&nbsp;GB
<td>143&nbsp;MB
<td>12:1
<td>141&nbsp;MB
</tr>

<tr align="center">
<td>[http://www.sqlite.org/th3.html | TH3]
<td>5624
<td>1472
<td>1248&nbsp;days<br>3.42&nbsp;yrs
<td>1.18
<td>252&nbsp;MB
<td>12.5&nbsp;MB
<td>20:1
<td>12.2&nbsp;MB
</tr>

<tr align="center">
<td>[http://www.sqlite.org/docsrc/timeline | SQLite Docs]
<td>3664
<td>1003
<td>1567&nbsp;days<br>4.29&nbsp;yrs
<td>0.64
<td>108&nbsp;MB
<td>6.6&nbsp;MB
<td>16:1
<td>5.71&nbsp;MB
</tr>

</table>

<h2>Measured Attributes</h2>

In Fossil, every version of every file, every wiki page, every change to
every ticket, and every check-in is a separate "artifact".  One way to
think of a Fossil project is as a bag of artifacts.  Of course, there is
a lot more than this going on in Fossil.  Many of the artifacts have meaning
and are related to other artifacts.  But at a low level (for example when
synchronizing two instances of the same project) the only thing that matters
is the unordered collection of artifacts.  In fact, one of the key 
characteristics of Fossil is that the entire project history can be
reconstructed simply by scanning the artifacts in an arbitrary order.
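
Because a repository is a single SQLite database file, these artifact
counts are easy to reproduce.  The sketch below assumes the standard
repository schema, in which the BLOB table holds one row per artifact
and its "size" column gives the artifact's uncompressed length in
bytes; the repository filename is a placeholder.

<verbatim>
import sqlite3

# A Fossil repository is an ordinary SQLite database file.
# "project.fossil" is a hypothetical path.
db = sqlite3.connect("project.fossil")

# One row per artifact; "size" is the uncompressed length in bytes.
# (Phantom artifacts carry size -1, so clamp them to zero.)
count, total = db.execute(
    "SELECT count(*), sum(max(size,0)) FROM blob").fetchone()
print(f"{count} artifacts, {total/1e9:.2f} GB uncompressed")  # GB = 10^9
db.close()
</verbatim>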

The number of check-ins is the number of times that the "commit" command
has been run.  A single check-in might change 3 or 4 files, or it might
change dozens or hundreds of files.  Regardless of the number of files
changed, it still only counts as one check-in.
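
Counting check-ins works the same way, assuming the standard EVENT
table in which each row of type 'ci' records one commit regardless of
how many files it touched:

<verbatim>
import sqlite3

db = sqlite3.connect("project.fossil")   # hypothetical path, as above
(checkins,) = db.execute(
    "SELECT count(*) FROM event WHERE type='ci'").fetchone()
print(f"{checkins} check-ins")
db.close()
</verbatim>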

The "Uncompressed Size" is the total size of all the artifacts within
the repository assuming they were all uncompressed and stored 
separately on the disk.  Fossil makes use of delta compression between related
versions of the same file, and then uses zlib compression on the resulting
deltas.  The total resulting repository size is shown after the uncompressed
size.  For this chart, "fossil rebuild --compress" was run on each repository
prior to measuring its compressed size.  Repository sizes would typically 
be 20% larger without that rebuild.
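
The benefit of delta-plus-zlib storage can be seen in miniature.  The
sketch below is not Fossil's actual delta encoder; it borrows zlib's
preset-dictionary feature as a crude stand-in for delta encoding, and
compares compressing two similar revisions independently against
storing one revision whole and the other as a delta:

<verbatim>
import zlib

# Two revisions of a file; rev2 differs from rev1 by a single line.
rev1 = b"\n".join(b"line %04d of a source file" % i for i in range(1000))
rev2 = rev1.replace(b"line 0500", b"LINE 0500")

# Plan A: compress each revision independently.
plan_a = len(zlib.compress(rev1)) + len(zlib.compress(rev2))

# Plan B: store rev2 whole and encode rev1 against it.  The preset
# dictionary lets the encoder copy matching runs out of rev2.
enc = zlib.compressobj(zdict=rev2)
delta = enc.compress(rev1) + enc.flush()
plan_b = len(zlib.compress(rev2)) + len(delta)

print(f"independent: {plan_a} bytes   whole+delta: {plan_b} bytes")
</verbatim>

On a file with a long edit history the savings compound, since each new
revision costs only a small delta.  That is how chains of a thousand
edits reach ratios like SQLite's 63:1 (2.09&nbsp;GB divided by
33.2&nbsp;MB gives roughly 63).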

On the right end of the table, we show the "Clone Bandwidth".  This is the
total number of bytes sent from server back to the client.  The number of
bytes sent from client to server is negligible in comparison.
These byte counts include HTTP protocol overhead.

In the table and throughout this article,
"GB" means gigabytes (10<sup><small>9</small></sup> bytes)
not <a href="http://en.wikipedia.org/wiki/Gibibyte">gibibytes</a>
(2<sup><small>30</small></sup> bytes).  Similarly, "MB" and "KB"
mean megabytes and kilobytes, not mebibytes and kibibytes.

<h2>Analysis And Supplemental Data</h2>

Perhaps the two most interesting data points in the above table are SQLite
and SLT.  SQLite is a long-running project with long revision chains.
Some of the files in SQLite have been edited over a thousand times.
Each of these edits is stored as a delta, and hence the SQLite project
gets excellent 63:1 compression.  SLT, on the other hand, consists of
many large (megabyte-sized) SQL scripts that have one or maybe two
edits each.  There is very little delta compression occurring and so the
overall repository compression ratio is much lower.  Note also that
quite a bit more bandwidth is required to clone SLT than SQLite.

For the first nine years of its development, SQLite was versioned by CVS.
The resulting CVS repository measured over 320MB in size.  So, the
developers were surprised to see that this entire project could be cloned in
fossil using only about 23.2MB of network traffic.  (This 23.2MB includes
all the changes to SQLite that have been made since the conversion from
CVS.  If those changes are omitted, the clone bandwidth drops to 13MB.)
The "sync" protocol
used by fossil has turned out to be surprisingly efficient.  A typical
check-in on SQLite might use only 3 or 4KB of network bandwidth in total,
hardly worth measuring.  The sync protocol is efficient enough that, once cloned,
Fossil could easily be used over a dial-up connection.