Turns out, you can have TOO LITTLE JAVASCRIPT in your HTML :-/
If you take a closer look at the Chrome profiler, you'll notice that the
"initial rendering" of any page is really quick, often less than 100 ms,
no matter whether the requested page is a "big" or "small" HTML / plaintext file.
After the initial rendering, Chromium seems to prefer receiving small chunks of data,
performing an additional rendering pass after each and every chunk of the full content it
receives
- and that's what makes Chromium-based browsers MUCH slower at processing
large amounts of data.
You can easily bypass this weird "performance flaw" by rubbing a little JavaScript on it:
simply create a wrapper page which loads the actual content by
performing an XMLHttpRequest and updates the DOM only once.
1 initial rendering + 1 rendering after the content is loaded and set into the DOM = 2 renderings, instead of 100,000ish.
Using the following code, I've been able to get the load time of a 20 MB plaintext file down from ~280 seconds to approx. 4 seconds in the current version of Google Chrome.
<body>
  <div id="file-content">loading, please wait</div>
  <script type="text/javascript">
    // Fetch the file in the background; hand the full response text to the
    // callback, or null on any non-200 status.
    function delayLoad(path, callback) {
      var xhr = new XMLHttpRequest();
      xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
          if (xhr.status == 200) {
            callback(xhr.responseText);
          } else {
            callback(null);
          }
        }
      };
      xhr.open("GET", path);
      xhr.send();
    }

    // Write the whole payload into the DOM in a single operation.
    // textContent (instead of innerHTML) keeps the browser from trying to
    // parse 20 MB of plaintext as HTML.
    function setFileContent(fileData) {
      var element = document.getElementById('file-content');
      if (fileData === null) { // strict check, so an empty file isn't an "error"
        element.textContent = "error loading data";
        return;
      }
      element.textContent = fileData;
    }

    delayLoad("bongo_files/bongo_20M.txt", setFileContent);
  </script>
</body>
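For what it's worth, the same trick can be written with the newer fetch() API instead of XMLHttpRequest. Here's a minimal sketch; the function names renderText and delayLoadFetch are my own illustrative choices (they're not from the snippet above), and the file path is just the one reused from the example:

```javascript
// Pure helper: decide what string ends up in the DOM and return it,
// so the logic is testable without a browser. A null payload means
// the request failed.
function renderText(element, fileData) {
  var text = fileData === null ? "error loading data" : fileData;
  if (element) {
    // textContent avoids HTML-parsing the (potentially huge) plaintext
    element.textContent = text;
  }
  return text;
}

// fetch()-based equivalent of delayLoad + setFileContent: one request,
// one DOM update once the full body has arrived.
function delayLoadFetch(path, element) {
  return fetch(path)
    .then(function (res) { return res.ok ? res.text() : null; })
    .catch(function () { return null; })
    .then(function (data) { return renderText(element, data); });
}

// Usage (in a page with <div id="file-content">):
// delayLoadFetch("bongo_files/bongo_20M.txt",
//                document.getElementById("file-content"));
```

Same idea as before: the browser renders the small wrapper page once, and then renders once more when the full file is dropped into the DOM in a single assignment.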