Chrome net::ERR_INCOMPLETE_CHUNKED_ENCODING error

For the past two months, I have been receiving the following error on Chrome's developer console:

net::ERR_INCOMPLETE_CHUNKED_ENCODING

Symptoms:

  • Pages not loading.
  • Truncated CSS and JS files.
  • Pages hanging.

Server environment:

  • Apache 2.2.22
  • PHP
  • Ubuntu

This is happening to me on our in-house Apache server. It is not happening to anybody else - i.e. none of our users are experiencing this problem, nor is anybody else on our dev team.

Other people are accessing the exact same server with the exact same version of Chrome. I have also tried disabling all extensions and browsing in Incognito mode - to no effect.

I have used Firefox and the exact same thing occurs: truncated files and whatnot. The only thing is, Firefox doesn't raise any console errors, so you need to inspect the HTTP request via Firebug to see the problem.

Response Headers from Apache:

Cache-Control:no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection:close
Content-Encoding:gzip
Content-Type:text/html; charset=utf-8
Date:Mon, 27 Apr 2015 10:52:52 GMT
Expires:Thu, 19 Nov 1981 08:52:00 GMT
Pragma:no-cache
Server:Apache/2.2.22 (Ubuntu)
Transfer-Encoding:chunked
Vary:Accept-Encoding
X-Powered-By:PHP/5.3.10-1ubuntu3.8

While testing, I was able to fix the issue by forcing HTTP 1.0 in my htaccess file:

SetEnv downgrade-1.0

This gets rid of the problem. However, forcing HTTP 1.0 over HTTP 1.1 is not a proper solution.

Update: Because I'm the only one experiencing this issue, I figured that I needed to spend more time investigating whether or not it was a client-side issue. If I go into Chrome's settings and use the "Restore to Default" option, the problem disappears for about 10-20 minutes. Then it returns.

Paeon answered 27/4, 2015 at 11:12 Comment(17)
You have forgotten a bracket. This is correct -> while($row = mysql_fetch_assoc($result))Rapeseed
@PHPMan Didn't copy and paste it properly. Fixed now. I wish that was the cause.Paeon
Do you create the links in the while loop? if so do you create links without the full url (relative links)? Also could you post more code ?Kcal
I create checkboxes inside the while loop. It doesn't seem to matter, however, as I also have the same issue on other pages that don't generate HTML inside while loops.Paeon
Have you tried all the suggestions in: #22609064 ?Cantaloupe
@Cantaloupe I did. I think I've looked at every single topic that mentions the error.Paeon
It looks like a MySQL bug/error. Look at the MySQL usage at that time.Mow
What happens if you clear and disable the cache in the Chrome Developer console under Network?Axseed
@Axseed Same result, unfortunately. Clearing the cache seems to have no effect and the Cache option under the Network tab doesn't seem to do anything either.Paeon
can you add your apache access and error logs ?Marniemaro
Also, we need to see the HTML generated by this code: while($row = mysql_fetch_assoc($result)). Maybe too many empty lines are causing the truncation by web browsers.Marniemaro
That error is raised if the client doesn't receive the final 0-length chunk of a chunked transfer. In your place I would fire up Wireshark and capture the TCP traffic to see what's going on.Axseed
This could be caused by a network issue and not an application issue (especially since you are the only one having it). So, you should probably try first ruling network issue out as a possible cause by monitoring the traffic, as @Axseed suggested.Cosmogony
Disabling my antivirus seems to have fixed the issue. I'm waiting a bit to see if the issue has truly been solved.Paeon
I've had this happen to me when I was using Nginx as a reverse proxy that had been working perfectly. If I went directly to the server (another machine inside my network), everything was fine. When I went to it through Nginx, or from outside of my network, it gave the net::ERR_INCOMPLETE_CHUNKED_ENCODING error. It turned out that the machine running Nginx had a full disk. Deleting a few logs of some other long-running tasks (the logs were >100GB) fixed the problem without restarting Nginx. I'm not saying that this is your problem, but it's one possible reason that you could get that specific error.Romish
It happened to me a few times, and the error was gone after restarting Apache. But I still don't know the cause of the problem.Aldis
I'm also having this issue & the error is gone after restarting nginx. I don't know the cause of the problem either. @ursuleacv, did you figure out what was causing this on your end?Delvalle

OK. I've triple-tested this and I am 100% sure that it is being caused by my anti-virus (ESET NOD32 ANTIVIRUS 5).

Whenever I disable the Real-Time protection, the issue disappears. Today, I left the Real-Time protection off for 6-7 hours and the issue never occurred.

A few moments ago, I switched it back on, only for the problem to surface within a minute.

Over the course of the last 24 hours, I have switched the Real-Time protection on and off again, just to be sure. Each time - the result has been the same.

Update: I have come across another developer who had the exact same problem with the Real-Time protection on his Kaspersky anti-virus. He disabled it and the problem went away. i.e. This issue doesn't seem to be limited to ESET.

Paeon answered 1/5, 2015 at 16:4 Comment(13)
When you use the antivirus and send the Content-Length header, does it work then? If the problem is ESET plus visiting your page from whatever IP, it may be a good idea to fix it. Supplying a Content-Length header does not hurt; it costs maybe 1ms per request.Sequel
By the way, if you award the bounty to your own answer, the bounty will be gone.Limestone
@Sequel While attempting to debug this issue in the past, I repeatedly tried to manually set the content-length header. Unfortunately, it didn't help.Paeon
Have you found any potential solution other than deactivating the real time protection feature?Stubstad
From long experience, anti viruses cause much more harm than good.Fullmouthed
As per the update to my answer - I was able to replicate this error. It happened while output buffering was taking place and PHP threw a fatal error resulting in mangled output.Volar
For anyone having this issue with Kaspersky, the problem is with its "Inject script into web traffic" feature. You can find this in the network settings.Dependency
@Dependency Thank you for your comment, this could have taken us a very long time to figure out on our own.Experimental
@Experimental Glad to help. I have submitted a support request about this, I'll post back when it's fixed.Dependency
AVAST has the same problem... It got so bad I couldn't even visit some sites anymore. I disabled webscanning and everything went back to working normally...Extravagate
Yep, Avast was the issue for me too. Specifically the Script Scanning option under Web Shield.Hillaryhillbilly
I had this problem on nginx. First check the server RAM, then check the file system usage with the df command - maybe your temp directory is full.Monstrosity
Malwarebytes caused this for me. However, it was my local development environment, and once I disabled real-time web monitoring in Malwarebytes, the app loaded, and I saw that I had a stack overflow error. Maybe Malwarebytes detected that something was up.Frick

The error is trying to say that Chrome was cut off while the page was being sent. Your issue is trying to figure out why.

Apparently, this might be a known issue impacting a couple of versions of Chrome. As far as I can tell, those versions were massively sensitive to a mismatch between the actual length of the chunk being sent and its declared size (I could be far off on that one). In short, a slightly imperfect headers issue.

On the other hand, it could be that the server does not send the terminal 0-length chunk, which might be fixable with ob_flush();. It is also possible that Chrome (or the connection, or something) is being slow, so when the connection is closed, the page is not yet fully loaded. I have no idea why this might happen.
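
To make that concrete, here is a minimal sketch of what a well-formed chunked response looks like on the wire (not from any real capture; chunk sizes are hex byte counts, and each line ends with CRLF). The final 0 is the terminal zero-length chunk - when the connection drops before it and its trailing blank line arrive, Chrome reports net::ERR_INCOMPLETE_CHUNKED_ENCODING:

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

1b
<p>Hello, chunked world</p>
0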

Here is the paranoid programmer's answer:

<?php
    // ... your code
    ob_flush(); // flush PHP's output buffer down to the web server
    flush();    // then flush the web server's buffer out to the client
    sleep(2);   // give a slow connection a moment to finish reading
    exit(0);
?>

In your case, it might be a case of the script timing out. I am not really sure why it should affect only you, but it could be down to a bunch of race conditions? That's an utter guess. You should be able to test this by extending the script execution time.

<?php
    // ... your while code
    set_time_limit(30); // resets the timeout clock; calling this inside the loop keeps extending it
    // ... more while code
?>

It may also be as simple as needing to update your Chrome install (as this problem is Chrome-specific).

UPDATE: I was able to replicate this error (at last) when a fatal error was thrown while PHP (on the same localhost) was output buffering. I imagine the output was too badly mangled to be of much use (headers but little or no content).

Specifically, I accidentally had my code recursively calling itself until PHP, rightly, gave up. Thus, the server did not send the terminal 0-length chunk - which was the problem I identified earlier.

Volar answered 30/4, 2015 at 13:43 Comment(3)
I don't know, but this is really useful to me: set_time_limit(30);Baisden
Increasing the memory limit helped my case: ini_set('memory_limit', '500M');Retirement
The issue actually arises when you close the connection without flushing the response. This causes Chrome not to receive the final byte of the stream. In Vert.x, do response.end() instead of response.close()Osmometer

I had this issue. I tracked it down after trying most of the other answers on this question. It was caused by the owner and permissions of the /var/lib/nginx directory, and more specifically the /var/lib/nginx/tmp directory, being incorrect.

The tmp directory is used by FastCGI to cache responses as they are generated, but only if they are above a certain size. So the issue is intermittent and only occurs when the generated response is large.

Check the nginx <host_name>.error_log to see if you are having permission issues.

To fix, ensure the owner and group of /var/lib/nginx and all of its sub-directories are nginx.

I have also seen this intermittently occur when space on the storage device is too low to create the temporary file. The solution in this case is to free up some space on the device.
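
A minimal sketch of those checks (the nginx user name is an assumption - it is nginx on RHEL-style systems and www-data on Debian/Ubuntu, so match whatever the user directive in your nginx.conf says):

$ ls -ld /var/lib/nginx /var/lib/nginx/tmp    # who owns the cache directories now?
$ grep -i '^user' /etc/nginx/nginx.conf       # which user does nginx run as?
$ sudo chown -R nginx:nginx /var/lib/nginx    # hand the tree to that user
$ df -h /var/lib/nginx                        # and confirm the device isn't full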

Zircon answered 5/4, 2016 at 14:51 Comment(7)
Same here, chown on /var/lib/nginx fixed it for me 👍Clovis
Same here, BUT my homebrew install made a /usr/local/var/run/nginx/fastcgi_temp directory that I had to give read/write permissions to.Duffie
I had similar problems, but in my case the permissions problem was related to another directory: /etc/nginx/proxy_temp/. After fixing this, at least for now, the problem disappeared.Janusfaced
In my case, the problem seemed to be related to the SSL certificate being expired.Mohn
In my case, a service generated a tremendous log file, resulting in no space left on my reverse proxy server. I solved it after I logged in to that reverse proxy server. Wish I had seen this answer sooner.Romish
Another homebrew (M1 Mac, macOS 11.5.2) user here. The directory I had to chown was located at /opt/homebrew/var/run/nginx/fastcgi_temp.Sedum
I had the same error; giving ownership to nginx solved the issue for me: chown -R www-data:nginx /var/cache/nginx/fastcgi_temp (gives group ownership to nginx), then sudo chmod -R 775 /var/cache/nginx/fastcgi_temp/ (gives write permission to nginx). The issue is completely gone!Pipistrelle

The following should fix it for every client.

// Gather output: if it is not already in a variable,
// use ob_start() at the top of the script and ob_get_clean() here.
$output = ob_get_clean();

// Before sending output, declare its exact byte length:
header('Content-Length: ' . strlen($output));
echo $output;

But in my case the following was a better option and fixed it as well:

.htaccess:

php_value opcache.enable 0
Sequel answered 30/4, 2015 at 14:41 Comment(5)
This really fixed it for me. I'm loading PHP-generated content into divs by AJAX and got the Chrome net::ERR_INCOMPLETE_CHUNKED_ENCODING error 2 times out of 3 when the file was more than 2MB. Adding Content-Length fixed my problem. Thank you!Tanika
This solution worked for us on a site where Angular was reading a JSON... in our case, it was php_flag opcache.enable Off in the .htaccess. We knew it wasn't related to antivirus because we were having this issue even on Mac. Thx!!Headpiece
That's great :) Is the webserver running PHP 5.6? Upgrading to PHP 7 will also resolve the issue, I suppose. At least that is my experience now!Sequel
This ^ ^ ^ A thousand times this! I ran into this problem on a Drupal 8 site we're developing. As soon as I set it to aggregate CSS and JS, it started having trouble loading the admin pages in Chrome. No problems in Firefox.Meister
How do you do this in a JSP/servlet-based application deployed on a Tomcat server?Aspirate

OMG, I solved the same problem 5 minutes ago. I spent several hours finding a solution. At first sight, disabling the antivirus solved the problem on Windows. But then I noticed the issue on another Linux PC with no antivirus. No errors in the nginx logs. My uWSGI showed something about "Broken pipe", but not on all requests.

Know what? There was no space left on the device, which I found in the database log when I restarted the server, and df confirmed this. The only explanation for why disabling the antivirus appeared to help is that it prevented browser caching somehow (it has to check every request), but a browser with some strange behavior can simply ignore the bad response and show a cached one instead.

Update:

To monitor disk space and get real-time alerts via Slack/Telegram/Email, my company (which I founded several years after writing this answer) created an open-source tool: https://github.com/devforth/hothost

Elisabethelisabethville answered 12/2, 2016 at 22:1 Comment(3)
I had been fumbling around with this problem for the last 24 hours; you really saved me. It was because of a full root partition on my Django installation - the pgbouncer logs had filled up the root partition. Thanks, man.Puli
Saved me! My root partition was full; it affected nginx too.Dedradedric
Got the same error when there wasn't enough space in the /var file system for nginx. Check usage with df -h and free some space. That solved the issue.University

If you get the proper response on your localhost but get this kind of error from the server, and you are using nginx, try this:

  1. Go to the server and open nginx.conf with:

    nano /etc/nginx/nginx.conf

  2. Add the following line to the http block:

    proxy_buffering off;

  3. Save and exit the file

This solved my issue
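
For context, here is a minimal sketch of where the directive sits (the surrounding values are placeholders, not required settings). Note that proxy_buffering only matters when nginx is acting as a reverse proxy; turning it off makes nginx pass bytes to the client as soon as the upstream produces them, instead of spooling them to memory or a temp file:

http {
    ...
    proxy_buffering off;

    server {
        listen 80;
        ...
    }
}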

Mcguigan answered 12/2, 2021 at 12:0 Comment(3)
thank you!! this helped me when trying to access the server on another computer in local network.Drama
This is working for me as well.Vouchsafe
This is usually due to write permission errors on /var/lib/nginx/fastcgi. Make sure the owner matches the one set in nginx.conf.Iorgo

In my case I was getting "/usr/local/var/run/nginx/fastcgi_temp/3/07/0000000073" failed (13: Permission denied), which was probably producing the Chrome net::ERR_INCOMPLETE_CHUNKED_ENCODING error.

I had to remove /usr/local/var/run/nginx/ and let nginx create it again.

$ sudo rm -rf /usr/local/var/run/nginx/
$ sudo nginx -s stop
$ sudo mkdir /usr/local/var/run/nginx/
$ sudo chown nobody:nobody /usr/local/var/run/nginx/
$ sudo nginx
Mansell answered 5/5, 2016 at 1:32 Comment(1)
On a mac, I ended up uninstalling and reinstalling nginx thru brew, then a stop and start of nginx and that fixed it! Thanks for posting.Cuprite

It is a known Chrome problem. According to the Chrome and Chromium bug trackers, there is no universal solution for this. The problem is not related to the server type or version; it is right in Chrome.

Setting the Content-Encoding header to identity solved this problem for me.

from developer.mozilla.org

identity | Indicates the identity function (i.e. no compression, nor modification).

So I can suggest that, in some cases, Chrome cannot handle gzip decompression correctly.
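
A minimal sketch of doing that from PHP (an assumption on my part: it presumes no other layer, such as mod_deflate or a proxy, re-compresses the response afterwards, and $html stands in for whatever output your script produces):

<?php
// Make sure PHP itself is not gzipping the output...
ini_set('zlib.output_compression', 'Off');
// ...and declare the body as unmodified, i.e. not compressed.
header('Content-Encoding: identity');
echo $html;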

Inhesion answered 29/3, 2016 at 13:13 Comment(0)

The easiest solution is to increase proxy_read_timeout for your proxy location to a higher value (let's say 120s) in your nginx.conf.

location / {
    ...
    proxy_read_timeout 120s;
    ...
}

I found this solution here https://rijulaggarwal.wordpress.com/2018/01/10/atmosphere-long-polling-on-nginx-chunked-encoding-error/

Dispeople answered 5/7, 2019 at 12:5 Comment(1)
Please give more context as to when this would happen instead of just copying from another site.Damick

For me it was caused by insufficient free space on hard drive.

Frohman answered 25/9, 2019 at 7:14 Comment(0)

I just started having a similar problem, and noticed it was only happening when the page contained UTF-8 characters with an ordinal value greater than 255 (i.e. multibyte).

The problem ended up being how the Content-Length header was being calculated. The underlying backend was computing character length rather than byte length. Turning off Content-Length headers fixed the problem temporarily until I could fix the back-end template system.
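
A minimal sketch of that mismatch in PHP terms (the header must carry a byte count; a backend counting characters under-reports for multibyte text; mb_strlen requires the mbstring extension):

<?php
$output = "héllo";                     // 'é' takes 2 bytes in UTF-8
$chars  = mb_strlen($output, 'UTF-8'); // 5 characters - NOT a valid Content-Length
$bytes  = strlen($output);             // 6 bytes - what Content-Length must carry
header('Content-Length: ' . $bytes);
echo $output;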

Physicochemical answered 3/7, 2015 at 17:54 Comment(0)

When I faced this error (while making an AJAX call from JavaScript), the reason was that the response from the controller was erroneous; it was returning JSON that was not in a valid format.

Ipecac answered 10/7, 2019 at 6:5 Comment(0)

As of 2022, using Amazon Linux 2, I ran into this problem, and the solution was to give proper permissions to the /var/lib/nginx folder and its subtree. As my nginx user was user, my command was:

sudo chown -R user:user /var/lib/nginx/

Protectorate answered 31/3, 2022 at 21:35 Comment(1)
This works. It seems to be a permission problem.Ominous

Here the problem was my Avast AV. As soon as I disabled it, the problem was gone.

But, I really would like to understand the cause of this behavior.

Archilochus answered 11/5, 2015 at 16:48 Comment(0)

I just wanted to share my experience in case someone has the same problem with MOODLE.

Our Moodle platform was suddenly very slow; the dashboard took about 2-3 times longer to load than usual (up to 6 seconds), and from time to time some pages didn't load at all (not a 404 error but a blank page). In the Developer Tools console the following error was visible: net::ERR_INCOMPLETE_CHUNKED_ENCODING.

Searching for this error, it looks like Chrome is the issue, but we had the problem with various browsers. After hours of research and comparing the databases from the days before, I finally found the problem: someone had turned Event Monitoring on. However, in the "Config changes" log, this change wasn't visible! Turning Event Monitoring off finally solved the problem - we had no rules defined for event monitoring.

We're running Moodle 3.1.2+ with MariaDB and PHP 5.4.

Unwind answered 19/10, 2016 at 12:32 Comment(0)

This was happening on two different clients' servers separated by several years, using the same code that was deployed on hundreds of other servers for that time without issue.

For these clients, it happened mostly on PHP scripts that had streaming HTML - that is, "Connection: close" pages where output was sent to the browser as the output became available.

It turned out that the connection between the PHP process and the web server was dropping prematurely, before the script completed and way before any timeout.

The problem was opcache.fast_shutdown = 1 in the main php.ini file. This directive is disabled by default, but it seems some server administrators believe there is a performance boost to be had here. In all of my tests, I have never noted a positive difference using this setting. In my experience, it has caused some scripts to actually execute more slowly, and has an awful track record of sometimes entering shutdown while the script is still executing, or even at the end of execution while the web server is still reading from the buffer. There is an old bug report from 2013, unresolved as of Feb 2017, which may be related: https://github.com/zendtech/ZendOptimizerPlus/issues/146

I have seen the following errors appear due to this:

ERR_INCOMPLETE_CHUNKED_ENCODING
ERR_SPDY_PROTOCOL_ERROR

Sometimes there are correlative segfaults logged; sometimes not.

If you experience either one, check your phpinfo, and make sure opcache.fast_shutdown is disabled.
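
A quick way to do that check from the page itself (a minimal sketch; the web SAPI can load a different php.ini than the CLI, so test via a web request rather than php -i):

<?php
// "1" means the risky fast shutdown is enabled; "0" or "" means it is not
// (on PHP 7.2+, where the directive was removed, ini_get() returns false).
var_dump(ini_get('opcache.fast_shutdown'));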

Pennyworth answered 25/1, 2017 at 0:2 Comment(0)

I had this problem (showing ERR_INCOMPLETE_CHUNKED_ENCODING in Chrome, nothing in other browsers). Turned out the problem was my hosting provider GoDaddy adding a monitoring script at the end of my output.

https://www.godaddy.com/community/cPanel-Hosting/how-to-remove-additional-quot-monitoring-quot-script-added/td-p/62592

Famish answered 23/5, 2018 at 5:3 Comment(0)

This was happening for me for a different reason altogether: net::ERR_INCOMPLETE_CHUNKED_ENCODING 200. When I inspected the page and went to the Network tab, I saw that the vendor.js file had failed to load. Upon checking, it turned out that the JS file was big, ~6.5 MB. That's when I realised that I needed to compress the JS. I had been building with the plain ng build command. Instead, when I used ng build --prod --aot --vendor-chunk --common-chunk --delete-output-path --buildOptimizer, it worked for me. See https://github.com/angular/angular-cli/issues/9016

Earthiness answered 28/3, 2020 at 2:19 Comment(0)

In our case there was a problem with the nginx server: it ran out of free disk space, and that caused a problem with buffering.

Imf answered 7/7, 2022 at 8:15 Comment(0)

I'm sorry to say I don't have a precise answer for you. But I did encounter this problem as well, and, at least in my case, found a way around it. So maybe it'll offer some clues to someone else who knows more about PHP under the hood.

The scenario is, I have an array passed to a function. The contents of this array are used to produce an HTML string to be sent back to the browser, by placing it all inside a global variable that's later printed. (This function isn't actually returning anything. Sloppy, I know, but that's beside the point.) Inside this array, among other things, are a couple of elements carrying, by reference, nested associative arrays that were defined outside of this function. By process of elimination, I found that manipulating any element inside this array within this function, referenced or not, including an attempt to unset those referenced elements, results in Chrome throwing a net::ERR_INCOMPLETE_CHUNKED_ENCODING error and displaying no content. This is despite the fact that the HTML string in the global variable is exactly what it should be.

Only by re-tooling the script to not apply references to the array elements in the first place did things start working normally again. I suspect this is actually a PHP bug having something to do with the presence of the referenced elements throwing off the Content-Length headers, but I really don't know enough about it to say for sure.

Baerl answered 4/9, 2015 at 18:20 Comment(1)
I had a similar experience with this error message; however, I found there was an error in my code that probably should have tripped an out-of-memory error, although I probably wasn't using any extra memory within the recursion. Anyway, I think PHP dies quietly without flushing the output buffer and without generating any PHP error.Richart

I had this problem with a site in Chrome and Firefox. If I turned off the Avast Web Shield it went away. I seem to have managed to get it to work with the Web Shield running by adding some of the HTML5 Boilerplate .htaccess rules to my .htaccess file:

# ------------------------------------------------------------------------------
# | Expires headers (for better cache control)                                 |
# ------------------------------------------------------------------------------

# The following expires headers are set pretty far in the future. If you don't
# control versioning with filename-based cache busting, consider lowering the
# cache time for resources like CSS and JS to something like 1 week.

<IfModule mod_expires.c>

    ExpiresActive on
    ExpiresDefault                                      "access plus 1 month"

  # CSS
    ExpiresByType text/css                              "access plus 1 week"

  # Data interchange
    ExpiresByType application/json                      "access plus 0 seconds"
    ExpiresByType application/xml                       "access plus 0 seconds"
    ExpiresByType text/xml                              "access plus 0 seconds"

  # Favicon (cannot be renamed!)
    ExpiresByType image/x-icon                          "access plus 1 week"

  # HTML components (HTCs)
    ExpiresByType text/x-component                      "access plus 1 month"

  # HTML
    ExpiresByType text/html                             "access plus 0 seconds"

  # JavaScript
    ExpiresByType application/javascript                "access plus 1 week"

  # Manifest files
    ExpiresByType application/x-web-app-manifest+json   "access plus 0 seconds"
    ExpiresByType text/cache-manifest                   "access plus 0 seconds"

  # Media
    ExpiresByType audio/ogg                             "access plus 1 month"
    ExpiresByType image/gif                             "access plus 1 month"
    ExpiresByType image/jpeg                            "access plus 1 month"
    ExpiresByType image/png                             "access plus 1 month"
    ExpiresByType video/mp4                             "access plus 1 month"
    ExpiresByType video/ogg                             "access plus 1 month"
    ExpiresByType video/webm                            "access plus 1 month"

  # Web feeds
    ExpiresByType application/atom+xml                  "access plus 1 hour"
    ExpiresByType application/rss+xml                   "access plus 1 hour"

  # Web fonts
    ExpiresByType application/font-woff                 "access plus 1 month"
    ExpiresByType application/vnd.ms-fontobject         "access plus 1 month"
    ExpiresByType application/x-font-ttf                "access plus 1 month"
    ExpiresByType font/opentype                         "access plus 1 month"
    ExpiresByType image/svg+xml                         "access plus 1 month"

</IfModule>

# ------------------------------------------------------------------------------
# | Compression                                                                |
# ------------------------------------------------------------------------------

<IfModule mod_deflate.c>

    # Force compression for mangled headers.
    # http://developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping
    <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
            SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
            RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
        </IfModule>
    </IfModule>

    # Compress all output labeled with one of the following MIME-types
    # (for Apache versions below 2.3.7, you don't need to enable `mod_filter`
    #  and can remove the `<IfModule mod_filter.c>` and `</IfModule>` lines
    #  as `AddOutputFilterByType` is still in the core directives).
    <IfModule mod_filter.c>
        AddOutputFilterByType DEFLATE application/atom+xml \
                                      application/javascript \
                                      application/json \
                                      application/rss+xml \
                                      application/vnd.ms-fontobject \
                                      application/x-font-ttf \
                                      application/x-web-app-manifest+json \
                                      application/xhtml+xml \
                                      application/xml \
                                      font/opentype \
                                      image/svg+xml \
                                      image/x-icon \
                                      text/css \
                                      text/html \
                                      text/plain \
                                      text/x-component \
                                      text/xml
    </IfModule>

</IfModule>

# ------------------------------------------------------------------------------
# | Persistent connections                                                     |
# ------------------------------------------------------------------------------

# Allow multiple requests to be sent over the same TCP connection:
# http://httpd.apache.org/docs/current/en/mod/core.html#keepalive.

# Enable if you serve a lot of static content but, be aware of the
# possible disadvantages!

 <IfModule mod_headers.c>
    Header set Connection Keep-Alive
 </IfModule>
Stateless answered 7/1, 2016 at 15:36 Comment(0)

My fix is:

<?php ob_start(); ?>
<!DOCTYPE html>
<html lang="de">
.....
....//your whole code
....
</html>
<?php
// Flush the buffered page to the client in one piece and end buffering.
ob_end_flush();
flush();
?>

Hope this will help someone in the future. In my case it was a Kaspersky issue, but the fix above works great :)

Urina answered 3/2, 2017 at 13:12 Comment(0)

I was getting net::ERR_INCOMPLETE_CHUNKED_ENCODING; upon closer inspection of the server error logs, I found that it was due to a PHP script execution timeout.

Adding this line at the top of the PHP script solved it for me:

ini_set('max_execution_time', 300); //300 seconds = 5 minutes

Ref: Fatal error: Maximum execution time of 30 seconds exceeded

Shrapnel answered 29/4, 2017 at 11:1 Comment(0)

In my case it was happening during JSON serialization of a web API return payload. I had a 'circular' reference in my Entity Framework model: I was returning a simple one-to-many object graph, but the child had a reference back to the parent, which apparently the JSON serializer doesn't like. Removing the property on the child that referenced the parent did the trick.

Hope this helps someone who might have a similar issue.

Homebody answered 18/9, 2017 at 18:24 Comment(0)

This generally arises when the client sends a burst of requests to the server, following a client-side event.

This is generally a sign of "bad" programming on the client side.

Imagine I am updating all the lines of a table.

The bad way is to send a separate request to update each row (many requests in quick succession without waiting for each request to complete). To correct it, make sure each request is complete before sending another one.

The good way would be to send one request containing all the updated rows.

So, first, look at what is happening client-side and refactor the code if necessary.

Use Wireshark to identify what goes wrong in the requests.

Xeres answered 27/2, 2019 at 15:25 Comment(2)
This has nothing to do with how the client behaves.Banas
This is not true. All browsers have the capability to enqueue requests.Tillotson

Check the nginx folder permissions and set the web server user's ownership for them:

chown -R www-data:www-data /var/lib/nginx
Rodroda answered 10/3, 2019 at 14:54 Comment(0)

Well, not long ago I also ran into this problem, and I finally found solutions which really address the issue.

My symptoms were also pages not loading, and I found the JSON data was being randomly truncated.

Here is a summary of the solutions that could help to solve this problem:

1. Kill the anti-virus software process
2. Turn off Chrome's prerendering / Instant pages feature
3. Try closing all the other apps in your browser
4. Try defining your Content-Length header:
   <?php
      header('Content-length: ' . strlen($output));
   ?>
5. Check that your nginx fastcgi buffers are sized right
6. Check that nginx gzip is turned on
Vegetation answered 2/12, 2015 at 5:9 Comment(0)

If you reference a loop or an item which does not exist, you can face this issue.

When running the app on Chrome, the page was blank and became unresponsive.

Scenario Start:

Dev Environment: MAC, STS 3.7.3, tc Pivotal Server 3.1, Spring MVC Web,

in ${myObj.getfName()}

Scenario End:

Reason for the issue: the getfName() method is not defined on myObj.

Hope it helps you.

Litho answered 31/3, 2016 at 10:42 Comment(0)

My guess is the server is not correctly handling the chunked transfer encoding. It needs to terminate a chunked response with a terminal (zero-length) chunk to indicate the entire body has been transferred. So the code below may work:

echo "\n";
ob_flush(); // push PHP's output buffer to the web server
flush();    // then push the web server's buffer to the client
exit(0);
Bobbe answered 19/7, 2016 at 7:34 Comment(0)

In my case it was a broken config for the mysqlnd_ms PHP extension on the server. The funny thing is that it was working fine for requests of short duration. There was a warning in the server error log, so we fixed it quickly.

Resile answered 5/8, 2016 at 15:0 Comment(0)

This seems like a common problem with multiple causes and solutions, so I'm going to put my answer here for anyone who may require it.

I was getting net::ERR_INCOMPLETE_CHUNKED_ENCODING on a Chrome, OS X, php70, httpd24 combination, but the same code ran fine on the production server.

I initially tailed the regular logs, but nothing really showed up. A quick ls later showed system.log was the latest touched file in /var/log, and tailing that gave me:

Saved crash report for httpd[99969] version 2.4.16 (805) 
to /Library/Logs/DiagnosticReports/httpd.crash

Contained within:

Process:               httpd [99974]
Path:                  /usr/sbin/httpd
Identifier:            httpd
Version:               2.4.16 (805)
Code Type:             X86-64 (Native)
Parent Process:        httpd [99245]
Responsible:           httpd [99974]
User ID:               70

PlugIn Path:             /usr/local/opt/php70-mongodb/mongodb.so
PlugIn Identifier:       mongodb.so

A brew uninstall php70-mongodb and a httpd -k restart later and everything was smooth sailing.

Encephalon answered 7/12, 2016 at 23:21 Comment(0)

In my case it was an issue with the HTML: there was a '\n' in the JSON response causing the problem, so I removed it.

Smokestack answered 27/12, 2016 at 13:50 Comment(0)

Fascinating to see how many different causes there are for this issue!

Many say it's a Chrome issue, so I tried Safari and still had issues. I then tried all the solutions in this thread, including turning off my AVG Realtime Protection - no luck.

For me, the issue was my .htaccess file. All it contained was FallbackResource index.php, but when I renamed it to htaccess.txt, my issue was resolved.

Ultravirus answered 18/1, 2017 at 8:13 Comment(2)
What the...? I have the same thing... But if it gets renamed to htaccess.txt, shouldn't it no longer work?Mccray
Precisely. A better question might be, why is index.php handling errors? And why is it causing this?Syndactyl

Hmmm, I just stumbled upon a similar issue, but with a different reason behind it...

I'm using Laravel Valet on a vanilla PHP project with Laravel Mix. When I opened the site in Chrome, it was throwing net::ERR_INCOMPLETE_CHUNKED_ENCODING errors. (If I had the site loaded on HTTPS protocol, the error changed to net::ERR_SPDY_PROTOCOL_ERROR.)

I checked the php.ini and opcache was not enabled. I found that in my case the problem was related to versioning the asset files - for some reason, it did not seem to like a query string in the URL of the assets (well, oddly enough, just one in particular?).

I have removed mix.version() for the local environment, and the site loads just fine in my Chrome on both HTTP and HTTPS protocols.

Alvarez answered 11/4, 2018 at 11:49 Comment(0)

In the context of a Controller in Drupal 8 (Symfony Framework) this solution worked for me:

$response = new Response($form_markup, 200, array(
  'Cache-Control' => 'no-cache',
));

$content = $response->getContent();
$contentLength = strlen($content);
$response->headers->set('Content-Length', $contentLength);

return $response;

Otherwise the response header Transfer-Encoding gets the value chunked, which may be a problem for the Chrome browser.

Vexation answered 14/5, 2018 at 14:38 Comment(0)

I redirected from http:// to https:// and this problem was resolved!

Theresita answered 17/3, 2020 at 18:10 Comment(1)
Does that directly address the issue?Kibitzer

I'm pretty sure that this issue can have multiple causes — both on server and on client sides.

Recently I faced it with a website I'm hosting on a VPS (Ubuntu 18.04, PHP 7.4 FPM, nginx + certbot, site is powered by WordPress): admin pages were loading without CSS/JS.

It took me a few hours of trying different solutions, none of which helped.

Finally, I discovered that for some reason (probably I changed it earlier, though I can't exclude the possibility that it was like this by default) the first line of my /etc/nginx/nginx.conf was commented out:

# user www-data

I uncommented it, restarted nginx with sudo service nginx restart, and the issue was gone.

If anyone can check on a pristine Ubuntu 18.04 with nginx installed whether this line is commented out by default, please put it in the comments.

Vivianaviviane answered 24/3, 2020 at 13:40 Comment(0)

In my case, it was a sloppy application issue. An AJAX call was being made to a PHP script which had sloppy includes: there was trailing whitespace after the PHP closing delimiter in two of the includes. This meant that spaces were being output to the response ahead of the expected JSON output.

I only discovered this when I put a header for JSON output just ahead of the response, and the chunking error was replaced by the error that headers could not be sent because output had already occurred. In other words, the AJAX call was expecting a JSON response, and it got that - sort of - but the response wasn't clean, because a JSON response shouldn't have whitespace ahead of it. This was apparent when looking at the response from the PHP in Firebug's Network panel: the response looked right-justified in the panel because of the leading spaces.

Strangely, not all whitespace triggered the error - the chunking error only occurred when the entire length of the response exceeded a certain length.
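
A minimal sketch of that failure mode (helper.php is a hypothetical include; the newline after the closing ?> is the culprit, since PHP emits anything outside its tags verbatim):

<?php
// helper.php - broken: the newline following the closing tag below is
// sent to the client before the caller's JSON output.
function get_data() { return ['ok' => true]; }
?>

The usual fix is to omit the closing ?> entirely in pure-PHP include files, so stray trailing whitespace can never leak into the response:

<?php
// helper.php - fixed: no closing tag, so nothing can leak after the code.
function get_data() { return ['ok' => true]; }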

Marsala answered 10/6, 2020 at 4:13 Comment(0)

I solved this by changing the datatype from '.js' to '.json'.

Hanker answered 18/12, 2020 at 9:21 Comment(0)
M
0

In my case (Windows 10), the source of the problem was the fact that I had disabled the WWW Publishing Service (I did that to resolve the XAMPP and IIS port 80 conflict). I solved the problem by turning the service back on in services.msc. I thought the service was related only to traffic on port 80, but turning it off caused a mess in all HTTP traffic.

Maltreat answered 24/8, 2021 at 21:43 Comment(0)

As the error title ERR_INCOMPLETE_CHUNKED_ENCODING suggests, this is just an encoding problem, nothing more. Some people solved it by disabling their antivirus and the like, which is not the right way. In my case, and for others whose responses use non-Latin encodings such as the Chinese or Arabic alphabets, the best way is:

Return the data in English instead of your site's main response language, and handle the translation in the UI.

Of course, the main problem is outdated browsers which can't handle it.

Percy answered 13/4, 2022 at 10:7 Comment(0)

Having encountered the same problem while using C#, here is a possible fix for those with access to the server side code:

Offending code (returning bytes, which worked for some of the data displayed, but not all):

var bytes = await GetBytesCached(url);
await HttpContext.Response.Body.WriteAsync(bytes, 0, bytes.Length);
await HttpContext.Response.Body.FlushAsync();
HttpContext.Response.Body.Close();

Correct code (simply convert the bytes to a string with the correct encoding):

var bytes = await GetBytesCached(url);
var xmlstring = System.Text.Encoding.UTF8.GetString(bytes);
return new ContentResult() { Content = xmlstring ,ContentType= "application/xml",StatusCode=200};

Additional info: the behavior was inconsistent while returning bytes: the dev environment worked for all data sources, while the production environment worked only for what appeared to be the smaller batches of data. Moving to returning XML instead of bytes seems to avoid the problem in all browsers, regardless of antivirus and plugins.

Elroy answered 20/6, 2022 at 9:26 Comment(0)

I had the same problem with my application. My project uses DevOps, and the problem was caused by unhealthy compute nodes. Replacing them fixed the issue for me.

Flitch answered 6/1, 2020 at 8:14 Comment(0)
