Because the HTTP GET method is specified as safe and idempotent, a GET request can, by specification, be resubmitted with the assumption that it will not change anything on the server. This is not the case for an HTTP POST, which by specification can change the state of the application running on the server.
So, by specification, one can perform an HTTP GET against a page any number of times without worrying about changing its state.
Not respecting the specification may have various undesired results. For example, web crawlers follow links by issuing GET requests to index a site, but they do not submit POSTs. If an HTTP GET request is allowed to make changes to the database, it is easy to see the undesired implication: a crawler merely indexing your site could end up modifying or deleting your data.
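To make the risk concrete, here is a minimal sketch of the anti-pattern. Flask is used purely for illustration; the `/records/<id>/delete` route and the in-memory `records` dict are hypothetical, not taken from any particular site:

    from flask import Flask

    app = Flask(__name__)
    records = {1: "first", 2: "second"}  # stand-in for a real database table

    # Flask routes answer GET by default, so following this link -- which is
    # exactly what a crawler does -- deletes the record: the anti-pattern.
    @app.route("/records/<int:record_id>/delete")
    def delete_record(record_id):
        records.pop(record_id, None)  # state change triggered by a plain GET
        return "deleted"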
Respecting a specification is like respecting an agreement between your service or website and an array of different consumers, which may be ordinary users' browsers but also other services such as web crawlers.
You could build a site that uses a GET to insert a record, but you should expect that whatever is built to consume your site operates on the assumption that you are respecting the agreement.
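Here is the same hypothetical service reworked to respect that agreement, with reads served by GET and state changes confined to POST (again, the route names, the `value` form field, and the in-memory store are illustrative assumptions, not a definitive implementation):

    from flask import Flask, request

    app = Flask(__name__)
    records = {}  # illustrative in-memory store

    @app.route("/records", methods=["GET"])
    def list_records():
        # Safe and idempotent: this can be crawled or refreshed freely.
        return records

    @app.route("/records", methods=["POST"])
    def insert_record():
        # The state change lives behind POST, so crawlers will not trigger
        # it, and browsers will warn before resubmitting it on refresh.
        record_id = len(records) + 1
        records[record_id] = request.form.get("value", "")
        return {"id": record_id}, 201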
As a last example, web browsers warn users when they try to refresh a page that was reached by an HTTP POST request, pointing out that some data might be resubmitted. You do not get that layer of protection, built into browsers, if the page was reached by an HTTP GET request.
You can read more here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html