I have a very simple query that returns a couple thousand rows with only two columns:
SELECT "id", "value" FROM "table" LIMIT 10000;
After issuing sql.Query(), I traverse the result set with the following code:
data := map[uint8]string{}
for rows.Next() {
	var (
		id    uint8
		value string
	)
	// Rows that fail to scan are silently skipped.
	if err := rows.Scan(&id, &value); err == nil {
		data[id] = value
	}
}
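(For context, rows comes from an ordinary Query call, roughly along the lines of the sketch below; the driver import and connection string are just placeholders, not my actual setup.)

package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // placeholder driver; any database/sql driver is wired up the same way
)

func main() {
	db, err := sql.Open("postgres", "placeholder-connection-string")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(`SELECT "id", "value" FROM "table" LIMIT 10000`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	// ... the loop shown above goes here ...

	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}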
If I run the exact same query directly on the database, I get all results back within a couple of milliseconds, but the Go code takes far longer to complete, sometimes almost 10 seconds!
I started commenting out several parts of the code and it seems that rows.Scan() is the culprit.
From the documentation for rows.Scan():

Scan copies the columns in the current row into the values pointed at by dest.

If an argument has type *[]byte, Scan saves in that argument a copy of the corresponding data. The copy is owned by the caller and can be modified and held indefinitely. The copy can be avoided by using an argument of type *RawBytes instead; see the documentation for RawBytes for restrictions on its use. If an argument has type *interface{}, Scan copies the value provided by the underlying driver without conversion. If the value is of type []byte, a copy is made and the caller owns the result.
Can I expect any speed improvement if I use *[]byte, *RawBytes or *interface{} instead?
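(To make that concrete, my understanding is that a RawBytes version would look roughly like the sketch below; I haven't benchmarked it. The readRows name and the strconv parsing of the id are my own additions, since RawBytes always arrives as a byte slice.)

import (
	"database/sql"
	"strconv"
)

// Untested sketch: scanning into sql.RawBytes to avoid per-row allocations.
// RawBytes is only valid until the next rows.Next() call, so every value has
// to be copied out before the next iteration (the string() conversions below
// make those copies).
func readRows(rows *sql.Rows) (map[uint8]string, error) {
	data := map[uint8]string{}
	var rawID, rawValue sql.RawBytes
	for rows.Next() {
		if err := rows.Scan(&rawID, &rawValue); err != nil {
			return nil, err
		}
		id, err := strconv.ParseUint(string(rawID), 10, 8)
		if err != nil {
			return nil, err
		}
		data[uint8(id)] = string(rawValue)
	}
	return data, rows.Err()
}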
Looking at the code, it seems that the convertAssign() function is doing a lot of work that isn't necessary for this particular query. So my question is: how can I make the Scan process faster?
I thought about overloading the function to expect predetermined types, but that isn't possible in Go...
Any ideas?
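(One idea I haven't actually tried: instead of overloading, wrap the column in a small type that implements the sql.Scanner interface, so the conversion runs in my own code rather than in the generic path. Whether that avoids the slow part I don't know; the sketch below is untested and the smallID name is made up.)

import (
	"database/sql"
	"fmt"
	"strconv"
)

// smallID is a made-up wrapper around uint8. Because it implements
// sql.Scanner, database/sql passes the raw driver value straight to its
// Scan method instead of running the generic conversion for this column.
type smallID uint8

var _ sql.Scanner = (*smallID)(nil) // compile-time interface check

func (s *smallID) Scan(src interface{}) error {
	switch v := src.(type) {
	case int64: // most drivers deliver integer columns as int64
		*s = smallID(v)
		return nil
	case []byte: // some drivers deliver numbers as text
		n, err := strconv.ParseUint(string(v), 10, 8)
		if err != nil {
			return err
		}
		*s = smallID(n)
		return nil
	default:
		return fmt.Errorf("smallID: unsupported source type %T", src)
	}
}

In the loop I would then declare var id smallID and pass &id to rows.Scan, converting back with uint8(id) when filling the map.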
Have you tried *[]byte, *RawBytes and *interface{}? – Cowbind

*RawBytes seems to go away whenever you call rows.Next(). I haven't tried the other two, I was merely asking if it would help with anything. If you look at the convertAssign source code (linked in the answer), the uint8 type still requires going through reflection, I think. – Stuyvesant