The gob package wasn't designed to be used this way. A gob stream has to be written by a single gob.Encoder, and it also has to be read by a single gob.Decoder.
The reason is that the gob package not only serializes the values you pass to it, it also transmits data describing their types:
A stream of gobs is self-describing. Each data item in the stream is preceded by a specification of its type, expressed in terms of a small set of predefined types.
This type information is state held by the encoder / decoder (which types have been transmitted, and how); a subsequent, new encoder / decoder will not (cannot) analyze the preceding stream to reconstruct that state and continue where a previous encoder / decoder left off.
Of course, if you create a single gob.Encoder, you may use it to serialize as many values as you'd like.
You can also create a gob.Encoder and write to a file, and then later create a new gob.Encoder and append to the same file, but then you must use 2 gob.Decoders to read those values, exactly matching the encoding process.
As a demonstration, let's follow an example. This example will write to an in-memory buffer (bytes.Buffer). 2 subsequent encoders will write to it, then we will use 2 subsequent decoders to read the values. We'll write values of this struct:
type Point struct {
	X, Y int
}
For short, compact code, I use this "error handler" function:
func he(err error) {
	if err != nil {
		panic(err)
	}
}
And now the code:
const n, m = 3, 2

buf := &bytes.Buffer{}

// First encoder writes n values:
e := gob.NewEncoder(buf)
for i := 0; i < n; i++ {
	he(e.Encode(&Point{X: i, Y: i * 2}))
}

// Second encoder appends m more values to the same buffer:
e = gob.NewEncoder(buf)
for i := 0; i < m; i++ {
	he(e.Encode(&Point{X: i, Y: 10 + i}))
}

// First decoder reads the n values written by the first encoder:
d := gob.NewDecoder(buf)
for i := 0; i < n; i++ {
	var p *Point
	he(d.Decode(&p))
	fmt.Println(p)
}

// Second decoder reads the m values written by the second encoder:
d = gob.NewDecoder(buf)
for i := 0; i < m; i++ {
	var p *Point
	he(d.Decode(&p))
	fmt.Println(p)
}
Output (try it on the Go Playground):
&{0 0}
&{1 2}
&{2 4}
&{0 10}
&{1 11}
Note that if we used only 1 decoder to read all the values (looping until i < n + m), we'd get the same error message you posted in your question when the iteration reaches n + 1, because the subsequent data is not a serialized Point but the start of a new gob stream.
So if you want to stick with the gob package for this, you have to slightly modify / enhance your encoding and decoding process: you have to somehow mark the boundaries where a new encoder is used (so when decoding, you'll know you have to create a new decoder to read the subsequent values).
You may use different techniques to achieve this:
- You may write out a number, a count, before you proceed to write values; this number would tell how many values were written using the current encoder (see the sketch after this list).
- If you don't want to or can't tell how many values will be written with the current encoder, you may opt to write out a special end-of-encoder value when you don't write more values with the current encoder. When decoding, if you encounter this special end-of-encoder value, you'll know you have to create a new decoder to be able to read more values.
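As a minimal sketch of the first technique (the count prefix), reusing the Point type and the he() helper from above (the writeChunk / readChunk names are just illustrative, not an existing API):

// writeChunk writes the number of values first, then the values themselves,
// using a fresh encoder for the whole chunk.
func writeChunk(w io.Writer, ps []*Point) {
	e := gob.NewEncoder(w)
	he(e.Encode(len(ps)))
	for _, p := range ps {
		he(e.Encode(p))
	}
}

// readChunk creates a fresh decoder (matching the encoding process),
// reads the count, then reads that many values.
func readChunk(r io.Reader) []*Point {
	d := gob.NewDecoder(r)
	var count int
	he(d.Decode(&count))
	ps := make([]*Point, count)
	for i := range ps {
		he(d.Decode(&ps[i]))
	}
	return ps
}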
Some things to note here:
- The gob package is most efficient / most compact if only a single encoder is used, because each time you create and use a new encoder, the type specifications have to be re-transmitted, causing more overhead and making the encoding / decoding process slower.
- You can't seek in the data stream; you can only decode a value if you read the whole stream from the beginning up to that value. Note that this somewhat applies even if you use other formats (such as JSON or XML).
If you want seeking functionality, you'd need to manage an index file separately, one that records at which byte positions new encoder / decoder streams start, so you could seek to such a position, create a new decoder, and start reading values from there; a minimal sketch of recording such offsets follows.
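For illustration only (the appendChunk name and the idea of keeping the offsets in a separate index file are assumptions, not part of any API), such an index could be built by recording the byte offset at which each new encoder stream starts:

// appendChunk starts a new encoder stream at the file's current end and
// returns the byte offset where that stream begins; collecting these
// offsets (e.g. in a separate index file) gives you seekable entry points.
func appendChunk(f *os.File, ps []*Point) int64 {
	off, err := f.Seek(0, io.SeekEnd) // position where this encoder's stream starts
	he(err)
	e := gob.NewEncoder(f)
	for _, p := range ps {
		he(e.Encode(p))
	}
	return off
}

// To read a given chunk later: seek to its recorded offset and create a new
// decoder there (wrapping the file as described in the warning below).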
Warning
gob.NewDecoder() documents that:
If r does not also implement io.ByteReader, it will be wrapped in a bufio.Reader.
This means that if you use an os.File for example (it does not implement io.ByteReader), the internally used bufio.Reader might read more data from the passed reader than what gob.Decoder actually uses (as its name says, it does buffered IO). So using multiple decoders on the same input reader might result in decoding errors, as the internally used bufio.Reader of a previous decoder may read ahead data that it never uses and that therefore never reaches the next decoder.
A solution / workaround for this is to explicitly pass a reader that implements io.ByteReader and does not read a buffer "ahead". For example:
// byteReader wraps an io.Reader and implements io.ByteReader
// without buffering ahead: ReadByte reads exactly one byte.
type byteReader struct {
	io.Reader
	buf []byte
}

func (br byteReader) ReadByte() (byte, error) {
	if _, err := io.ReadFull(br, br.buf); err != nil {
		return 0, err
	}
	return br.buf[0], nil
}

func newByteReader(r io.Reader) byteReader {
	return byteReader{r, make([]byte, 1)}
}
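To use it, wrap the file (or any reader) once and hand the same wrapper to all decoders; the file name below is just an example:

f, err := os.Open("points.gob") // example file name, assumed to contain 2 gob streams
he(err)
defer f.Close()

br := newByteReader(f)
d1 := gob.NewDecoder(br) // decoder for the first stream
// ... decode the first stream's values with d1 ...
d2 := gob.NewDecoder(br) // decoder for the second stream, sharing the same wrapper
// ... decode the second stream's values with d2 ...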
See a faulty example without this wrapper: https://go.dev/play/p/dp1a4dMDmNc
And see how the above wrapper fixes the problem: https://go.dev/play/p/iw528FTFxmU
Check a related question: Efficient Go serialization of struct to disk
I would suggest using boltdb. Write a hook for your logger and log entries into boltdb. Moving a boltdb file around is also easy since it's just a file, and you can create new boltdb files based on size, time or level. If you absolutely need plain files, I suggest using lumberjack and writing a hook for your logger to transform entries to base64. In any case, using gob makes analytics and monitoring hard, so InfluxDB or Prometheus are also valid options. – Surcharge