Consider the following code: http://hpaste.org/90394
I am memory-mapping a large 460 MB file to a lazy ByteString. The length of the ByteString reports 471053056.
When `nxNodeFromID file 110000` is changed to a lower node ID, e.g. `10000`, it works perfectly. However, as soon as I try to serialize anything past exactly 2^18 bytes (262144) of the ByteString, I get "Segmentation fault/access violation in generated code" and the program terminates.
I'm running Windows and using GHC 7.4.2.
Please advise whether this is my fault, an issue with the laziness, or an issue with Haskell.
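A minimal sketch of the failing setup, assuming the file is mapped with `mmapFileByteStringLazy` from the `mmap` package (the file name `Data.nx` and the exact offsets are placeholders; the real code is in the hpaste link):

```haskell
import qualified Data.ByteString.Lazy as BL
import System.IO.MMap (mmapFileByteStringLazy)

main :: IO ()
main = do
  -- Map the whole file as a lazy ByteString; chunks are mapped on demand.
  bs <- mmapFileByteStringLazy "Data.nx" Nothing
  print (BL.length bs)        -- reports the full file size as expected
  print (BL.index bs 262144)  -- touching anything past 2^18 bytes crashes
```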
Comments:

`getNXNode` doesn't match the `NXNode` data definition. If that's intentional, it would be worth a comment. But I don't see how that would cause a segfault here. – Lubricate

`NXNode 0 <$> ...` :) – Jermaine

… skip 20 bytes, and read only 12 per node. – Lubricate

`getNXMetadata` contributes 2 (type) + 8 (data) optional bytes (which are null if they aren't used), totaling 20 for the size of the node when added to the 4 + 4 + 2 read in `getNXNode`. – Jermaine

Once `getNXMetadata` is complete, it becomes more or less obvious. – Lubricate

Have you tried `mmapFileByteString` instead of `mmapFileByteStringLazy`? (You'd need to wrap the returned strict `ByteString` in a `fromChunks [buf]` or so to get a lazy `ByteString` for the rest of your code to work with.) – Lubricate

… (`mapChunk handle (offset,size) = unsafePerformIO $ ...`), but it could be something else. Can't really investigate, though. (Well, if I had to, I could boot into Windows; but I'd still need a file to work on, and a great incentive to expose myself to the inconvenience.) – Lubricate

… the author of `mmap`, he should have a better idea what the problem might be. – Lubricate
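Lubricate's suggested workaround can be sketched as follows (the file name is an assumption; the point is only the strict-map-then-`fromChunks` conversion, which keeps the whole file in one mapping instead of relying on lazily mapped chunks):

```haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BL
import System.IO.MMap (mmapFileByteString)

main :: IO ()
main = do
  -- Map the whole file strictly: a single mapping, valid for its lifetime.
  buf <- mmapFileByteString "Data.nx" Nothing
  -- Wrap the strict ByteString in a one-chunk lazy ByteString so the
  -- existing lazy-ByteString parsing code keeps working unchanged.
  let lbs = BL.fromChunks [buf]
  print (BL.length lbs)
```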