I am doing bulk generation of PDF files based on templates and I ran into big performance issues pretty fast. My current scenario is as follows:
- get the data to be filled in from the db
- create an FDF based on a single data row and the PDF form
- write the .fdf file to disk
- merge the PDF with the FDF using pdftk (fill_form with the flatten option)
- continue iterating over rows until all the .pdf files are generated
- merge all the generated files together in the end and serve the single PDF to the client
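To make the loop concrete, here is a rough sketch of one iteration in Python (my actual code is PHP; `build_fdf`, `fill_one`, and the field names are illustrative, and the escaping covers only the basic special characters):

```python
import os
import subprocess
import tempfile

def build_fdf(fields):
    """Build a minimal FDF document from a dict of field name -> value.
    Field names must match the form field names in the PDF template."""
    def esc(s):
        # Escape characters that are special inside FDF string literals.
        return s.replace('\\', r'\\').replace('(', r'\(').replace(')', r'\)')
    body = "".join(
        "<< /T ({}) /V ({}) >>\n".format(esc(k), esc(v))
        for k, v in fields.items()
    )
    return ("%FDF-1.2\n"
            "1 0 obj\n"
            "<< /FDF << /Fields [\n" + body + "] >> >>\n"
            "endobj\n"
            "trailer\n"
            "<< /Root 1 0 R >>\n"
            "%%EOF\n")

def fill_one(template_pdf, fields, out_pdf):
    """Write the FDF to a temp file, then let pdftk merge and flatten it."""
    with tempfile.NamedTemporaryFile(mode="w", suffix=".fdf",
                                     delete=False) as f:
        f.write(build_fdf(fields))
        fdf_path = f.name
    try:
        subprocess.run(
            ["pdftk", template_pdf, "fill_form", fdf_path,
             "output", out_pdf, "flatten"],
            check=True,
        )
    finally:
        os.remove(fdf_path)
```

So per row there is one FDF write, one pdftk process spawn, and one output PDF write, which is where I assume most of the 50 seconds goes.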
I use passthru to send the raw output to the client (saves the time of writing a file), but that is only a small improvement. The total operation takes about 50 seconds for 200 records, and I would like to get it down to 10 seconds or so.
The ideal scenario would be to handle all these PDFs in memory instead of writing each one to a separate file, but then producing the output seems impossible, as I can't pass that kind of data to an external tool like pdftk.
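One thing I noticed while experimenting: pdftk's man page documents `-` as a stand-in for stdin/stdout, so at least the .fdf temp file and the intermediate .pdf could be avoided by piping. A hedged sketch (in Python; `pdftk_fill_args` and `fill_in_memory` are my own illustrative names, and I haven't benchmarked whether this actually helps):

```python
import subprocess

def pdftk_fill_args(template_pdf):
    # '-' after fill_form reads the FDF from stdin;
    # 'output -' writes the flattened PDF to stdout.
    return ["pdftk", template_pdf, "fill_form", "-",
            "output", "-", "flatten"]

def fill_in_memory(template_pdf, fdf_bytes):
    """Feed the FDF to pdftk on stdin and capture the filled,
    flattened PDF from stdout -- no temp files on disk."""
    proc = subprocess.run(pdftk_fill_args(template_pdf),
                          input=fdf_bytes,
                          stdout=subprocess.PIPE,
                          check=True)
    return proc.stdout  # raw PDF bytes, ready to stream or concatenate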
One other idea was to generate one big .fdf file containing all the rows, but it looks like that is not allowed.
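For reference, the single-row FDF files I generate look essentially like this (the field name and value are placeholders):

```
%FDF-1.2
1 0 obj
<< /FDF << /Fields [
<< /T (first_name) /V (John) >>
] >> >>
endobj
trailer
<< /Root 1 0 R >>
%%EOF
```

As far as I can tell, the /Fields array describes a single set of values for the form, which would explain why stacking several rows into one file doesn't work.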
Am I missing something very trivial here?
I'm thankful for any advice.
PS. I know I could use a good library like PDFlib, but I am only considering openly licensed libraries right now.
EDIT:
I am now trying to figure out the syntax for building an .fdf file with multiple pages that use the same PDF as a template; I've spent a few hours on it and couldn't find any good documentation.