Oracle PL/SQL UTL_FILE.PUT buffering

I'm writing a large file (> 7 MB) from an Oracle stored procedure, and the requirement is to have no line termination characters (no carriage return/line feed) at the end of each record.

I've written a stored procedure using UTL_FILE.PUT, and I follow each call to UTL_FILE.PUT with a call to UTL_FILE.FFLUSH. The procedure fails with a write error once I've written more than the buffer size (set to the maximum of 32767), even though I'm making the FFLUSH calls. It works fine if I replace the PUT calls with PUT_LINE calls.
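
A minimal sketch of the pattern I'm describing (the directory object, file name, and record contents are placeholders, not the actual code):

    DECLARE
       l_file   UTL_FILE.FILE_TYPE;
       l_record VARCHAR2(200);
    BEGIN
       -- 'DATA_DIR' is a placeholder directory object; max_linesize set to the 32767 maximum
       l_file := UTL_FILE.FOPEN('DATA_DIR', 'output.dat', 'w', 32767);

       FOR i IN 1 .. 100000 LOOP
          l_record := RPAD('record' || i, 80, 'x');   -- dummy fixed-width record
          UTL_FILE.PUT(l_file, l_record);             -- no line terminator
          UTL_FILE.FFLUSH(l_file);                    -- write error once more than 32767 bytes written
       END LOOP;

       UTL_FILE.FCLOSE(l_file);
    END;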

Is it not possible to write more than the buffer size without a newline character? If so, is there a workaround?

Cheder asked 26/8, 2011 at 14:36 Comment(0)

Dustin,

The Oracle documentation here: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#i1003404

states that: "FFLUSH physically writes pending data to the file identified by the file handle. Normally, data being written to a file is buffered. The FFLUSH procedure forces the buffered data to be written to the file. The data must be terminated with a newline character."

The last sentence is the most pertinent here.

Could you not write the data using UTL_FILE.PUT_LINE and then post-process the resulting file to find and remove the line terminators?

Just a thought....

Hemstitch answered 26/8, 2011 at 14:51 Comment(2)
Thanks for clarifying how FFLUSH works; I hadn't seen that last sentence in my searches. – Cheder
Hi @Hemstitch, I am still confused about how FFLUSH works. If I call PUT and then FFLUSH for every record (from byte 1 up to 32767 + n), why is a newline still needed? Could you help explain? – Petronia

(Deleted quote from the docs; see Ollie's answer.)

Another possible way to do this is a Java stored procedure, where you can use the more full-featured Java API for creating and writing to files.

Eichelberger answered 26/8, 2011 at 14:55 Comment(3)
I'm intrigued by the Java stored procedure. Although I believe the Java code itself would be straightforward, we don't do any Java coding in our shop, so it would be a maintenance issue for someone to have to pick up. – Cheder
Would you please explain how we can resolve this problem with a BFILE object? – Petronia
@Jaskey, actually I think I was incorrect about that. I believe BFILEs are read-only, so they couldn't be used to write data out to an OS file. – Eichelberger

Although it is less than desirable, you could always PUT until you detect that you are nearing the buffer size. When that happens, FCLOSE the file handle (which flushes the buffer) and re-open the same file with FOPEN using 'a' (append) as the mode. Again, this technique should generally be avoided, especially if other processes are also trying to access the file (for example, closing a file usually releases any locks the process had placed on it, freeing up other processes that were trying to acquire a lock).
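
A rough sketch of the idea (untested; the directory object, file name, and the 30,000-byte threshold are only illustrative):

    DECLARE
       l_file    UTL_FILE.FILE_TYPE;
       l_record  VARCHAR2(200);
       l_written PLS_INTEGER := 0;
       c_limit   CONSTANT PLS_INTEGER := 30000;  -- stay comfortably under the 32767 buffer
    BEGIN
       l_file := UTL_FILE.FOPEN('DATA_DIR', 'output.dat', 'w', 32767);

       FOR i IN 1 .. 100000 LOOP
          l_record := RPAD('record' || i, 80, 'x');  -- dummy fixed-width record

          IF l_written + LENGTH(l_record) > c_limit THEN
             UTL_FILE.FCLOSE(l_file);  -- closing flushes the buffered data
             l_file := UTL_FILE.FOPEN('DATA_DIR', 'output.dat', 'a', 32767);  -- re-open in append mode
             l_written := 0;
          END IF;

          UTL_FILE.PUT(l_file, l_record);  -- still no line terminators
          l_written := l_written + LENGTH(l_record);
       END LOOP;

       UTL_FILE.FCLOSE(l_file);
    END;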

Tetrahedron answered 27/8, 2011 at 12:17 Comment(0)

Thanks for all the great responses; they have been very helpful. The Java stored procedure looked like the way to go, but since we don't have a lot of Java expertise in-house, it would be frowned upon by management. However, I was able to find a way to do this from the stored procedure. I had to open the file in write byte mode ('wb'). Then, for each record I'm writing to the file, I convert it to the RAW datatype with UTL_RAW.CAST_TO_RAW and write it with UTL_FILE.PUT_RAW, followed by any necessary FFLUSH calls to flush the buffers. The receiving system has been able to read the files; so far so good.
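
A minimal sketch of the approach described above (the directory object, file name, and record contents are placeholders):

    DECLARE
       l_file   UTL_FILE.FILE_TYPE;
       l_record VARCHAR2(200);
    BEGIN
       -- open in write byte mode ('wb'); 'DATA_DIR' is a placeholder directory object
       l_file := UTL_FILE.FOPEN('DATA_DIR', 'output.dat', 'wb', 32767);

       FOR i IN 1 .. 100000 LOOP
          l_record := RPAD('record' || i, 80, 'x');  -- dummy fixed-width record
          -- cast each record to RAW and write it with no line terminator
          UTL_FILE.PUT_RAW(l_file, UTL_RAW.CAST_TO_RAW(l_record));
          UTL_FILE.FFLUSH(l_file);  -- flush the buffer as needed
       END LOOP;

       UTL_FILE.FCLOSE(l_file);
    END;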

Cheder answered 30/8, 2011 at 11:33 Comment(0)
