When writing records to an AWS Firehose delivery stream that is configured with S3 as its destination, how long is the data buffered before it is written to S3? Or is there a minimum size threshold that must be reached first?
For example, I'm doing the following to add records:
AWS CLI:
aws firehose put-record --delivery-stream-name mytestfirehoseafds --record='Data="{\"asdf\":\"testam\"}"'
result:
{
"RecordId": "meESlTCUOBQwXaJ9NOVwKOLrEL+7y/glB0mIJ6h6Sz8lOJGUX/N+DlZttq4BQuY528j6ResbxQBR4To+V1RMbBvE4rcxP3kYwg0lmdBAEFWlNnzUb3nP214ywtRYRQ7IzCOjY9o1YPpqHNCCYkPd4Qr0StIFxIiBHHZvTcfW+qMbQkcy7Rr3R+wb+RVs9fEF2Fa8P6mD2NXJOE84sasPNYB/mrjaSMn9"
}
I do not see this record immediately in my S3 bucket; however, if I use the "Test Data" feature in the AWS console, I do see files being added to S3 that contain the test data, including my test record from above.
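One way to test whether a size threshold alone can trigger delivery would be to send a burst of records in a loop. This is just a sketch, reusing the stream name and record from my command above; the count of 100 is arbitrary and the loop variable is unused since the payload is identical each time:

# Sketch: resend the identical record many times. If Firehose flushes once
# a size threshold is crossed, objects should appear in S3 before any time
# interval elapses.
for i in $(seq 1 100); do
    aws firehose put-record \
        --delivery-stream-name mytestfirehoseafds \
        --record='Data="{\"asdf\":\"testam\"}"'
done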
So my questions are:
1.) Does Firehose have some kind of buffer threshold that it must reach before it writes its buffered data to its destination?
2.) How can I determine what data/records are within the Firehose buffer at any given time?
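For question 1, I assume the relevant settings are the stream's buffering hints, which presumably can be inspected with describe-delivery-stream. This is a sketch reusing the stream name from above; the exact JSON path to the hints may differ depending on how the destination was created:

# Sketch: dump the destination config; look for a BufferingHints block with
# SizeInMBs and IntervalInSeconds (it may sit under S3DestinationDescription
# or ExtendedS3DestinationDescription).
aws firehose describe-delivery-stream \
    --delivery-stream-name mytestfirehoseafds \
    --query 'DeliveryStreamDescription.Destinations[0]'

For question 2, the closest thing I can think of is CloudWatch rather than inspecting the buffer directly. Another sketch; I'm assuming the AWS/Firehose namespace, the IncomingRecords metric, and the DeliveryStreamName dimension apply here, and the timestamps are placeholders:

# Sketch: count records that arrived in a one-hour window, in 5-minute bins.
aws cloudwatch get-metric-statistics \
    --namespace AWS/Firehose \
    --metric-name IncomingRecords \
    --dimensions Name=DeliveryStreamName,Value=mytestfirehoseafds \
    --start-time 2017-01-01T00:00:00Z \
    --end-time 2017-01-01T01:00:00Z \
    --period 300 \
    --statistics Sum

Comparing that against a delivery-side metric such as DeliveryToS3.Records (assuming that metric applies to my stream) would at least approximate how much has been accepted but not yet delivered.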