I'd recommend using another indexed column on the scan model where you can store an id or token to query the scan record by. Maybe call it sync_id or something.
If you take this route, you don't have to worry about the differing scan_ids on the background process records. Just be sure to send the background process records with the scan's JSON body. (Assuming you're using JSON as the format for your APIs.)
Here's the general idea: make sure your sending API service sends the entire scan record along with its dependent background processes. The receiving API service then uses that scan record's sync_id to query for an existing scan record and update it. You'll also need some sort of unique identifier on the background process records to ensure you're not creating duplicates; if need be, give the background processes a sync_id as well. If no scan record with that sync_id exists, create it along with its dependent background processes.
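For instance, the receiving side's upsert might look something like this. This is a minimal sketch: the upsert_scan method name, the has_many :background_processes association, and the payload key handling are assumptions you'd adapt to your own schema.

# `payload` is the parsed JSON body from the sending service.
def upsert_scan(payload)
  scan = Scan.find_or_initialize_by(sync_id: payload["sync_id"])
  # Don't copy the sender's primary key; the receiver assigns its own.
  scan.update!(payload.except("id", "sync_id", "background_process"))

  Array(payload["background_process"]).each do |bp_attrs|
    bp = scan.background_processes.find_or_initialize_by(sync_id: bp_attrs["sync_id"])
    bp.update!(bp_attrs.except("id", "sync_id"))
  end

  scan
end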
Essentially, the sending service's API POST request might look something like this:
{
  id: 1,
  sync_id: "sometoken",
  ... # other record columns
  background_process: [
    {
      id: 123,
      ... # other record columns
    }
  ]
}
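On the sending side, something like this would build and POST that payload. It's a rough sketch: the endpoint URL and the :background_processes association name are assumptions, and note that as_json keys the nested records by the association name, so adjust it to whatever key the receiver expects.

require "net/http"
require "json"

def push_scan(scan)
  # Serialize the scan together with its dependent records in one body.
  payload = scan.as_json(include: :background_processes)
  uri = URI("https://receiver.example.com/scans/sync")
  Net::HTTP.post(uri, payload.to_json, "Content-Type" => "application/json")
end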
Be sure the sync_id you use is unique. Use something like this in the scan model to generate it on a before_create hook:
def set_sync_id
  random_token = SecureRandom.urlsafe_base64
  # Regenerate on the (unlikely) chance of a collision. exists? issues a
  # cheap SELECT 1 instead of loading records the way present? does.
  while Scan.exists?(sync_id: random_token)
    random_token = SecureRandom.urlsafe_base64
  end
  self.sync_id = random_token
end
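Then register the hook in the model, and consider backing it with a unique index so the database itself rejects duplicates if two concurrent creates race past the Ruby-side check:

class Scan < ApplicationRecord
  before_create :set_sync_id
end

# In a migration:
add_index :scans, :sync_id, unique: true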