Mac M1 `cp`ing binary over another results in crash

Recently, I've been observing an issue on my M1 that happens after copying a binary file over another binary file without first deleting it. After some experimentation, I've come up with a reproducible way to hit this issue on Apple's new hardware on the latest 11.3 release of Big Sur.

The issue happens when copying a differing binary over another binary after both have been run at least once. I'm not sure what is causing it, but it's very perplexing and could potentially lead to some security issues.

For example, this produces the error:

> ./binaryA
# output A
> ./binaryB
# output B
> cp binaryA binaryB
> ./binaryB
Killed: 9 

Setup

In order to reproduce the above behavior, we can create two simple C files with the following contents:

// binaryA.c
#include <stdio.h>

int main() {
    printf("Hello world!");
}

// binaryB.c
#include <stdio.h>

const char s[] = "Hello world 123!"; // to make sizes differ for clarity

int main() {
    printf("%s", s);
}

Now, run the following commands to reproduce the error (note that each binary must be executed at least once before the overwrite, so the two runs below are necessary):

> gcc -o binaryA binaryA.c
> gcc -o binaryB binaryB.c
> ./binaryA
Hello world!
> ./binaryB
Hello world 123!
> cp binaryA binaryB
> ./binaryB
Killed: 9

As you can see, the binaryB binary no longer works. For all intents and purposes, the two binaries are now identical, yet one runs and the other doesn't. A diff of the two binaries returns nothing.

I'm assuming this is some sort of signature issue? But it shouldn't be, because neither binary is signed anyway.
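
In case it's useful, here is how one could inspect the signing state with codesign. This is only a sketch of the check (exact output varies by toolchain); on arm64 the linker apparently applies an ad hoc signature automatically:

> codesign -dv ./binaryA
# typically reports Signature=adhoc and a linker-signed CodeDirectory
> codesign -dv ./binaryB
# run this again after the cp and compare the reported CDHash values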

Does anyone have a theory behind this behavior or is it a bug? Also, if it is a bug, where would I even file this?

Freebooter answered 4/5, 2021 at 2:42 Comment(1)
Note that overwriting code with cp is generally unsafe; mv should be used to update files, so that any process reading the file while it is being replaced sees a consistent version. This is especially risky for shell scripts, since the shell keeps reading the script file as it executes. - Buchan

Whenever you update a signed file, you need to create a new file.

Specifically, the code signing information (code directory hash) is hung off the vnode within the kernel, and modifying the file behind that cache will cause problems. You need a new vnode, which means a new file, that is, a new inode. Documented in WWDC 2019 Session 703 All About Notarization - see slide 65.

This is because Big Sur on the ARM M1 processor requires all code to be validly signed (even if only ad hoc), or the operating system will not execute it, instead killing it on launch.
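
In practical terms, a fix under those constraints is either to produce a brand-new file and rename it into place, or to force a fresh ad hoc signature onto the overwritten file. A rough sketch (binaryB.new is just a scratch name; both variants are commonly reported to work, but treat this as illustrative rather than definitive):

# variant 1: write a new file, then rename it over the old one,
# so binaryB ends up backed by a new inode/vnode
> cp binaryA binaryB.new
> mv binaryB.new binaryB

# variant 2: re-sign the overwritten file with a fresh ad hoc
# signature ("-s -" selects the ad hoc identity, -f replaces any
# existing signature)
> codesign -s - -f ./binaryB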

Nimrod answered 4/5, 2021 at 11:20 Comment(1)
Apple's official documentation on that topic: developer.apple.com/documentation/security/… - Eamon

While Trev's answer is technically correct (the best kind of correct?), the likely answer is also that this is a bug in cp, or at least an oversight in the interaction between cp and the security sandbox, which causes a bad user experience (and bad UX == a bug in my book, no matter the intention).

I'm going to take a wild guess (best kind of guess!) and posit that when this was first implemented, someone hooked into the inode deletion as a trigger for resetting the binary signature state. It is very possible that, at the time that they implemented this, cp actually removed/destructively replaced the vnode/inode as part of the copy, so everything worked great. Then, at some point, someone else went and optimized cp to no longer be a destructive inode operation - and this is how the best bugs come to be!
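
That part is easy to poke at, for what it's worth: with the stock cp, copying over an existing file appears to reuse the destination's inode, while renaming a fresh copy into place gives it a new one. A quick, illustrative check (binaryB.new is just a scratch name; the inode numbers will differ on your machine):

> ls -i binaryB
# note the inode number
> cp binaryA binaryB
> ls -i binaryB
# same inode: cp truncated and rewrote the existing file
> cp binaryA binaryB.new && mv binaryB.new binaryB
> ls -i binaryB
# different inode: the rename swapped in a brand-new file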

Lordosis answered 9/9, 2022 at 9:36 Comment(2)
I don't believe it's a cp bug. Reason: even when a compiler overwrites an earlier binary, the issue arises, and that is how I first came across it :) - Nimrod
What surprises me is that the process gets killed when I use cp to clone the binary, but it works when I use mv to move the binary. - Brahui
