Having multiple threads on a single-core CPU can improve performance in the majority of cases, because most of the time a thread is not busy doing computations; it is waiting for things to happen.
This includes I/O, such as waiting for a disk operation to complete or for the user to press a key or move the mouse, and even some non-I/O situations, such as waiting for another thread to signal that an event has occurred or for a timer to fire.
So, since threads spend the vast majority of their time doing nothing but waiting, they compete against each other for the CPU far less frequently than you might think.
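To make this concrete, here is a minimal Python sketch (`time.sleep` is my stand-in for any blocking I/O call; the task itself is hypothetical): four threads each "wait" for one second, yet the whole batch finishes in roughly one second even on a single core, because a waiting thread yields the CPU instead of occupying it.

```python
import threading
import time

def io_bound_task(name: str) -> None:
    # time.sleep stands in for a blocking I/O call (disk read, network
    # request, keypress): while sleeping, the thread uses no CPU at all.
    time.sleep(1.0)
    print(f"{name} finished")

start = time.perf_counter()
threads = [threading.Thread(target=io_bound_task, args=(f"task-{i}",))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expect roughly 1 second, not 4: the four waits overlap,
# even with only a single core available.
print(f"4 one-second waits took {time.perf_counter() - start:.2f}s total")
```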
That's why if you look at the number of active threads on a modern desktop computer you are likely to see hundreds of them, and on a server, thousands. That's clearly far more than the number of cores the computer has, and obviously it would not be done if there were no benefit to it.
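If you want to check this on your own machine, here is a quick sketch using the third-party `psutil` package (an assumption on my part; any OS tool such as Task Manager or `ps -eLf` will show you the same numbers):

```python
import psutil  # third-party: pip install psutil

total = 0
for proc in psutil.process_iter():
    try:
        total += proc.num_threads()
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass  # processes can exit or deny access mid-iteration
print(f"{total} threads across {psutil.cpu_count()} logical cores")
```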
The only situation where multiple threads on a single core will not improve performance is when the threads are CPU-bound, doing non-stop computations. This tends to happen only in specialized situations, like scientific computing, cryptocurrency mining, etc.
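You can see the contrast with a CPU-bound variant of the earlier sketch. Note that in CPython the GIL serializes pure-Python computation, so the threaded version behaves much like a single-core machine would even on multicore hardware, which makes it a convenient illustration here:

```python
import threading
import time

def cpu_bound_task(n: int) -> int:
    # Pure computation: the thread never waits, so it never
    # voluntarily yields the CPU.
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 2_000_000

start = time.perf_counter()
for _ in range(4):
    cpu_bound_task(N)
sequential = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=cpu_bound_task, args=(N,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# On a single core the threaded run is no faster than the sequential
# one: there is no waiting to overlap, only computation to interleave.
print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```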
So, multiple threads on a single-core system do usually increase performance, but this has very little to do with memory, and to the extent that it does, it has nothing to do with any notion of "separated" memory, whatever you mean by that term.
As a matter of fact, running multiple threads that mostly access different areas of memory on the same core, or even on different cores of the same chip, tends to hurt performance: each time the CPU switches from one thread to another, it begins to access a different set of memory locations, which are unlikely to be in the CPU's cache. Each context switch therefore tends to be followed by a barrage of cache misses, which is pure overhead. But usually, it is still worth it.