The primary advantage is just that you can send objects to Copy-Item
through a pipe instead of strings or filespecs. So you could do:
Get-ChildItem '\\fileserver\photos\*.jpeg' -File |
    Where-Object { ($_.LastAccessTime -ge (Get-Date).AddDays(-1)) -and ($_.Length -le 500000) } |
    Copy-Item -Destination '\\webserver\photos\'
That's kind of a poor example (you could do that with Copy-Item -Filter), but it's an easy one to come up with on the fly. It's pretty common when working with files to end up with a pipeline from Get-ChildItem, and I personally tend to do that a lot just because of the -Recurse -Include bug with Remove-Item.
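For reference, the usual workaround for that quirk is to let Get-ChildItem do the recursive filtering and pipe the results to Remove-Item. A minimal sketch (the path and filter here are just placeholders):

```powershell
# -Recurse -Include on Remove-Item itself is unreliable, so filter
# with Get-ChildItem and pipe; -WhatIf previews what would be deleted.
Get-ChildItem -Path 'C:\temp' -Recurse -Include '*.tmp' |
    Remove-Item -WhatIf
```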
You also get PowerShell's error trapping; special parameters like -PassThru, -WhatIf, and -UseTransaction; and all the common parameters as well. Copy-Item -Recurse can replicate some of xcopy's tree-copying abilities, but it's pretty bare-bones.
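To illustrate those parameters (paths are hypothetical), -WhatIf previews the operation without touching the filesystem, and -PassThru emits the copied FileInfo objects so the pipeline can continue:

```powershell
# Dry run: reports what would be copied, copies nothing.
Copy-Item 'C:\src\report.docx' -Destination 'D:\backup\' -WhatIf

# Real copy that also outputs the new file object for further piping.
Copy-Item 'C:\src\report.docx' -Destination 'D:\backup\' -PassThru |
    Select-Object FullName, Length
```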
Now, if you need to maintain ACLs, ownership, auditing, and the like, then xcopy or robocopy are probably going to be much easier because that functionality is built in. I'm not sure how Copy-Item handles copying encrypted files to non-encrypted locations (xcopy has some ability to do this), and I don't believe Copy-Item supports managing the archive attribute directly.
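As a sketch of what that built-in support looks like, robocopy's /COPYALL flag (shorthand for /COPY:DATSOU) carries data, attributes, timestamps, security (ACLs), owner, and auditing info in one pass; the share paths below are reused from the earlier example:

```powershell
# /E copies subdirectories (including empty ones);
# /COPYALL preserves ACLs, ownership, and auditing along with the data.
robocopy \\fileserver\photos \\webserver\photos /E /COPYALL
```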
If it's speed you're looking for, then I would suspect that xcopy and robocopy would win out. Managed code has higher overhead in general. Xcopy and robocopy also offer a lot more control over how well they work with the network.
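For example, robocopy exposes knobs that Copy-Item has no equivalent for, such as multi-threaded copying and deliberate throttling on slow links (again using the example share paths):

```powershell
# /MT:16 copies with 16 parallel threads for throughput;
# /IPG:50 inserts a 50 ms inter-packet gap to limit bandwidth use.
robocopy \\fileserver\photos \\webserver\photos /E /MT:16
robocopy \\fileserver\photos \\webserver\photos /E /IPG:50
```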