Downloading artifacts from Jenkins using wget or curl

I am trying to download an artifact from a Jenkins project using a DOS batch script. The reason that this is more than trivial is that my artifact is a ZIP file which includes the Jenkins build number in its name, hence I don't know the exact file name.

My current plan of attack is to use wget pointing at: /lastSuccessfulBuild/artifact/ to do some sort of recursive/mirror download.

If I do the following:

wget -r -np -l 1 -A zip --auth-no-challenge --http-user=**** --http-password=****  http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/

(*s are chars I've changed for posting to SO)

I never get a ZIP file. If I omit the -A zip option, I do get the index.html, so I think the authorisation is working, unless it's some sort of session caching issue?

With -A zip I get as part of the response:

Removing ...+8080/job/MyProject/lastSuccessfulBuild/artifact/index.html since it should be rejected.

So I'm not sure whether it's removing that file and therefore never following its links. But using -A zip,html doesn't work either.

I've tried several wget options, and also curl, but I am getting nowhere.

I don't know if I have the wrong wget options or whether there is something special about Jenkins authentication.

Merkle answered 15/7, 2015 at 13:44 Comment(0)

You can append /*zip*/desired_archive_name.zip to any folder under the artifacts location.

If your ZIP file is the only artifact that the job archives, you can use:

http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/*zip*/myfile.zip

where myfile.zip is just a name you assign to the downloadable archive; it could be anything.

If you have multiple artifacts archived, you can either still get one ZIP file containing all of them and deal with the individual files on extraction, or place the artifact you want into a separate folder and apply /*zip*/ to that folder.
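For illustration, a download using this URL scheme might look like the following (JENKINS_HOST, USER, and API_TOKEN are placeholders, and the dist/ subfolder in the second command is just an assumed example of a separate artifact folder):

curl -fL --user "USER:API_TOKEN" -o artifacts.zip "http://JENKINS_HOST:8080/job/MyProject/lastSuccessfulBuild/artifact/*zip*/artifacts.zip"

curl -fL --user "USER:API_TOKEN" -o dist.zip "http://JENKINS_HOST:8080/job/MyProject/lastSuccessfulBuild/artifact/dist/*zip*/dist.zip"

Here -f makes curl fail on HTTP errors, -L follows redirects, and --user sends the credentials using basic authentication.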

Therm answered 15/7, 2015 at 15:00 Comment(7)
That's quite an interesting idea, which I didn't know you could do, thanks. But it still leaves me the headache of a double-zipped file, one of which I don't know the name of. I'd still be interested to know why the wget command isn't working. - Merkle
You shouldn't need -A zip to get the artifacts; however, you do need to know the artifact name. If you don't know the artifact name, you can /*zip*/ the whole folder. There is no problem with a zip inside a zip. Once you get it, extract the first zip into some folder, and there you have your whole artifacts' content. You can now use filesystem wildcards to get to the rest of your files (you do know part of the filename, right?) - Therm
The /artifact/ endpoint is an index.html page that shows the web interface for getting at the actual artifact (by exact name, or via the zip method). - Therm
I got round to trying your idea in anger and it really isn't that difficult to handle the double zip. I used curl in the end, but I imagine wget will work just as well. So after downloading as you suggested to, say, downloaded.zip, I did: 7z x downloaded.zip -Ounzip1 -r -y followed by 7z x unzip1\archive\*.zip -OfinalDestinationDir -r -y - Merkle
So thank you for all your help. I'll accept your answer. Cheers! - Merkle
The "/*zip*/" saved my day. Thanks! - Ruggles
I was also able to get the /*zip*/ undocumented feature working. Pretty cool. But I can't use it, because for me it defeats the purpose of not just using copyArtifact, which is to save time. When you fetch <ARTIFACT_SUBFOLDER>/*zip*/, it generates a new zip file, which takes considerable time for larger artifacts. So I've been experimenting with just using scp, which already has a rich interface for filtering filesystem globs. The problem with that is determining the source path, which for multibranch pipeline jobs in my case is non-trivial. - Denticulation
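Putting the pieces from these comments together, a minimal DOS batch sketch of the download-and-double-extract workflow might look like this (JENKINS_HOST, USER, API_TOKEN, and the output folder names are placeholder assumptions; the archive\ path follows the layout shown in the comment above):

@echo off
REM Fetch a zip of everything under lastSuccessfulBuild/artifact/ (placeholder host and credentials)
curl -fL --user "USER:API_TOKEN" -o downloaded.zip "http://JENKINS_HOST:8080/job/MyProject/lastSuccessfulBuild/artifact/*zip*/downloaded.zip"
REM First pass: unpack the wrapper zip that Jenkins generated on the fly
7z x downloaded.zip -ounzip1 -r -y
REM Second pass: unpack the build's own zip, whose name contains the unknown build number
7z x unzip1\archive\*.zip -ofinalDestinationDir -r -y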
