I am downloading a file from www.examplesite.com/textfile.txt. When running the following command:
wget www.examplesite.com/textfile.txt
the file is saved as textfile.txt. How can I save it as newfile.txt?
Use the -O file option. E.g.:
wget google.com
...
16:07:52 (538.47 MB/s) - `index.html' saved [10728]
vs.
wget -O foo.html google.com
...
16:08:00 (1.57 MB/s) - `foo.html' saved [10728]
From the wget man page: "wget -O file http://foo is intended to work like wget -O - http://foo > file; file will be truncated immediately, and all downloaded content will be written there." – Persuader
You can also stream the download to the console and append it to a file, for example wget -O - -o /dev/null http://google.com >> foo.html – Okun
Note the lowercase -o there! – Westminster
Is there a way to have wget apply the extension of the file you're downloading? E.g. wget -P /my/dir --stem foo https://example.com/img.jpg → /my/dir/foo.jpg – Eugenieeugenio
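To make the redirection equivalence quoted above concrete, here is a minimal sketch; the URL and file name (http://example.com/, page.html) are placeholders, not taken from the question:
# Both commands should leave the same contents in page.html:
wget -O page.html http://example.com/
wget -O - -o /dev/null http://example.com/ > page.html
# -O - streams the downloaded document to stdout, -o /dev/null discards the log;
# using >> instead of > appends to page.html rather than truncating it.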
Also notice the order of parameters on the command line. At least on some systems (e.g. CentOS 6):
wget -O FILE URL
works. But:
wget URL -O FILE
does not work.
You would use the command Mechanical snail listed; notice the uppercase O. The full command line to use could be:
wget www.examplesite.com/textfile.txt --output-document=newfile.txt
or
wget www.examplesite.com/textfile.txt -O newfile.txt
Hope that helps.
--output-document=newfile.txt is what worked for me. All attempts to use -O failed with the error "Resolving webmin_1.630_all.deb (webmin_1.630_all.deb)... failed: Name or service not known." – Quittance
Either curl or wget can be used in this case. All 3 of these commands do the same thing: they download the file at http://path/to/file.txt and save it locally as "my_file.txt".
Note that in all of the commands below I also recommend using the -L or --location option with curl, in order to follow HTTP 302 redirects to the new location of the file if it has moved. wget requires no additional options for this, as it follows redirects automatically.
# save the file locally as my_file.txt
wget http://path/to/file.txt -O my_file.txt # my favorite--it has a progress bar
curl -L http://path/to/file.txt -o my_file.txt
curl -L http://path/to/file.txt > my_file.txt
Alternatively, to save the file locally under the same name it has remotely, use either wget by itself, or curl with -O or --remote-name:
# save the file locally as file.txt
wget http://path/to/file.txt
curl -LO http://path/to/file.txt
curl -L --remote-name http://path/to/file.txt
Notice that the -O
in all of the commands above is the capital letter "O".
The nice thing about the wget command is that it shows a progress bar.
You can prove that the files downloaded by each of the 3 techniques above are identical by comparing their sha512 hashes. Running sha512sum my_file.txt after each of the commands above and comparing the results shows that all 3 files have the exact same hash, meaning they are identical byte-for-byte.
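A minimal sketch of that check; the numbered output filenames are introduced here only for the comparison and are not part of the answer above:
# Download the same file three ways, then compare the hashes
wget http://path/to/file.txt -O my_file_1.txt
curl -L http://path/to/file.txt -o my_file_2.txt
curl -L http://path/to/file.txt > my_file_3.txt
sha512sum my_file_1.txt my_file_2.txt my_file_3.txt
# all three lines of output should show the same sha512 hash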
You can read more about the -L option with curl here: Is there a way to follow redirects with command line cURL? See also: How to capture cURL output to a file?
wget -O yourfilename.zip remote-storage.url/theirfilename.zip
will do the trick for you.
Note:
a) it's a capital O.
b) only wget -O filename url will work; putting -O last will not.
Using CentOS Linux I found that the easiest syntax would be:
wget "link" -O file.ext
where "link"
is the web address you want to save and "file.ext"
is the filename and extension of your choice.
wget -O newfile.txt. – Steiger
wget -o will output log information to a file. wget -O will output the downloaded content. man wget will tell you all of this and more. – Coz
Is there a way to find out which name wget will use for the output file? I'd like to use wget's default name and then process the file afterwards, but to do the post-processing I need the name that wget used for the file. – Eugenieeugenio
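To illustrate the difference between -o and -O mentioned in the comment above, here is a small sketch; the URL and file names are placeholders, not from the original comments:
# -O (uppercase) sets where the downloaded content is written;
# -o (lowercase) sets where wget writes its log messages.
wget -O page.html -o download.log http://example.com/
# page.html now contains the downloaded document; download.log contains wget's log output.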