In the example code, I included a ‘--spider’ option for wget. This option keeps wget from saving the downloaded page as a file, which is handy for not cluttering up your home directory.
Unfortunately, the --spider option means that wget only issues a HEAD request for the file, which may not cause the code in the file to be executed. I had two virtually identical commands set up (with --spider); one worked and one didn’t.
I’ve updated the online versions of the documentation to exclude the --spider option (good thing the distributed docs also include a link to the online version of the page).
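For reference, a quick way to see the difference, using example.com as a stand-in for whatever page you need to trigger:

```shell
# With --spider, wget sends only a HEAD request and writes nothing to
# disk; as noted above, a HEAD request may not cause the server-side
# code in the page to be executed.
wget --spider -q http://example.com/

# Without --spider, wget issues a full GET and saves the response body.
wget -q http://example.com/
```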
I’ve received the following recommendations:
- For wget, pass in a filename with -O
wget -q -O temp.txt http://example.com....
so that each run will overwrite the same file.
- For curl, redirect the output to /dev/null
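Putting both suggestions together, a sketch of the two fixed invocations (the URL is a placeholder for the page whose code needs to run):

```shell
# wget: -O writes to a fixed filename, so each run overwrites temp.txt
# instead of accumulating index.html.1, index.html.2, and so on; unlike
# --spider, a full GET request is issued.
wget -q -O temp.txt http://example.com/

# curl: prints the body to stdout by default, so redirecting stdout to
# /dev/null discards it while the full GET still happens.
curl -s http://example.com/ > /dev/null
```

curl’s `-o /dev/null` option is equivalent to the redirect and avoids involving the shell.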