In my Tasks Pro™ and Tasks documentation, I have example code for setting up CRON jobs to load URLs. In the case of my software, those URLs kick off the e-mail reminders and activate recurring tasks.
In the example code, I included a --spider option for wget. This option keeps wget from saving the downloaded page as a file, which is handy for not cluttering up your home directory.
Unfortunately, the --spider option means that wget only makes a HEAD request, which may not cause the code behind the URL to be executed. I had two virtually identical commands set up (both with --spider); one worked and one didn't.
I've updated the online versions of the documentation to remove the --spider option (good thing the distributed docs also include a link to the online version of the page).
I’ve received the following recommendations:
- For wget, pass in a filename with -O
wget -q -O temp.txt http://example.com....
so that each run will overwrite the same file.
- For curl, pipe the output to /dev/null
I’m not a server geek, so concrete examples (especially using the examples in the doc pages) would be quite welcome.
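As a rough sketch, the two recommendations above might look like this in a crontab. The URLs, file path, and every-15-minutes schedule here are placeholders, not the actual ones from the doc pages:

```shell
# wget variant: -O writes the page to a fixed file, overwritten on each run
*/15 * * * * wget -q -O /tmp/tasks-cron.txt http://example.com/reminders.php

# curl variant: -s silences progress output; the page body is discarded
*/15 * * * * curl -s http://example.com/reminders.php > /dev/null
```

Either way the URL is fetched with a full GET request, so the code behind it actually runs.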
This post is part of the project: Tasks Pro™.
wget -q -O- http://www.example.c[...]Bpassword%5D > /dev/null
-O- sends the output to standard output, which can then be redirected to /dev/null
I use this:
wget -q --delete-after URL
Gaston
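As a sketch, that option in a crontab entry (the URL and schedule are placeholders): --delete-after fetches the page with a full GET, so the code runs, and then deletes the saved copy.

```shell
# Fetch the reminder URL every 15 minutes; the downloaded file is
# removed automatically after each run, so nothing accumulates.
*/15 * * * * wget -q --delete-after http://example.com/reminders.php
```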
I tried -o- but I didn't get any mail sent with this.
Currently I'm trying the -nc (no-clobber) switch, which should keep wget from overwriting the previous file:
wget -q -nc URL
The wget manual can be found here:
http://www.gnu.org/s[...]al/wget.html
I think --delete-after is the right option.
“-O - > /dev/null” can be shortened to “-O /dev/null”
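Putting that shortened form into a crontab entry might look like this (the URL and schedule are placeholders):

```shell
# -O /dev/null discards the page body directly, no shell redirect needed
*/15 * * * * wget -q -O /dev/null http://example.com/reminders.php
```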