The script takes a URL as a parameter and downloads the content behind it into a file, with both the target directory on the filesystem and the filename also passed as parameters. (Example usage: [login to view URL] [login to view URL] /tempdirectory/ [login to view URL])
Each time the script starts, it must use a different IP address, so that the webserver cannot easily recognize that repeated downloads come from the same source when the script is run multiple times (no Tor).
It must also be possible to include cookies and header information in the request. It may therefore be better to use PhantomJS or a similar tool rather than programming the request and download yourself.
The script must be delivered as source code and work in the macOS terminal. Perl, Python, or Java would be ideal, but other languages may be acceptable (please ask).
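A minimal sketch of the requested behavior in Python (one of the languages the posting names). The helper names, the header and cookie values, and the proxy handling are illustrative assumptions, not part of the original spec; IP rotation in particular would need an external proxy pool or VPN reconnect, which is only noted in a comment here.

```python
import os
import sys
import urllib.request


def build_request(url, headers=None, cookies=None):
    # Attach optional extra headers and cookies to the request,
    # as the posting requires. Values passed in are caller-chosen.
    req = urllib.request.Request(url)
    for name, value in (headers or {}).items():
        req.add_header(name, value)
    if cookies:
        cookie_str = "; ".join(f"{k}={v}" for k, v in cookies.items())
        req.add_header("Cookie", cookie_str)
    return req


def target_path(directory, filename):
    # Combine the directory and filename parameters into the output path.
    return os.path.join(directory, filename)


def download(url, directory, filename, headers=None, cookies=None):
    # NOTE: rotating the source IP per run is NOT implemented here; it
    # would require an external proxy pool (e.g. via urllib's
    # ProxyHandler) or reconnecting the network link between runs.
    req = build_request(url, headers, cookies)
    with urllib.request.urlopen(req) as resp, \
            open(target_path(directory, filename), "wb") as out:
        out.write(resp.read())


if __name__ == "__main__":
    # Usage: python fetch.py <url> <directory> <filename>
    url, directory, filename = sys.argv[1:4]
    download(url, directory, filename,
             headers={"User-Agent": "Mozilla/5.0"},   # example header
             cookies={"session": "example"})          # example cookie
```

For pages that require JavaScript execution, PhantomJS (as the posting suggests) or a headless browser would replace the plain HTTP request above; the directory/filename handling would stay the same.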
Hi, I can provide the solution using Perl + PhantomJS; you can run it from the macOS terminal. Relevant Skills and Experience: 10+ years in Perl development. Proposed Milestones: $155 USD - Perl scraper (+PhantomJS support)