I have a Word document with hyperlinks to 37 pages on [login to view URL] (USA).
For the first page (of 37), your web scraping code would need to:
(1) Copy all the images on the first page; then
(2) Open EACH image (on the first page) and copy all the images on the subsequent page that opens; then
(3) Go to the second page (of 37) and repeat process from step (1)
Assuming there are 100 images on each of the 37 pages AND 100 images on each of the subsequent pages, this would be approximately 100 x 37 x 100 = 370,000 images
(37 folders, one per page, with approximately 10,000 pictures inside each folder)
Here is the first page:
[login to view URL]=african%7Ctyped&term_meta=american%7Ctyped&term_meta=art%7Ctyped
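The two-level crawl described in steps (1)-(3) can be sketched roughly as below. This is a minimal stdlib-only sketch, not a finished deliverable: the real listing URLs are redacted above ("[login to view URL]"), so `PAGE_URLS` is a placeholder, and on the actual site the `<a>` links would need to be filtered down to just the image detail pages.

```python
# Sketch of the two-level image crawl: for each of the 37 listing pages,
# save its images, then open every linked detail page and save those too.
import os
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

# Placeholder: replace with the 37 real links from the Word document.
PAGE_URLS = [f"https://example.com/page/{i}" for i in range(1, 38)]

class LinkImageCollector(HTMLParser):
    """Collects absolute <img src> and <a href> URLs from one HTML page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.images, self.links = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and attrs.get("src"):
            self.images.append(urljoin(self.base_url, attrs["src"]))
        elif tag == "a" and attrs.get("href"):
            self.links.append(urljoin(self.base_url, attrs["href"]))

def parse_page(html, base_url):
    collector = LinkImageCollector(base_url)
    collector.feed(html)
    return collector.images, collector.links

def download(url, folder):
    os.makedirs(folder, exist_ok=True)
    name = url.rsplit("/", 1)[-1] or "image"
    with open(os.path.join(folder, name), "wb") as f:
        f.write(urlopen(url, timeout=30).read())

def crawl():
    for n, page in enumerate(PAGE_URLS, 1):  # step (3): repeat for all 37 pages
        folder = f"page_{n:02d}"             # one folder per listing page
        images, links = parse_page(urlopen(page, timeout=30).read().decode(), page)
        for img in images:                   # step (1): images on the listing page
            download(img, folder)
        for link in links:                   # step (2): open each linked detail page
            sub_images, _ = parse_page(urlopen(link, timeout=30).read().decode(), link)
            for img in sub_images:
                download(img, folder)

if __name__ == "__main__":
    crawl()
```

At the stated scale (~370,000 downloads) a real implementation would also want retries, rate limiting, and de-duplication of already-downloaded URLs.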
If you have any questions, please feel free to ask.
Hi, I can automate this on my side and provide you with a folder containing all the downloaded pictures. Do you have any naming preferences for the pictures, e.g. [login to view URL], etc.?
Dear Sir/Madam, I can scrape the images into the 37 folders of roughly 10,000 pictures each, as described. If you have questions or doubts about anything, please feel free to ask me. Sincerely, Mir
Hi, I understood the task perfectly. Before starting work, I will show you a sample, and I guarantee the data will be accurate. I am a highly experienced web scraping expert. Please contact me. Thank you.
I will build an automated script that recursively scans every URL and downloads every image it finds. I have experience with this type of project. Contact me so we can discuss the details. Thanks.
Everything you want done can be achieved with Python and its Scrapy framework; if the site is too AJAX-heavy, then with multi-threaded headless browsers that emulate human interaction.