Looking for a way to save a whole site in its entirety and keep its functions working on Android
I'll be stuck in areas with low or no internet, and having a way to save a whole website (such as a small community wiki) to browse while bored would be very nice. Ideally features like search would keep working. Any suggestions for a FOSS app that can do this?
You can't. How do you imagine saving the SQL databases you need for logging in, viewing user profiles and so on? At most, you can save a snapshot.
I've used wget to download static sites, or at least ones with simpler JavaScript, but it won't fetch files that are only referenced from JS code, so it probably won't work for many sites.
You also need to be careful when spanning hosts so that you don't accidentally (attempt to) download the entire internet. Then there's rate limiting, the user agent, the robots file, filename restrictions (so it doesn't save files with characters that have other meanings in URLs, like # and ?), filename extensions (so files get served back with the correct mimetype), taking filenames from the server rather than the URL when appropriate, converting links (which only works in HTML files), and probably something else I'm forgetting (roughly the flags in the example below).
Oh, and wget is a single process doing one request at a time, so even a single page with a lot of images will take ages. E.g. http://cyber.dabamos.de/88x31/ (currently offline).
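For reference, the kind of invocation I end up with looks roughly like this; the URL, wait time and user agent string are placeholders, so treat it as a starting point rather than a recipe:

```
# --mirror                        recursive download with timestamping
# --page-requisites               also grab the CSS/JS/images each page needs
# --convert-links                 rewrite links so pages work when opened locally (HTML only)
# --adjust-extension              add .html/.css so extensions match the mimetype
# --restrict-file-names=windows   keep ? and similar URL characters out of filenames
# --content-disposition           take the filename the server sends when it sends one
# --wait / --random-wait / --limit-rate   basic politeness and rate limiting
# wget honours robots.txt by default; -e robots=off would override that.
# If you really need to follow links to other hosts, add -H together with
# --domains=... so the crawl stays bounded.
wget --mirror --page-requisites --convert-links --adjust-extension \
     --restrict-file-names=windows --content-disposition \
     --wait=1 --random-wait --limit-rate=500k \
     --user-agent="offline-mirror/0.1" --no-parent \
     https://wiki.example.org/
```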
You can then easily serve the mirror with NGINX, or just browse it as plain files, though the latter may not work well on something like a phone. One more thing: image.jpg and Image.jpg would conflict on Android's shared storage, and some websites do have files that differ only in case, so the mirror can only be stored inside Termux's own directory (and served with NGINX running in Termux).
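If anyone wants the Termux part spelled out, it's roughly this; the package name is right for current Termux as far as I know, but the config path and port are from memory, so check them on your install:

```
# inside Termux, no root needed
pkg install nginx

# Point nginx at the mirror: edit $PREFIX/etc/nginx/nginx.conf and set
#   root /data/data/com.termux/files/home/mirror;   # wherever wget put it
# in the default server block (it listens on an unprivileged port, 8080 iirc).

nginx           # start the server
# then open http://127.0.0.1:<port from nginx.conf> in the Android browser
nginx -s stop   # stop it when you're done
```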
You can download the website's static files (HTML, CSS, images, etc.), but a feature such as search won't work if it's implemented by querying some database on the server.
IIRC most browsers have a way to make websites available offline. I know Chromium has it, but Firefox does not; you'd probably need an extension for that. Or you can download the static files, store them in a directory manually, and then open the index.html in Firefox. That should work.
@marcie@asudox On Firefox, can you try Ctrl+S and choose the complete web page save? It's usually enough. Though if the site calls an API for search, that's not gonna work.
Kiwix isn't exactly a web browser and doesn't download web pages the way your browser saves them. It uses a specialized file format (ZIM), and it can be used to back up an entire site. For instance, the Kiwix library has an offline copy of Wikipedia (no images), but it weighed in at more than 100 GB last I looked.
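Not strictly needed on Android (the Kiwix app opens ZIM files directly), but if you ever want to read one from a regular browser, kiwix-tools ships a small HTTP server; the file name here is just an example:

```
# serves the ZIM over HTTP, including full-text search for ZIMs that ship an index
kiwix-serve --port=8080 wikipedia_en_all_nopic.zim
# then browse it at http://127.0.0.1:8080
```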