It seems that the pagination of report results might be adding processing time rather than saving it. I thought the purpose of paginating the results was to improve processing time, but when I run a hefty report (say, against the statistics table), it takes a long time to process 20 results, and then the same amount of time again for each page I switch to. It feels like it is gathering the entire result set and then only carving out the number of results it is supposed to show.

I think pagination is only beneficial if the report is only processing the range of results being shown; otherwise, it might as well show all results. For example, a report of 1000 results paginated in groups of 20 should, in theory, process each page in about 1/50th of the time it would take to process all 1000 results. But it feels like it is taking the time to process all 1000 results for every page.

I get the logistics of this. In order to break the results into pages that are sorted, grouped and ordered properly, the whole result set needs to be taken into consideration, then broken down. But with each page change, it has to do all of that over again. It seems that when the report is run, the results should be stored in a way that lets the paging refer to the stored results rather than re-running the report. I don't think there is a way to run the report for only the current page's section of results, so the report needs to run in its entirety initially, but then we should have a more efficient way of stepping through the pages of results.
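To illustrate what I think is happening on each page change (this is just my rough guess at the mechanics, written as Python-style pseudo-code, not the actual Koha code):

    # Hypothetical sketch of the current behaviour as I understand it: the
    # report SQL is re-run with a LIMIT/OFFSET for every page, so the database
    # repeats all of the sorting/grouping work each time; only the fetch of
    # the visible rows is cheap.
    def get_report_page(dbh, report_sql, page, per_page=20):
        offset = (page - 1) * per_page
        cursor = dbh.cursor()
        cursor.execute(f"{report_sql} LIMIT {per_page} OFFSET {offset}")
        return cursor.fetchall()  # only this part is limited to per_page rows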
Overall, it sounds like you understand the existing process, but let me try to break it down simply. Each time you load a report page, the SQL query is run. With result paging/limiting, only a limited number of results are returned. It takes X time to run the SQL query and Y time to actually fetch Z results. The more results Z, the longer the fetch time Y; the query time X stays relatively constant. So the more results on a page, the longer the page takes to load, and the paging does help with page-load time. (NOTE: one of the problems with this is that if your results are actively changing, the paging will be inaccurate, because each run considers the new result set.)

Your idea is interesting. It would still take X time to run the SQL query and Y time to fetch Z results, so fetching all the results would still be slow initially. But you're right: *in theory*, if we run the report once and then page through the stored result set, the paging will be fast (and, as per the note above, it will actually be perfectly accurate). See the rough sketch at the end of this comment.

Some obstacles here are storage format and storage space. The current report strategy has very little memory or storage overhead, because it only handles a small amount of data at a time. If we stored every report result, that could add up quickly and bloat database sizes surprisingly fast. And since a report can be made of practically infinite permutations of columns, we couldn't store the result in a tabular database format; it would have to be something like JSON. Traditionally, JSON isn't optimized for database storage. MySQL and MariaDB both have a JSON data type: MySQL claims theirs is optimized for storage, while MariaDB says theirs is just an alias for LONGTEXT and claims its benchmarks show it's roughly as performant as MySQL's, but... we'd have to do our own testing there. What I mean is that it could be slow to re-parse the whole stored result set on every page load just to extract the one page being shown. That said, it would be very cool to be able to refer back to a particular report run without having to download it to CSV and manage it offline.

So overall... there are pros and cons to both approaches. It's an interesting idea, though. I think I have seen other library systems that store the results of their reports. We could also make it an optional setting and put guard rails around it, e.g. preventing result sets beyond a certain size from being saved, to keep storage under control.

Personally, I can't see this happening without sponsorship though.
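For what it's worth, here is a very rough sketch of what the "run once, store, then page through the stored copy" idea could look like (Python-style pseudo-code; the saved_report_results table and the function names here are made up for illustration, nothing in this sketch is existing Koha code):

    import json

    def run_and_store_report(dbh, report_id, report_sql):
        # Pay the full query + fetch cost once, then keep the whole result
        # set as JSON, since the columns vary from report to report.
        cursor = dbh.cursor()
        cursor.execute(report_sql)
        rows = [list(row) for row in cursor.fetchall()]
        cursor.execute(
            "INSERT INTO saved_report_results (report_id, results) VALUES (%s, %s)",
            (report_id, json.dumps(rows)),
        )
        dbh.commit()

    def get_stored_page(dbh, report_id, page, per_page=20):
        # Paging is now just a slice of the stored copy, so it is fast and
        # perfectly stable -- but the whole JSON blob gets re-parsed on every
        # page load, which is the part we'd want to benchmark.
        cursor = dbh.cursor()
        cursor.execute(
            "SELECT results FROM saved_report_results WHERE report_id = %s",
            (report_id,),
        )
        rows = json.loads(cursor.fetchone()[0])
        start = (page - 1) * per_page
        return rows[start:start + per_page]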