Began refactoring filesystem data structures to address lump name collision issues:
Do not assume files are in WAD or ZIP format based on their names.
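
For illustration only, format recognition along these lines can key off the file header rather than the name (the function and enum names below are hypothetical, not the engine's actual API):

    #include <cstdio>
    #include <cstring>

    enum ResourceFormatSketch { RFS_UNKNOWN, RFS_WAD, RFS_ZIP };

    // Guess the container format from the first four bytes rather than the
    // file name. WADs begin with "IWAD" or "PWAD"; Zips begin with the
    // "PK\3\4" local file header signature. Anything else would be handled
    // as a single-lump file.
    static ResourceFormatSketch guessFormat(FILE* file)
    {
        char magic[4];
        size_t numRead = fread(magic, 1, 4, file);
        fseek(file, 0, SEEK_SET); // rewind so the real reader starts at the top
        if(numRead != 4) return RFS_UNKNOWN; // too small to be a WAD or Zip

        if(!memcmp(magic, "IWAD", 4) || !memcmp(magic, "PWAD", 4)) return RFS_WAD;
        if(!memcmp(magic, "PK\x03\x04", 4)) return RFS_ZIP;
        return RFS_UNKNOWN;
    }
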
All resource data files (WAD, ZIP and single lump) are now managed transparently with de::LumpDirectory data structures (these replace the Wad- and Zip-module-specific lump/file list structures).
All resource data files are now linked into a single list of opened files in reverse load order (including those opened using the auxiliary mechanism), allowing us to perform a logical inversion of the load process. This also means that ccmd "listfiles" now lists Zips as well.
Files in Zips now use the same public filesystem references as those in Wads.
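
Roughly, the shape of the new bookkeeping looks like the following sketch (all names and members are illustrative only, not the actual de::LumpDirectory interface):

    #include <list>
    #include <string>
    #include <vector>

    class AbstractFileSketch; // stand-in for the WadFile/ZipFile/LumpFile classes

    // One record per published lump, regardless of container format. The lump
    // number handed to the rest of the engine is simply an index into this
    // table, so lumps in Zips use the same public references as those in Wads.
    struct LumpRecordSketch {
        std::string path;              // logical name/path of the lump
        AbstractFileSketch* container; // file the lump was published from
        size_t baseOffset;             // offset of the lump data in the container
        size_t size;
    };

    struct LumpDirectorySketch {
        std::vector<LumpRecordSketch> lumps; // lump number == index
    };

    // All opened resource files, newest first. Keeping this single list in
    // reverse load order makes unloading a logical inversion of loading, and
    // ccmd "listfiles" can simply walk it (which is why Zips now appear too).
    std::list<AbstractFileSketch*> openFiles;
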
Implemented file-level lump caching facilities for ZipFile and LumpFile.
Fixed: Stabilized the pre-prune sort used to remove duplicate files and extended it to support pruning duplicates from the same source file (i.e., it now respects Wad lump directory search logic, as later files from the same package override earlier ones). This could stand some closer attention; perhaps use another sorting algorithm entirely (a unique merge/radix sort which respects load order indexes?).
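
As a rough sketch of the intent (hypothetical types, not the actual implementation): sort records by path with the load order index as tie-breaker, then keep only the last record of each equal-path run, so the latest-loaded duplicate wins.

    #include <algorithm>
    #include <string>
    #include <vector>

    // Illustrative only; the real records carry more information
    // (container file, offsets, etc.).
    struct PruneRecordSketch {
        std::string path;  // lump/file path used for duplicate detection
        int loadOrder;     // global load order index (higher == loaded later)
    };

    static bool comparePathThenLoadOrder(const PruneRecordSketch& a,
                                         const PruneRecordSketch& b)
    {
        if(a.path != b.path) return a.path < b.path;
        return a.loadOrder < b.loadOrder; // later loads sort toward the end
    }

    // Keep, for each path, only the latest-loaded record (so later files from
    // the same package override earlier ones, mirroring Wad search logic).
    static void pruneDuplicates(std::vector<PruneRecordSketch>& records)
    {
        std::stable_sort(records.begin(), records.end(), comparePathThenLoadOrder);

        std::vector<PruneRecordSketch> kept;
        for(size_t i = 0; i < records.size(); ++i)
        {
            // If the next record has the same path it was loaded later,
            // so this one is a duplicate to be pruned.
            if(i + 1 < records.size() && records[i + 1].path == records[i].path)
                continue;
            kept.push_back(records[i]);
        }
        records.swap(kept);
    }
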
Fixed: Pruning duplicate files left the Wad lump directory in an invalid state.
Optimize: Publish Zip files to the ZipLumpDirectory in a batch rather than one at a time.
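
Schematically (hypothetical interface), batch publication lets the directory grow and re-index once per archive instead of once per lump:

    #include <vector>

    struct ZipLumpInfoSketch { /* path, offset within archive, size, ... */ };

    struct ZipLumpDirectorySketch {
        std::vector<ZipLumpInfoSketch> records;

        // Batch publication: reserve and append the whole set, then rebuild
        // any derived indexes (e.g. a name lookup) a single time.
        void catalogLumps(const std::vector<ZipLumpInfoSketch>& lumps)
        {
            records.reserve(records.size() + lumps.size());
            records.insert(records.end(), lumps.begin(), lumps.end());
            // ...rebuild the name lookup index here, once...
        }
    };
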
Optimize (minor): Prune ranges of duplicates rather than pruning them one at a time.
Optimize (minor): When attempting to buffer data from a lump published from a ZipFile/WadFile/LumpFile, try to reuse a cached copy, potentially avoiding a file system read.
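
In outline, the lump cache reuse is a read-through lookup; a minimal sketch with hypothetical names (the real ZipFile/WadFile/LumpFile cache interfaces differ):

    #include <map>
    #include <vector>

    typedef std::vector<unsigned char> LumpDataSketch;

    // Minimal read-through cache: lump index -> cached copy of its data. The
    // real per-file caches also have to cooperate with zone memory purging.
    class LumpCacheSketch
    {
    public:
        // Return the lump's data, reusing a cached copy when present and
        // thereby avoiding another file system read.
        const LumpDataSketch& read(int lumpIdx)
        {
            std::map<int, LumpDataSketch>::const_iterator found = _cache.find(lumpIdx);
            if(found != _cache.end())
                return found->second;                       // cache hit: no read
            return _cache[lumpIdx] = readFromFile(lumpIdx); // miss: read and keep
        }

    private:
        // Stand-in for the actual seek-and-read from the Wad/Zip/lump file.
        LumpDataSketch readFromFile(int lumpIdx)
        {
            (void) lumpIdx;
            LumpDataSketch data;
            // ...seek to the lump's base offset and read 'size' bytes...
            return data;
        }

        std::map<int, LumpDataSketch> _cache;
    };
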
Todo:
Searching for files in Zips by name is presently O(n).