was: current work: par2+file manager/indexing server (v2.0)
something along those lines.
the focus is mainly on quality and partly on optimal performance: when it comes to the calculations, at least as fast as quickpar, but this is like a multi-set quickpar embedded into the newsreader and processing data in real time - not a cheap solution like running par2.exe or its code on probable sets, as is done in many unpackers - so the implementation will take a little time. a user-driven mode will also be possible.
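as a rough illustration of the real-time idea (all names and the block layout here are hypothetical, not the actual implementation): instead of re-reading finished files the way an external par2.exe run would, the downloader can hash each par2-sized block the moment enough bytes have arrived, so verification is essentially free by the time the file is saved.

```python
import hashlib

# hypothetical sketch: verify par2 source blocks incrementally as article
# data arrives. in a real implementation the block size and the per-block
# MD5 digests would come from the par2 main/IFSC packets.
class StreamingVerifier:
    def __init__(self, block_size, expected_md5s):
        self.block_size = block_size
        self.expected = expected_md5s   # per-block digests from the par2 set
        self.buf = b""
        self.index = 0
        self.ok_blocks = []

    def feed(self, data):
        """hash complete blocks as soon as enough bytes have arrived."""
        self.buf += data
        while len(self.buf) >= self.block_size:
            block = self.buf[:self.block_size]
            self.buf = self.buf[self.block_size:]
            self._check(block)

    def finish(self):
        """flush the final, possibly short, block (par2 pads with zeros)."""
        if self.buf:
            self._check(self.buf.ljust(self.block_size, b"\0"))
            self.buf = b""
        return self.ok_blocks

    def _check(self, block):
        digest = hashlib.md5(block).digest()
        self.ok_blocks.append(digest == self.expected[self.index])
        self.index += 1
```

the point of the sketch is only that verification state lives alongside the download, so a damaged block is known immediately instead of after a separate full-file pass.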
since for par2 we need to track downloaded files, i've raised this to the interface level as well, so it may eventually become a kind of file manager.
i've resumed work on the data structures. i realized a redesign is needed, and since development is still at the basic data structure level with only some interface elements, it is easier to do now.
par2 contains an ID, so par2 files or source files can be handled even when destined for different save locations, but i've decided to design with the artificial restriction that a par2 file set (or any kind of multipart file) must be contained in a single save directory, otherwise the interface becomes too cumbersome (several sets can still share the same directory, of course).
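the single-directory restriction amounts to keying everything by the pair (save directory, set id) rather than by set id alone - a minimal sketch, with invented names, just to show the grouping rule:

```python
from collections import defaultdict

# hypothetical sketch of the restriction: several sets may live in one
# directory, but one set can never span two directories - the same set id
# seen in two directories is simply treated as two independent sets.
class SetRegistry:
    def __init__(self):
        self.sets = defaultdict(list)   # (save_dir, set_id) -> file names

    def add_file(self, save_dir, set_id, name):
        self.sets[(save_dir, set_id)].append(name)

    def files_of(self, save_dir, set_id):
        return self.sets[(save_dir, set_id)]
```

keying by the pair is what keeps the interface simple: every set maps to exactly one directory view.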
we've exhausted the v1.x version numbers, so these features will appear in v2.0.
the basic data structure proved to be inadequate (too abstract, most likely difficult to expand later), so i'm redoing it from scratch; i can still reuse most of the already written code.
as to the server, i wrote a squeezing function (not applied yet) which would be needed if another server retention upgrade takes place. with UE it is automatic, but for the server it is a long procedure: i tested it on a small sample, but i wouldn't risk running it on the current database (if applied, it would decrease the server database full backup time). i will do it only on a spare computer when one is available, and i'm not even sure how much time it will take (it is mostly disk read/write operations). even without the function the server database is gradually shrinking to its optimal size, but with increased retention the natural process might take too long.
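a squeeze pass of this kind is typically a sequential copy of live records into a fresh file, which is why it is disk-bound and slow on a large database. a minimal sketch under invented assumptions (the record layout here - 1-byte live flag, 4-byte length, payload - is purely illustrative, not the actual database format):

```python
import os
import struct
import tempfile

def squeeze(path):
    """copy live records into a temp file, then atomically swap it in,
    reclaiming the space left by deleted records. layout is hypothetical:
    each record is [1-byte live flag][4-byte LE length][payload]."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with open(path, "rb") as src, os.fdopen(fd, "wb") as dst:
        while True:
            header = src.read(5)
            if len(header) < 5:
                break
            flag = header[0]
            length = struct.unpack("<I", header[1:5])[0]
            payload = src.read(length)
            if flag == 1:           # keep live records only
                dst.write(header)
                dst.write(payload)
    os.replace(tmp, path)           # atomic swap of the compacted file
```

note the pass needs roughly as much free space as the live data and touches every byte once on read and once on write, which matches the "mostly disk read/write operations" observation above.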
i've added some basic interface elements.
next maybe i'll add the repair/unpacking code. since the resource-heavy functions will run in a different thread (the same thread which saves attachments, to avoid too much disk thrashing), synchronization is quite important, but to finalize the synchronization model i need the working functionality that is to be synchronized.
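the threading model described above - one worker thread owning all disk-heavy work so saving and repair/unpack never compete for the disk - can be sketched roughly like this (class and method names are invented for illustration):

```python
import queue
import threading

# hypothetical sketch: a single worker thread handles both attachment
# saving and the heavy repair/unpack jobs, so the two kinds of
# disk-intensive work are serialized instead of thrashing the disk.
# the interface thread only enqueues jobs and never blocks on them.
class DiskWorker:
    def __init__(self):
        self.jobs = queue.Queue()
        self.done = []
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def submit(self, name, func):
        self.jobs.put((name, func))

    def stop(self):
        self.jobs.put(None)     # sentinel: drain queue, then exit
        self.thread.join()

    def _run(self):
        while True:
            job = self.jobs.get()
            if job is None:
                break
            name, func = job
            func()                  # save attachment, verify, repair, unpack...
            self.done.append(name)  # completion would be reported to the UI
```

serializing through one queue is also what makes the synchronization question tractable: the interface thread and the worker only share the queue and the completion notifications.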
i've realized that to follow the right implementation order i need to add task manager (article/save queues) to unpack integration first - to know when to trigger unpack jobs - so in a sense the scope is still widening.
this is the part which is not possible with separate unpacker programs.
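the save-queue to unpack-job link can be sketched as follows (names are hypothetical): the task manager counts the files still pending per set and fires the unpack job the moment the last one is saved - something an external unpacker watching a directory cannot know reliably.

```python
# hypothetical sketch of the trigger: per-set bookkeeping of files not yet
# saved; when the pending set empties, the unpack job for that set starts.
class TaskManager:
    def __init__(self):
        self.pending = {}       # set_id -> files not yet saved
        self.triggered = []     # set_ids whose unpack job has fired

    def queue_set(self, set_id, files):
        self.pending[set_id] = set(files)

    def file_saved(self, set_id, name):
        self.pending[set_id].discard(name)
        if not self.pending[set_id]:
            self.triggered.append(set_id)   # all parts on disk: start unpack
```
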
it is now one month since i started working on this topic very closely, freezing the rest of the project in the meantime.