way to limit server fetching to n days?

BearCan
Posts: 14
Joined: Wed Jul 30, 2008 4:32 pm

way to limit server fetching to n days?

Post by BearCan »

Is there a way to tell a specific server not to try and fetch items older than n days?

In my case I want the server not to try fetching from any group after 15 days.

Currently I use the detach-newsgroup-from-server feature.
This works, but it's very messy.

I also use import groups to avoid a server, but that is messy as well.
alex
Posts: 4515
Joined: Thu Feb 27, 2003 5:57 pm

Post by alex »

In edit menu->properties->newsgroups you can limit the server retention.

As for when the option takes effect:

In edit menu->properties->general, if "let all incoming headers through" is checked (it is by default), it will download all headers initially (the only reason is so you don't wait too long until something appears), and then it will limit headers to the retention when the newsgroup is loaded the next time and on subsequent header downloads, as long as the header range or the headers in the newsgroup have not been reset.

I'm considering a refinement of the feature given increased server retentions.
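
Roughly, the behavior described above could be pictured like this (an illustrative Python sketch, not UE's actual code; the option and field names here are assumptions):

import datetime

def headers_to_keep(headers, retention_days, let_all_headers_through, first_load):
    # The first header download lets everything through so the group fills up
    # quickly; later loads drop headers older than the per-group retention.
    if let_all_headers_through and first_load:
        return list(headers)
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=retention_days)
    return [h for h in headers if h["date"] >= cutoff]

In other words, only the very first header pass is unfiltered; every later pass trims to the retention window.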
BearCan
Posts: 14
Joined: Wed Jul 30, 2008 4:32 pm

Post by BearCan »

Thanks Alex,

It's my understanding this will limit all servers for that group, not just one server. I still need the headers so the others can try to fetch when it's their turn.

I have access to a distribution server. Its primary function is server-to-server transfer, not the traditional Giganews-style access. It's superfast, 80% complete, low retention, and most importantly it's free.

So I grab what I can. The bodies then typically sit around for up to 3 weeks before they reach the point in the queue the other servers are processing at. Right now I'm averaging around the 150-day mark.

Typically I turn off the other servers and process this one server. Then I turn it off and the others back on so processing can go on as usual. During this period I lose download time where I'm backed up the most. If I don't do this enable/disable swapping, the one server ends up getting hammered trying to fetch articles I know are not there. Not a good thing.

There is a side problem to all this as well. If I shut down UE after one server has been tried but not the others, UE will retry the articles it previously knew were not on that server when I start up again. Why does UE drop this data when exiting?

A partial solution would be to give us the ability to retag what was lost. In fact, this would also eliminate my need to set a day limit on a server, since I could just tag articles I know a server isn't carrying anymore.
alex
Posts: 4515
Joined: Thu Feb 27, 2003 5:57 pm

Post by alex »

Do you have "retry failed articles every x minutes" in properties->tasks set to something?

If the field is empty, errors should be preserved as errors after a restart.

Even if you don't download headers from that server and bombard it with requests for articles it doesn't have in 20% of cases, it should still be OK.

"No such article" errors shouldn't slow anything down, since it will just pass missing articles to other servers faster than they are able to download them.

Consider the two boundary cases: if this server has all the articles, you prefer to download from it and nothing is passed to the other servers; if this server doesn't have any articles, it may well reject them faster than the other servers are able to download them, and since the bandwidth spent on errors is insignificant, you get the same download time as if the server weren't there at all.
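
To picture the pass-along behavior (a hypothetical Python sketch; the server list, priorities and the try_fetch() helper are made up for illustration and are not UE internals):

def try_fetch(server, article_id):
    # Stand-in for a real NNTP article request; returns (body, status_code).
    # Here it just pretends the server is missing the article.
    return None, 430

def fetch_article(article_id, servers):
    # Try servers in priority order; a quick "430 no such article" reply just
    # hands the request to the next server in line at negligible bandwidth cost.
    for server in sorted(servers, key=lambda s: s["priority"]):
        body, status = try_fetch(server, article_id)
        if status == 430:   # no such article on this server, move on
            continue
        if body is not None:
            return body
    return None             # no server had the article

The point is that a 430 reply costs almost nothing, so missing articles simply queue up on the other servers.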
NoNo
Posts: 80
Joined: Sun Aug 10, 2003 9:50 pm
Location: France

Post by NoNo »

From what I understand of what BearCan wants and how UE seems to work, the only way at the moment it can work that way is to use duplicate newsgroups.
Basically:
- 1 group with a short retention (15 days) attached to the distribution server (high priority, strict)
- 1 duplicate of the same group with a longer retention attached to the other server(s) (normal or lower priority than the distribution server)
- 1 virtual newsgroup containing those 2 simple groups, which you start your downloads from
(In fact I used 1 group with natural retention & article numbers and 1 duplicate with a fixed longer retention & compact binary.)

As long as your headers are up to date it will only download from the distribution server:
- if the wanted articles are still within the short retention range
- unless the articles no longer exist on the distribution server, in which case it will go to the other server(s)
- or if the wanted articles are past the short retention, it will download only from the other server(s) and no request will be made to the distribution server.

Of course that won't help you much if you want all your newsgroups to behave that way, because of the work to put it in place and maybe the extra space required by UE.
Might as well wait for Alex to refine something along those lines :wink:
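
As a rough sketch of the routing this setup produces (illustrative only; the 15-day figure and the choose_source() helper are assumptions, not how UE works internally):

SHORT_RETENTION_DAYS = 15  # retention of the copy attached to the distribution server

def choose_source(article_age_days, still_on_distribution):
    # Which copy of the duplicated group ends up serving the request.
    if article_age_days <= SHORT_RETENTION_DAYS:
        if still_on_distribution:
            return "distribution server (short-retention copy, high priority strict)"
        return "other server(s)"  # distribution no longer carries it
    # Past the short retention the distribution copy has no header for the
    # article, so no request is sent to that server at all.
    return "other server(s) only"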
alex
Posts: 4515
Joined: Thu Feb 27, 2003 5:57 pm

Post by alex »

Yes, duplicate groups are the most efficient way to handle that, compared e.g. to adding separate retention per server as a feature and then using a newsgroup type other than compact binary.

I was just in the middle of thinking about adding something related to retention among other potential developments for v2.5, but then I decided to close all known issues first, so as not to add something which may require further adjustments later, given that servers don't offer an easy way to determine the right cutoff point. Then again, maybe I was thinking of something different :)
BearCan
Posts: 14
Joined: Wed Jul 30, 2008 4:32 pm

Post by BearCan »

Seems I had Retry Failed Tasks set to blank. I set it to 999. I'm aware a value this high will force me to use the menu Retry option on selected articles, but at least the retry data won't get lost when exiting.

Duplicate newsgroup?!? :idea: There's a thought. This might work.
Disk space is an issue I can deal with.
I believe NoNo wants the same feature, since he devised a workaround, which I'm going to give a try. I like the idea of natural retention instead of a fixed range.

Just a couple of notes.
One reason I try to avoid hammering is that the server will lock out the connection by getting lost in timeout land. The server-side timeout, for whatever reason, takes about 20 minutes. I use a connect delay of 1-5 seconds, but this only delays the lockout from happening. I think it's more of a hardware bug somewhere than software. Either way, a couple thousand failed articles in a row triggers the lockout.

To show just how good UE is:
My box has 383 MB of RAM on a 550 MHz Celeron :oops:. "DB is too large to fit in memory" is unchecked. As long as I only load 1 NG at a time, everything runs smooth as silk. More than one NG puts me into disk-swap lag. The Articles tab shows 350 GB queued, which is to say it's very large. (:wink:, and yes this is :evil: even for me. A lot won't actually get downloaded, but at least UE handles it and I know what is available.) I can't use the auto par/rar feature yet due to the CPU. I tend to burn files in rar form along with a few pars anyway, so no big loss. Despite the extreme hardware constraints and large databases, :D UE gives exceptional performance :lol:

I'm in the process of building a 4 GB RAM dual-CPU box. It will be like dying and going to heaven when it comes online.