 Maintaining Permanent Nodes
(Formerly: On the Proper Care and Feeding of Permanent Nodes)
Getting Freenet to work for you isn't necessarily an easy task. There are certain intricacies that aren't obvious, but are very important to having success in making use of Freenet and helping the network function and grow. I'm trying not to embed any topics within other topics, but I'm sure it'll happen anyway so it's best to take the time to read the entire document. None of the information here is to be considered 100% correct. I'm not a Freenet developer and I don't know the inner workings of a single node, or the network at large. Also, as changes are made to Freenet, some of this information may become out of date. I welcome all corrections and suggestions, and can be contacted on IIP, and I monitor the dev mailing list. - Filla Ment
(Copied from Filla Ment's site here: SSK@WxBcPJd1ppZSZ~C8IJI-DHx94oIPAgM/otpcafopn/1//)
GUI configuration tools are great and all, but you need to become familiar with the freenet.ini or freenet.conf. The GUI configuration tools don't present all the options that are in the ini/conf file.
# is a comment character. Anything after # on a line is ignored. # comments give information about what an option does. % is also a comment character. It is put in front of options to say 'ignore what I have, use the default'. If you change an option and the node doesn't seem to recognize the change, check to see whether or not there's a % in front of the option.
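As a quick illustration (the values here are made up; the option names are real ones from the ini/conf), a fragment might look like:

```ini
# Comment lines like this one describe the option below them.
# The % prefix below means 'ignore my value, use the default':
%inputBandwidthLimit=0
# No % prefix, so this value actually takes effect:
logLevel=normal
```

So if a change you made seems to be ignored, that leading % is the first thing to check.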
Before changing an option read the associated comments. Read them again. Make sure what you think the option does matches with what the comment says the option does. If you have to specify a number, make sure you understand what kind of number Freenet is expecting. Hours or seconds? Bits or bytes? Kilo or Mega?
If you change an option, make sure you save the ini/conf file =)
Every so many minutes (between 5 and 10, configurable I think) Freenet re-reads its ini/conf file. This means some changes to the configuration can be made without having to restart the node. All you have to do is wait. Not all options can be updated this way; some require a restart. To find out which, run Freenet with the --onTheFly flag. At the moment, that returns (on unstable):
java -cp freenet.jar freenet.node.Main --onTheFly
logLevel, targetMaxThreads, tfTolerableQueueDelay, tfAbsoluteMaxThreads, inputBandwidthLimit,
outputBandwidthLimit, configUpdateInterval, seednodesUpdateInterval, aggressiveGC,
maxHopsToLive, newNodePollInterval, announcementAttempts, announcementThreads,
initialRequests, initialRequestHTL, requestDelayCutoff, successfulDelayCutoff,
requestSendTimeCutoff, successfulSendTimeCutoff, logLevelDetail
If you need to restart your node for a config update, make sure you've read the Restarting the Node section of this site.
(Comment by Rudi: Currently outputBandwidthLimit definitely doesn't work)
 Workings of the Node
Running the Node
Keeping Freenet going requires a bit of attention. A busy node can require lots of resources and, if not kept in check, bring a lower end machine to its knees. However, as long as you're willing to keep an eye on it, it's not too hard to keep things on track.
Monitoring the Node
The most obvious indicator of how your node is running is the responsiveness of the gateway page on FProxy. If it really drags loading the basic HTML, your node is bogged down. Don't mind the Active Links for the gateway sites; their loading or failing to load isn't necessarily because of your node. If it seems bogged down, you might want to give it a restart.
The next level is to take a peek at the logs. On Linux you can separate the regular messages from the errors; I highly recommend this. Not sure if Windows can do the same thing. Given separate logs, I almost never see anything in the error log unless there's actually a problem. I actually have a script that 'tail's the error file, then scans my FCP, FProxy, and FNP ports respectively to see what's actually running.
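A minimal sketch of that kind of log separation on Linux, assuming your node writes severity-prefixed lines such as ERROR or SEVERE to a single freenet.log (the severity names and file path are assumptions; match them to what your logLevel actually emits):

```shell
#!/bin/sh
# Split one Freenet log into an error stream and everything else.
# The severity names (ERROR, SEVERE) are assumptions; check your own log.
split_log() {
    log="$1"
    grep  -E 'ERROR|SEVERE' "$log" > "${log%.log}-error.log"  || true
    grep -vE 'ERROR|SEVERE' "$log" > "${log%.log}-normal.log" || true
}

# e.g. split_log freenet.log, then keep an eye on it: tail -f freenet-error.log
```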
Restarting the Node
I've given up on trusting that Freenet will be restarted when I run the restart script. I can't speak for Windows, but on my box I often find that the stop-freenet script doesn't manage to kill Freenet, and had I done a restart, I'd now have a second copy of Freenet trying to start. It'll fail because the ports are already in use, but now on Linux the process id file has been overwritten and I have to do a 'killall java', which sucks because I have other Java processes running. Sometimes 'killall java' doesn't even work and Freenet will continue to churn away until I do 'killall -9 java'. What's more fun, sometimes when there are lots of IO-oriented threads running and I do 'killall -9 java' I get a kernel panic. This isn't the fault of Freenet, but Freenet's the only app I've seen cause it. This is significantly reduced using the Sun JVM over the Blackdown JVM, but it's still present with Sun.
The short of it is, when you go to stop or restart the node, just stop it then make sure it stops. It may take a few seconds so tell it to stop, wait at least 15 seconds, then check the process table. If it's still running, try stopping it again, wait, check the process table. Lather, rinse, repeat at your own discretion.
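That stop-wait-check loop can be sketched like this; freenet.node.Main as the process pattern and pkill/pgrep from Linux procps are assumptions, so adapt it to your own setup:

```shell
#!/bin/sh
# Stop a process matching a pattern, verify it actually died, and retry
# instead of trusting a single kill. Last resort is the kill -9 hammer.
careful_stop() {
    pattern="$1"; wait_secs="${2:-15}"
    for attempt in 1 2 3; do
        pkill -f "$pattern" 2>/dev/null
        sleep "$wait_secs"                     # give it time to actually die
        pgrep -f "$pattern" >/dev/null || { echo stopped; return 0; }
    done
    pkill -9 -f "$pattern" 2>/dev/null         # the 'killall -9 java' of last resort
    echo force-killed
}

# e.g. careful_stop freenet.node.Main   (before starting a fresh copy)
```

Unlike 'killall java', matching on the full command line spares your other Java processes.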
 Network Participation
Participating in the network involves getting integrated, routing requests, serving data, and correcting integration loss. The biggest hurdle is initial integration so it gets the greatest bulk of attention here.
If you expect to actually retrieve much of anything on the network, your node needs to be integrated into the network. Integration basically means that your node can reliably talk to several other nodes and several other nodes can reliably contact and talk to yours.
The easiest way to keep your node from getting integrated is leaving it set transient. I believe transient is off by default, but check and make sure in your ini/conf. In fact, screw the default and explicitly set it to transient=false.
Next, if you're behind NAT, you need to forward your FNP port to the machine hosting your node. Do NOT, you hear me? Do NOT set DMZ for your machine unless you really like the machine your node runs on getting messed with by leet skript kiddiez. This is not a problem with Freenet; it's the fact that DMZ disregards all protection provided by NAT for that machine. Make sure the port you forward corresponds to the FNP port set in your config. By default it's some random 5-digit port, NOT 8481 or 8888. This is less important than it used to be now that bidirectional connections are implemented en masse, but it will still make a real difference in how well your node can participate on the network.
Third, if you don't have a static IP, use Dynamic DNS of some sort, like what's provided by www.dyndns.org . If you have a broadband router, I recommend using ddclient to keep your address up to date. Ddclient can just read the address from your router, so it's not network-spammy, and you can set it to check every couple minutes if you want. Home-built routers like the ones you can build with SmoothWall should have a built-in dynamic DNS updater that updates your address as soon as your IP changes. In your ini/conf, specify the name address instead of an IP. This will help other nodes follow your node if an IP changes.
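In the ini/conf that looks something like the following; note that the option name ipAddress is from memory and may differ in your version, so check the comments in your own file:

```ini
# a DNS name instead of a raw IP lets other nodes follow you across IP changes
ipAddress=mynode.dyndns.org
```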
Every once in a while your update scripts will give you an empty seednodes.ref. This can obviously be a problem, so make sure you have a proper seednodes.ref after you update. Once your node starts to talk on the network, it begins keeping a private database of nodes it knows of, so the seednodes.ref becomes less important. Seednodes.ref is never unimportant though, because all the nodes your node knows about could go down. See the maintenance section about getting valid seednodes.
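A tiny guard for that check, as a sketch (the path is just wherever your update script writes the file):

```shell
#!/bin/sh
# Refuse to carry on with an empty (or missing) seednodes.ref.
check_seednodes() {
    if [ -s "$1" ]; then
        echo "seednodes ok"
    else
        echo "EMPTY $1 - fetch a fresh one before restarting" >&2
        return 1
    fi
}

# e.g. check_seednodes /path/to/freenet/seednodes.ref || exit 1
```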
Some choices related to Freenet are definitely not obvious, and can only be decided on properly with proper information. I'll try to provide this here as well as possible.
A lot of people run unstable because they hear it works 'better' or 'faster' than the latest stable. This is sometimes the case, but other times, unstable runs like crap because of some feature being worked on. Your mileage will vary, and it will vary from one mile to the next. Stable, on the other hand, has much more consistent performance. In my opinion, the real reason you should choose stable is that, unless you're helping debug Freenet, you really shouldn't be using unstable. Unstable exists so the developers can try new features and fixes and have relevant information reported back to them by experienced people who know what to look for. Stable, on the other hand, exists for common usage. Since stable and unstable represent separate networks, a node on unstable does nothing for the stable network, and the stable network needs reliable permanent nodes.
Really wish I could give you a number here, but I can really only give you the principles. The larger your datastore, the more keys you can house. The more keys you manage to serve to other nodes, the better your node seems in the eyes of other nodes. The better your node seems to other nodes, the more connected your node is.
If only it were that simple. Enter: specialization. The idea here is that when your DS is almost full, like 90-95%, it begins to prefer some keys over others, like ones whose first two bytes are F32A. Because of this specialization, other nodes learn that your node is more likely to have keys that begin with F32A which, in theory, helps routing tremendously.
The question here is whether it's better for routing to have more keys or to become specialized quicker. In my opinion, the better choice is to host more keys by having a larger DS, and a minimum of 2GB if you can spare it.
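For reference, a larger datastore is set in the ini/conf; the option name storeSize here is an assumption from memory, so verify it against your own file's comments:

```ini
# datastore size in bytes (2 GB, the suggested minimum above)
storeSize=2147483648
```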
Again, I can't give you a specific number, but a couple of guidelines. The primary thing to know here is that Freenet has a history of not obeying bandwidth restrictions to the letter, if at all. Lately, the limiting it does do seems to be either really close to the mark or dead on. I haven't been paying close attention, because it hasn't been flooding my connection. My recommendation, however, is to use some external means of bandwidth limiting. I tried trickle but have no idea whether or not it really works with Sun's JVM on Linux (which is what I'm using for my perm node). External controls are good to have, even if not for Freenet. A lot of P2P programs out there don't have good bandwidth throttling and could use a more reliable leash.
Freenet can open a lot of connections. If your node is directly connected to the internet, this shouldn't be an issue, but I think most broadband users are behind some kind of NAT, like a broadband router. I had one of those Linksys blue-box broadband router/switch combos and it was pretty sweet. However, between Freenet and other P2P programs we had running, it would freak out and reset itself with no discernible pattern until we cut back on the connection usage. Since we've switched to a Linux PC-based router, everything's been fine and I've had no problem letting Freenet decide how many connections it should have open. Be warned: cheap SOHO-type routers have limited resources that can be overrun by Freenet in conjunction with other connection-hogging programs.
 When Things Go Wrong
Freenet is a work in progress, so things will definitely break once in a while. The best thing you can do when something does break is write up as much information as you can about the problem, including circumstances of the break, OS, JVM, config settings, and relevant logs. Gather all this up and contact a dev either on IIP or on the dev mailing list to see what they want you to do with it. The real message here is that IIP is your friend for getting Freenet to work through odd problems. There are a lot of people there who have probably already had the same problem or at least have some idea as to how to fix it.
 Clock Skew
Freenet is sensitive about the access times on files in the DS, for caching and possibly anonymity purposes. Clock skew happens when your clock goes back in time, so that the timestamp on a file in the DS appears to be in the future.
Prevention is the best solution here. You can minimize the chances of clock skew on Linux by running ntpd, and on Windows by running a comparable solution (PTBSync, for example: http://elmue.de.vu/ ). On my Freenet box, I've found that lots of hard drive usage screws up my clock, and if it gets too far from the proper time, ntpd will exit and you'll have to fix the time by hand. If this happens to you and Freenet is what's causing the disk usage, see the Monitoring the Node section above.
If you do fall victim to clock skew, the dates on the files in your DS need to be set to something reasonable. I have a script to do this for linux and would appreciate one for windows. The downside of this is that your timestamps are all set to 'now' which slightly throws off your cache. A better script would only change the timestamps of files that are in the future.
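A sketch of that better script for Linux, using GNU find; /path/to/freenet/store is a placeholder, and you should stop the node before running anything like this:

```shell
#!/bin/sh
# Reset only store files whose timestamps are in the future, leaving
# everything else untouched (so the cache is disturbed as little as possible).
fix_future_timestamps() {
    store="$1"
    ref=$(mktemp)                  # reference file stamped "now"
    # anything strictly newer than "now" is from the future; restamp it "now"
    find "$store" -type f -newer "$ref" -exec touch {} +
    rm -f "$ref"
}

# e.g. fix_future_timestamps /path/to/freenet/store
```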
The other option is to destroy your DS. Before you do this, shut down your node. In linux you can destroy your DS with "rm -rf /path/to/freenet/store/*". In windows, you just find the store folder and Right-Click, Hold down Shift, then select Delete. The Shift prevents the contents from going to the recycle bin. Obviously, the downside to this is that you lose the contents of your DS, and consequently your node's specialization.
 Useless Routing Table
If, no matter what, your node gets RNFs with 0s across the board, there's a good chance your routing tables are screwed. The solution to this is to delete the routing table related files, but this is a drastic solution and is really naughty. To fix this, go to the freenet folder and delete all files that begin with "ngrt", all that begin with "lsnodes", and all that begin with "rt". That completed, download a new seednodes.ref. The easiest way to do this is run the "Update Node" procedure for your platform. I can't stress enough that deleting your RT is a bad thing; however, sometimes it's what you have to do.
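The deletions above boil down to something like this on Linux (again: node stopped first, and only as a last resort):

```shell
#!/bin/sh
# Remove the routing-table files (ngrt*, lsnodes*, rt*) from the freenet dir,
# leaving everything else (config, store, seednodes.ref) alone.
wipe_routing_table() {
    cd "$1" || return 1
    rm -f ngrt* lsnodes* rt*
}

# e.g. ( wipe_routing_table /path/to/freenet ), then run your Update Node
# procedure to fetch a fresh seednodes.ref.
```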
(Updated by dolphin 2005-1-14)
OK, we may as well face it: Fred and Java are pigs. They'll happily gobble up as much memory as you can throw at them, gorging themselves relentlessly until they finally...*explode*. Not a pretty sight.
So then, is there anything you can do about it? Yes, there is!
Luckily, there are a few rather obscure, and not very well-documented, command line options in Sun's JVM that can greatly reduce the chances of encountering the dreaded OOMs (Out Of Memory errors). Here's what I'm currently using, with great success, as my freenet startup command line (CLASSPATH omitted, as it varies, depending on your setup):
java -server -XX:+AggressiveHeap -XX:+DisableExplicitGC -XX:-UseTLAB -XX:+UseBoundThreads -XX:+UseThreadPriorities freenet.node.Main
Hmmm...now that I'm looking at it, that's not too pretty a sight, either, is it? :-)
- The line above is 10x more stable than the freenet default. This should be the line used in *nix fred distributions by default.
OK, here's the breakdown of the above:
1. java -server
ALWAYS use the -server switch. I don't care what anyone else says, this switch is a MUST. It will greatly enhance performance by enabling the JVM (Java Virtual Machine) to do on-the-fly compiling of those sections of the program that it determines are getting the heaviest usage. I'll spare you the technical details of how all of this works; if you're really interested, go visit Sun's Java pages and read up on it.
Now, on to the juicier stuff, and the point of this section.
2. -XX:+AggressiveHeap
The single most important thing you can do to avoid OOMs. This will cause the JVM to be much more aggressive in its heap management, actually *freeing up* previously allocated memory from time to time, instead of greedily hoarding it like some miserly Scrooge. You can see the effect of this on the Environment page, where you'll notice the "Maximum memory the JVM will allocate" growing and shrinking. Very cool.
3. -XX:+DisableExplicitGC
Disable any calls to System.gc() (the JVM garbage collector) within the actual freenet daemon code. Sun strongly advises on their web site against trying to manually interfere with the JVM's heap management, but nonetheless, there are several instances of this call within the freenet code. This switch renders them impotent.
4. -XX:-UseTLAB
Use thread-local heap allocation. The technical specifics of this option are rather involved. If you'd like to know more, visit Sun's site.
5. -XX:+UseBoundThreads
Bind JVM threads to native kernel threads. Sun claims this only works on Solaris, but I have my doubts. Be advised, though, that this option may have no effect on your system. The reason I'm using it myself is that I'm also using a special library under FreeBSD that provides a 1-to-1 mapping of process threads to kernel threads. This requires a little behind-the-scenes "magic" involving the use of FreeBSD's library mapping facility (via /etc/libmap.conf) to map Java's use of the standard pthreads library to the libthr library instead. Your mileage may vary considerably, of course, on your system, so you may or may not wish to use this particular option. It appears to do no harm, though, on systems where it's not supported.
6. -XX:+UseThreadPriorities
Use the kernel's native thread scheduling priorities, rather than Java's. Again, this is a rather specialized facility which may or may not work on your system. Using this option, just like all the other more esoteric options mentioned here, appears to do no harm if your system doesn't support it, so you may want to include it in your command line regardless.
7. freenet.node.Main
And last but not least, here's Fred! Of course, this should need no explanation, but just in case... This last bit tells the JVM exactly which class within the CLASSPATH to look for and try to run at startup, which in this case, of course, is Fred's main startup routine in freenet.jar (there actually *are* other possibilities, but we won't go into that here).
(Update: This section formerly also advocated using the following options: -Xms98304K -Xmx196608K.)
These set the minimum and maximum heap sizes to use. You can specify them in kilobytes (as seen here) or in megabytes (using "M" instead of "K"). My settings here translate to 96M and 192M, respectively. Don't be stingy here; give your node as much memory to work with as you can reasonably afford, without too severely affecting other programs running on the same machine.
(Update: I've recently discovered, after a closer reading of the docs on Sun's site, that these two options are incompatible with the -XX:+AggressiveHeap option. If you try using both -XX:+AggressiveHeap and -Xms/-Xmx, the options will end up overriding each other, producing other than the intended effect. If you want to explicitly declare your heap size using these two options, then don't use -XX:+AggressiveHeap.

The following option still does appear, in fact, in the default startup script for Unix installations. However, I can no longer find any reference to it anywhere in Sun's documentation; it may or may not still be valid. Using it appears to do no harm, however. I no longer use it myself.)
Another one of those obscure, only-documented-on-the-website switches: -XX:MaxDirectMemorySize. Recent versions of freenet have begun employing a new feature in Java's NIO package called Direct Buffers. Essentially, direct buffering reduces the overhead involved in shuffling data in and out of the system, but at a potentially dangerous price: yes, that's right, you guessed it, those nasty little OOMs. This switch helps to avoid them by giving the JVM a hint as to how much direct buffer memory it can play with. I'm currently using a setting of 256 megabytes (-XX:MaxDirectMemorySize=256M), which appears to be more than enough for normal operation.
All of the above options are, of course, subject to testing and tweaking. Your mileage may vary, as they say. But I do believe they're well worth exploring if you're serious about maintaining a well-tuned, permanently "up" node. (addition by dolphin, 2004-4-19, revised 2004-4-27, inaccuracies/misspelling in section 109 corrected 2005-1-14 by dolphin)