Debate about attacks and weaknesses of Freenet


  • If you want to be theoretical, then yes, Freenet does not provide anywhere near "absolute" anonymity.
Ian: "This is truism, because there is no such thing as "absolute" anonymity. Anonymity isn't black or white, a feature that is either on or off, there are degrees of anonymity measured by the amount of effort an attacker must expend to compromise that anonymity. Even theoretically anonymous systems such as the dining cryptographers algorithm could easily be thwarted through a wide variety of meatspace means given sufficient willingness on the part of the attacker.

There is little point in reinforcing the door (the technology) when the window is wide open (meatspace). This is a common fallacy among many people who have read "Applied Cryptography" but not its sequel."
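For readers who haven't met it, here is a minimal sketch of one round of the dining cryptographers protocol Ian refers to (a toy Python construction for illustration; the participant names and the single-bit message are assumptions, not anything taken from Freenet):

    import secrets

    names = ["alice", "bob", "carol"]
    # Each pair of diners shares a secret coin flip.
    pairs = [("alice", "bob"), ("bob", "carol"), ("alice", "carol")]
    coins = {frozenset(p): secrets.randbits(1) for p in pairs}

    message_bit, sender = 1, "bob"   # bob anonymously announces a 1

    # Everyone announces the XOR of the coins they share; the sender
    # additionally XORs in the message bit.
    announcements = {}
    for n in names:
        a = 0
        for pair, c in coins.items():
            if n in pair:
                a ^= c
        if n == sender:
            a ^= message_bit
        announcements[n] = a

    # Each coin appears in exactly two announcements and cancels, so
    # the XOR of all announcements is the message bit - but nothing in
    # the announcements identifies the sender.
    result = 0
    for a in announcements.values():
        result ^= a
    print(result)   # prints 1

Ian's point stands regardless: the protocol is anonymous on paper, yet a meatspace attacker can simply watch the diners.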


  • In fact, it doesn't even provide the level of anonymity against which things like anonymous remailers or mixnets are judged.

Ian: "This is debatable, mixnets delay messages to prevent traffic analysis from being used to tie senders to receivers - however since information generally sits for a while on intermediate nodes between insertion and retrieval (not to mention moving around a bit), the same could be said
of Freenet (assuming that you allow that the file inserter is the "sender", and the file requester is the "receiver"). Mixnets have the problem that even with these random delays, statistical analysis can still be used to associate senders with receivers. Such statistical analysis may also be applied to Freenet, but it would be much more complicated."
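As a concrete illustration of the delaying Ian describes, here is a toy "pool mix" sketch (my own Python construction; the PoolMix name and the threshold of 5 are assumptions): messages are held until a batch has accumulated, then flushed in random order, so an observer cannot match arrival order to departure order.

    import random

    class PoolMix:
        def __init__(self, threshold=5):
            self.pool = []
            self.threshold = threshold

        def accept(self, msg):
            self.pool.append(msg)
            if len(self.pool) < self.threshold:
                return []                  # hold: nothing leaves yet
            random.shuffle(self.pool)      # break arrival/departure order
            batch, self.pool = self.pool, []
            return batch                   # the whole batch leaves at once

    mix = PoolMix()
    for i in range(12):
        out = mix.accept(f"msg{i}")
        if out:
            print("flushed:", out)

Ian's caveat applies here too: given enough observed rounds, statistical intersection attacks can still link senders to receivers despite the shuffling.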



  • Basically, Freenet purports to be "anonymous" because your files do not reside on the computer of the person who uploaded them, and because all downloads and uploads are chained and tunneled through each host involved in the transfer. That means that the host you download a Freenet document from just knows it got it from some other node, which got it from some other node, which got it from some other node, all the way back to the person who uploaded it. It certainly makes tracking the people who upload and download things more difficult than on networks like Kazaa (where it is, as we have seen, trivial), but in theory, and with enough resources, it is of course not impossible.
It should be noted what Freenet does NOT provide, however. Freenet does not do what the serious mixnets refer to as "onion routing", which basically means that the message is wrapped in an onion of cryptographic layers, which are peeled off at every step. The idea behind this is that only the very last node can see the contents of the message, and only the first knows it came from you (and none of the other nodes know anything except where the message came from and where it went).
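To make the contrast concrete, here is a minimal sketch of that layered wrapping (a toy Python construction; the XOR "cipher", the pad sizes, and the relay names are illustrative assumptions - real onion routing uses proper ciphers and puts the next-hop address inside each layer):

    import os

    def xor(data, key):
        # One-time-pad style XOR; stands in for a real cipher here.
        return bytes(b ^ k for b, k in zip(data, key))

    relays = ["entry", "middle", "exit"]
    keys = {r: os.urandom(64) for r in relays}   # one key per relay

    # The sender wraps the innermost (exit) layer first.
    onion = b"GET some-key"
    for r in reversed(relays):
        onion = xor(onion, keys[r])

    # Each relay peels exactly one layer with its own key; only the
    # exit node ever sees the plaintext.
    for r in relays:
        onion = xor(onion, keys[r])
        print(r, "->", onion)   # still ciphertext until the exit peels

Freenet's hop-by-hop chaining, by contrast, lets every intermediate node see the request it is forwarding, which is why the plausible-deniability argument below carries so much weight.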

If you request something from Freenet, your node will call up another node and ask it for that file - if that node is controlled by the Feds then you are busted. It is argued that there is plausible deniability, because it is possible that your node was not downloading the file because you asked for it, but simply forwarding it for somebody else. Given the state of the judicial process at the moment, I'm not terribly optimistic about this defense.

Ian: "This would effectively amount to the random punishment of a Freenet
user. In repressive countries, anyone can be punished for anything anyway, the only safety is safety in numbers. In non-repressive countries, you generally need a good reason to convict someone, and if the plausible deniability argument doesn't work, then wave goodbye to the Internet, because it just became illegal to operate a router."

-- see Comments Page on this issue


Toad: "Correct. We need premix routing to take Freenet's anonymity to the next level. We have known this for years. However, I am not as pessimistic as the author about plausible deniability: if plausible deniability is not a defence, then it will probably be illegal to tunnel encrypted data too."

      commented at Plausible Deniability

  • Freenet also doesn't protect (at least not very well) against traffic and timing analysis, which could allow an attacker to track down the author of something by watching the timing and volume of encrypted traffic that nodes exchange. I don't know of any case of traffic analysis actually having been used (except maybe at the NSA hyper-spook level), but it isn't impossible.
Ian: "Impossibility isn't the issue, practicality is. Even mixnets aren't immune to statistical traffic analysis as I point out above."


Toad: "Perhaps so. This comes from it being real-time. I2P probably suffers from similar problems. Certainly we could do some traffic padding/pooling - I2P probably does more than we do."


  • Another thing that Freenet does not "anonymise", and this is the most important IMO, is that you are running a node in the first place. Your Freenet node has to be public, so the feds could definitely "fish" the network for node addresses and start busting those who run them. Again there is an argument of deniability: you don't actually know what is in your node's cache because it is encrypted, but again I don't have a lot of faith in this defense when the prosecutor argues that you knowingly acted in bad faith.
Ian: ""Shadow nodes" will address this issue when we get around to it, but it is still a case of reenforcing the door when the windows wide open."


Toad: "I would like to implement some steganography eventually. However I am not convinced that there ARE any undetectable anonymous networks.
Steganography is an ongoing battle of attrition, obscurity and constantly
updating are sadly the best defences in it. Again it's down to "better
the way we do it than the way you don't". Harvesting seednodes is indeed
a vulnerability. In the long term, "hostile environment routing" might
involve a fixed set of steganographic links going out from a node that
were explicitly set up by the user to people he trusts. However this is
not without its own security problems! And it's certainly not usually
practical, and would have performance issues i.e. we have to get routing
working a bit better first for fixed mesh to be practical
performance-wise."


  • Regarding Winny, however, I think I agree with Ian. It seems doubtful that Winny works in the same manner as Freenet, for the simple reason that Winny works, and well, Freenet, umm, doesn't. Any time you try to put anonymity into something, usability IS going to take a hit, because spreading and bouncing traffic necessarily hurts performance. I have a very hard time believing that Japan's most popular P2P network could be based on tunneling everything - purely for performance reasons.
Toad: "I'm skeptical. I think it is possible to make a fast tunneling network. In fact, I KNOW it to be true. Remember Zero Trust Systems?" {I am assuming "Zero Knowledge Systems" here...}

  • Actually, the defense [of plausible deniability] is both good and bad - the problem lies in the HTL - Hops To Live. As it is (or at least was, when I tried to convince them it was a bad idea), the maximum HTL is 25 (in the node, no matter what the program requests). That is, if you request/insert something with HTL 25, it's *your* request/insert, no one else's.
Toad: "The defence is to implement premix routing."

  • I recommended adding a random factor to that, so that there was only a *probability* that you were the original requester/inserter. In fact, they have implemented exactly that at the very low end - to avoid node probing - though I got pretty much zero response. This alone makes Freenet's "anonymity" claims pretty much broken, if you ask me. I did get some (arguably true) response that statistical attacks would still work - but it'd still beat the smoking gun you have now.
Toad: "Also at the high end. Randomizing or even REMOVING the HTL would NOT fix the problem, we need premix routing."


  • I still maintain Freenet doesn't protect well against traffic and timing analysis, allowing one to track down the author of something using the timing and amount of encrypted traffic that nodes exchange.
Toad: "Possibly. As you move across the network, it gets harder. Request
queueing may make it harder still."


  • That is pretty well known, and also quite solvable. However, both sending bogus traffic and having random delay buffers (Freenet requests really can't work like a mixnet pushover buffer) would drain Freenet's already mediocre performance. Not to mention it requires some pretty damn huge resources to mount that attack from the outside.
Toad: "Queuing would help. There are a lot of messages other than requests, too. What you would try to do would be to track trailers. This is made a great deal more difficult by multiplexing. I suspect it would be a lot
more difficult than it sounds, but it would probably be feasible. Yes,
before multiplexing, it was rather easy. I admit that, and it's a shame
we didn't realize that at the time. Freenet's mediocre performance comes
IMHO from A) bugs, and B) the fact that we don't have the architecture
sorted out yet."


  • A more insidious way would be to run compromised nodes, and hammer the node you wish to unravel with connection requests from other compromised nodes. If you already know your target, it might be possible to compromise all the nodes in their routing table (the more nodes you have, the more new requests for new compromised connections you can send). Also here, Freenet is pretty dumb in that it has a static 50 node limit by default. Once you've got 50 compromised nodes in contact with the target node, it's isolated from the network and you can see all requests/inserts it does. With at least some random factor, you would provide some uncertainty - do we control all nodes now, or are there still more? Can we *prove* these came from him?
Toad: "If you can compromize half the network you can probably do what you
want. It's not as easy to dominate the node's routing table as it
sounds: bidi will add new connecting nodes to the routing table, but
only if it has space for them. This could certainly be improved on.
Also, I don't see what the problem is with the 200 node limit. You have
to set SOME limits."
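Here is a toy model of the table-takeover attack being described (my own Python construction; the accept-while-space rule is a simplification of the behaviour Toad describes, and the arrival counts are made up):

    import random

    TABLE_SIZE = 50   # the default limit quoted above

    def attacker_share(honest, attacker):
        # Interleave honest and attacker connection attempts at random;
        # the node accepts each new peer only while it has free slots.
        events = ["honest"] * honest + ["attacker"] * attacker
        random.shuffle(events)
        table = events[:TABLE_SIZE]
        return table.count("attacker") / TABLE_SIZE

    # An attacker sending ten times as many connection requests as the
    # honest nodes ends up owning roughly 90% of the table:
    print(attacker_share(honest=100, attacker=1000))

Once every slot is attacker-controlled, the node is eclipsed and all of its requests are visible, which is exactly the isolation scenario above.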


  • One could run a 'node-harvester'
Toad: "Yes, you can run a harvester. As I said, if it is illegal to run a
freenet node, it will not be enormously difficult to find most of them.
However, my understanding is that I2P suffers from exactly the same
problem. That is something we may have to deal with later. It may not be
soluble by ANY network. Or there may be a bunch of different solutions.
Here are some ideas:

  • Fixed steganographic mesh. CON: Once you've got one node, providing you can surveil the entire network, you can probably get the rest.
  • Use of third party services, e.g. webmail, for rendezvous. CON: They run them!
  • Use of premix routing on every step via an external pipenet. CON: They'll just ban *THAT*.
My point is that it is hard, nobody has actually successfully done it afaik on any network, and we can't run before we can walk."


  • I remain convinced the problem lies in the HTL - Hops To Live. It doesn't add a random factor to the forwarding.
Toad: "WE DO. We have a 50% chance of forwarding a maximum HTL untouched.
HOWEVER, statistical attacks are still feasible if you are downloading
multiple files. Premix routing will help in this.

Conclusion: In summary, Freenet is a practical system that provides a reasonable level of anonymity against reasonably powerful attackers. It can, and will, be enhanced to provide a greater level of anonymity. Making nodes invisible is NOT easy by any stretch of the imagination and is not something we can or should address before 1.0. I very much doubt that any other network has addressed it satisfactorily either. Compromising a large fraction of the network would of course be a problem, and our only defence against this is strength in numbers, although premix routing will help slightly. Statistical attacks are quite feasible but can be defeated by implementing premix routing. Taking over a node's routing table is a powerful attack and we need to do more to prevent it, but we have already come a long way in that. Traffic analysis may or may not be feasible. Just padding packets to a power of two (or the sum of two powers of two) will help, and multiplexing has already brought us a very long way in defeating traffic analysis. The new request queueing mechanism, if implemented, will also help to make traffic analysis more difficult.

None of the above will make it impossible. Passive traffic analysis, however, is far from easy. The cost of implementing traffic padding will not be as high as is suggested, because we already have a probabilistic defence: we would not want to pad every link, because it would make things ridiculously slow, but we could do something like sending bogus requests (i.e. void packets the same size as the request) to some nodes - say, the top 3 choices, plus 3 random nodes? The attacker cannot easily identify when the request is being answered, simply because we have many requests in-flight on most routes, all of which are constantly generating messages, and the trailing field data is also divided into packets. It may be possible to identify packets from trailing fields just because of their lengths, though. My point here is that although we can't guarantee that it's impossible to do traffic analysis, we CAN establish that it won't be easy (well, after multiplexing, we can)."
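To make the padding rule Toad floats concrete, here is a small sketch (my own Python construction) that rounds a packet length up to the nearest power of two or sum of two powers of two, so that observed lengths fall into a small set of buckets:

    def padded_length(n: int) -> int:
        # Smallest value >= n that is a power of two or the sum of two
        # distinct powers of two.
        if n & (n - 1) == 0:                 # already a power of two
            return n
        best = 1 << n.bit_length()           # next power of two alone
        for i in range(n.bit_length() + 1):
            for j in range(i):
                s = (1 << i) + (1 << j)
                if n <= s < best:
                    best = s
        return best

    for size in (100, 1000, 1337, 4096):
        print(size, "->", padded_length(size))
    # 100 -> 128, 1000 -> 1024, 1337 -> 1536, 4096 -> 4096

Padding alone leaves timing intact, of course, which is why it is paired above with bogus requests and queueing.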