I fired up BlocksNet the other week while investigating some cross-platform, online backup options.
BlocksNet is developed in Ruby, and after some “bringing together” of the package and its dependencies for my Linux distribution, it worked quite well.
The BlocksNet concept raises an interesting question: should we trust P2P systems for secure data backups? BlocksNet is private in that you control whom you share data with – it can be a single known node or a set of them. And BlocksNet’s public TCP port is 1984…
If data is scrambled across an arbitrary set of nodes and retrievable from any node that runs the software, is it fundamentally more reliable and secure than a traditional star-topology backup architecture? Moreover, could we trust our encrypted data to be spread across a public P2P network for even greater redundancy?
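To make the question concrete, here is a minimal sketch of how encrypted data could be chopped into blocks and scattered across peers. This is my own illustration, not BlocksNet’s actual scheme; the peer names, block size and file path are made up:

```python
# Illustrative only: encrypt a file, chop the ciphertext into fixed-size
# blocks, and assign each block to a peer by its content hash. This is NOT
# BlocksNet's actual protocol; the node names and block size are invented.
import hashlib
from cryptography.fernet import Fernet  # symmetric encryption (pip install cryptography)

BLOCK_SIZE = 64 * 1024                    # hypothetical 64 KiB blocks
NODES = ["node-a", "node-b", "node-c"]    # hypothetical peer IDs

def scatter(path: str, key: bytes) -> dict:
    """Encrypt the file, split the ciphertext into blocks, map blocks to nodes."""
    with open(path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    placement = {node: [] for node in NODES}
    for i in range(0, len(ciphertext), BLOCK_SIZE):
        block = ciphertext[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        placement[NODES[digest[0] % len(NODES)]].append(block)
    # A real system would also record block order and replicate each block
    # to several peers so the data survives any single node disappearing.
    return placement

key = Fernet.generate_key()
print({node: len(blocks) for node, blocks in scatter("notes.txt", key).items()})
```

The point of the sketch is that no single peer holds readable data: each node only ever sees ciphertext blocks, and redundancy comes from how widely those blocks are replicated.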
Today more and more people are trusting cloud backup services like Dropbox and SpiderOak with their data storage, so clearly people are willing to give up control to third parties; P2P, on the other hand, is still a “dirty” abbreviation.
It seems as though BlocksNet wasn’t designed for backups, but that is one interesting use case.
Ideally, a P2P backup system would allow you to add directories to the system, which are then encrypted and propagated around the network. Access from a local file manager is really a must if it is to be practical. BlocksNet is web-based, and I haven’t investigated whether it integrates with local file management tools.
Anyone know of an open-source P2P backup app? I already use a star-topology backup system, but it would be good to have files accessible in a consistent way across all my Linux and Windows systems without having to run sync tools continuously. Further investigation is required.
Perhaps it’s not a “backup” app that I’m after but something like iFolder, which maintains a persistent file state between nodes and a server. It would be good if the server part were optional.
Something like this: install the client on every node in your private network (LAN or WAN), designate shared directories, add each node to the network, and enjoy automatic replication and fault tolerance between all nodes. That way, no matter which computer you log on to, you always have all your working files available. From there, traditional point-in-time backups can always be taken. Snapshots and deduplication could be integrated but aren’t strictly necessary. You could still use a central server, but it wouldn’t be a single point of reliance – it would just be another node on the network.
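As a thought experiment, a per-node configuration for that kind of setup might look something like the sketch below. Everything here is hypothetical; the class names, fields and addresses are invented purely to illustrate the “the server is just another node” idea:

```python
# Hypothetical per-node configuration for the replicate-everywhere setup
# described above; no such tool exists as written, and the fields are invented.
from dataclasses import dataclass, field

@dataclass
class Peer:
    name: str
    address: str                            # LAN or WAN endpoint

@dataclass
class NodeConfig:
    shared_dirs: list                       # directories to replicate
    peers: list = field(default_factory=list)
    replicate_to_all: bool = True           # keep a full copy on every node

    def add_peer(self, name: str, address: str) -> None:
        """A 'central server' is added the same way; it is just another peer."""
        self.peers.append(Peer(name, address))

# Example: a laptop joining a small private network.
laptop = NodeConfig(shared_dirs=["~/Documents", "~/Projects"])
laptop.add_peer("desktop", "192.168.1.10:1984")
laptop.add_peer("office-server", "vpn.example.org:1984")
print([p.name for p in laptop.peers])
```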
Something to think about.