We are trained to think of the digital as being limitless and ethereal (yet with perfect memory), a realm purely made of memetic ideation. While there’s no doubt the Internet itself is fundamentally a phase shift in information spread, it has a physical underpinning. Many different structures form it: the individual computers; the physical routing material, e.g. fiber cables; the backbone routers; the companies that manage these things; and the really, really big warehouses of computers–data centers.
A super-majority of computing power and data storage is housed within these data centers. Because this is the material base, most computing assumes these gigantic centralized data/compute nodes. We expect services to be working 24/7; a minor blip in email access causes untold economic damage.
Back when the Internet was good the physical realization of the network was different. Compute was more spread out and decentralized, as was the ability to effectively command and control that compute. Regular people built their own websites towards their own ends, for the business they owned and operated or for the opportunity to share themselves in some way. They did not need to mold themselves to Facebook. Instead the design abilities of regular people contributed wonder and delight to the web. Dreamweaver was easy enough for children to use, and your ISP or GeoCities let you host for free.
How did we get to today, where regular people essentially aren’t able to create their own websites? Complexity in the space ballooned. Now you need to have perfect SEO, load times, 99.999% uptime, Facebook integration, Twitter feeds, etc. etc. To return to the spirit of the old web we need to drastically reduce complexity.
A common reply to the negatives of mass centralization is mass decentralization via peer-to-peer systems. The problem with this approach is that it still incurs massive complexity in the maintenance and operation of the system itself. It is simply too much to ask of people to know how to set up an HTTP server, how to handle port forwarding, how to check on the health of services, etc. etc.
On the other hand: it is not at all much to ask of your stereotypical Linux nerd to be able to handle that. I suggest these men can be mayors.
In my previous post I discussed walled digital cities–servers as something that can house citizens, with gated access to the city’s resources. In this post I’ll go over what the first step to this sort of Linux city-server looks like.
To start, we’ll need to add citizens, which are simply Linux users. Create a group for citizens first if you haven’t.
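As a sketch — the group name citizens and the user alice are my choices, and I'm assuming a Debian-style system:

```shell
# One-time: create the shared group for all citizens
sudo groupadd citizens

# Per citizen: create the account with a home directory,
# and add it to the citizens group
sudo useradd -m -G citizens -s /bin/bash alice
sudo passwd alice
```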
Citizens familiar with Linux can now ssh in. Since we expect most citizens will not be power users, we need to make things easy on them. For this I’ll be creating guides on using VS Code over SSH. This makes it easy to do things like edit text files and run terminal commands from a single window–less complexity!
Service: Static Sites
To attract citizens we’ll need to do something for them. Let’s do static site hosting. I’ll assume you’ve already registered a domain like yourserver.city.
Start a new site:
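Here’s a minimal nginx server block as a sketch. The file path, the convention of serving each citizen’s ~/www directory at /~username/, and the domain are all my assumptions — adjust to taste:

```nginx
# /etc/nginx/sites-available/yourserver.city
server {
    listen 80;
    server_name yourserver.city;

    # Map /~alice/... to /home/alice/www/...
    location ~ ^/~([a-z_][a-z0-9_-]*)(/.*)?$ {
        alias /home/$1/www$2;
        index index.html;
    }
}
```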
You should also enable SSL, which is left up to you. Enable the site, remove the default, start nginx:
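Something like the following, assuming the site file from the sketch above and a Debian-style nginx layout:

```shell
# Enable the site, remove the default placeholder
sudo ln -s /etc/nginx/sites-available/yourserver.city /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default

# Validate the configuration, then (re)start nginx
sudo nginx -t
sudo systemctl restart nginx
```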
Every time you create a new citizen we’ll need to give them their own space on this website:
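With a per-citizen ~/www layout (my assumed convention; alice stands in for the new citizen), that’s a directory with the right ownership and permissions:

```shell
# Create the citizen's web root and hand it over to them
sudo mkdir -p /home/alice/www
sudo chown alice:alice /home/alice/www

# nginx needs traverse permission on the home directory
# and read permission on the files it serves
sudo chmod 711 /home/alice
sudo chmod 755 /home/alice/www
```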
I drop a small index.html into each citizen’s folder to give them something to start with. Now your citizens will be able to create their own static websites under your domain.
I like to use Mercurial for source control, as it’s mildly more user-friendly than git with the same functionality.
In a folder, start a repo with hg init. To give citizens push access to that folder, set the citizen group as owner and set the right permissions:
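For example — the repository path /srv/hg/townwiki is just my example:

```shell
# Create and initialize the repository
sudo mkdir -p /srv/hg/townwiki
cd /srv/hg/townwiki
sudo hg init .

# Hand the folder to the citizens group: group ownership, group
# read/write, and the setgid bit so new files inherit the group
sudo chgrp -R citizens /srv/hg/townwiki
sudo chmod -R g+rw /srv/hg/townwiki
sudo find /srv/hg/townwiki -type d -exec chmod g+s {} \;
```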
Now any citizen can do:
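Assuming the example repository path above (note Mercurial’s double slash for an absolute path in ssh:// URLs):

```shell
# On the citizen's own machine
hg clone ssh://alice@yourserver.city//srv/hg/townwiki
cd townwiki

# ...edit some files...
hg add .
hg commit -m "First change"
hg push
```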
Single repos can be served using:
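Mercurial ships a small built-in web server for this:

```shell
# Read-only web view of a single repo on port 8000
cd /srv/hg/townwiki
hg serve --port 8000
```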
For serving multiple repos at once, see hgweb.
File sharing between users is as simple as setting the owner of a folder to the citizen group and giving them read and write access:
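For instance — the path /srv/shared is my choice:

```shell
# A town-wide shared folder
sudo mkdir -p /srv/shared
sudo chgrp citizens /srv/shared

# 2770: owner and group can read/write/enter, the setgid bit keeps
# new files owned by the citizens group, outsiders get nothing
sudo chmod 2770 /srv/shared
```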
All users can now share documents via that folder. More complicated file-sharing schemes with e.g. OwnCloud are possible, but this is the simplest way. Just beware the accidental rm! Best to back this folder up incrementally…
This folder can also be mounted remotely. If the citizen uses Linux:
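With sshfs, for example (user and paths are my examples):

```shell
# One-time: install sshfs, e.g. on Debian/Ubuntu
sudo apt install sshfs

# Mount the server's shared folder into the local home directory
mkdir -p ~/town-shared
sshfs alice@yourserver.city:/srv/shared ~/town-shared

# Unmount when done
fusermount -u ~/town-shared
```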
If that works, make it permanent by editing fstab:
Add to the end:
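Something along these lines — the user, host, and paths are my assumptions:

```
alice@yourserver.city:/srv/shared  /home/alice/town-shared  fuse.sshfs  noauto,user,_netdev,reconnect  0  0
```

With noauto,user the citizen can then mount it on demand with mount ~/town-shared.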
Windows users can use sshfs-win.
The real land of your town is your server. What I’ve described here could be run on a Raspberry Pi if traffic to the sites is going to be very low. For higher uptime, a more significant server and a business ISP connection would be desirable; a data center server, for full uptime and connection speed, is also an option. It’s up to what works for you.