
Ideas: Local + Proxy Remote Hosting for Personal Site

Hosting your personal website on a computer at your home puts extra indie in indieweb. You truly control all of your data. I did this for several years with a very modest setup: serving from a mobile home on an iBook G3 800 with Windstream DSL internet. Performance obviously wasn’t the same as a web host would’ve provided. Of course, it helped a lot that I didn’t have much traffic. But I still had a lot of downtime, for a number of reasons:

  • Dynamic IPs: most consumer-level internet service plans don’t include a static IP, so the address changes occasionally. I used DynDNS to accommodate this, but there was still downtime between the IP changing and the update daemon running, DynDNS updating its records, and the DNS change propagating.
  • Internet outages: consumer-level plans definitely don’t have the robust connection that a web host has. This was especially true at my mobile home, where perhaps old wiring led to fairly frequent outages, particularly on windy days.
  • Power outages: hosting companies have backup power. Most homes do not. The utility power went out at least a few times while I was hosting, and I also had to cut the power myself whenever I worked on something electrical. My server would stay on because it was a laptop, but the router wouldn’t. A UPS is a reasonably priced option for reducing or eliminating this problem, though.
  • Computer / router issues, updates, etc: any reboots, shutdowns, or stopped server daemons, whether for updates or to fix problems, mean your site is down. Web hosts usually have robust servers, and if they’re managing them, they’re usually very good about keeping them up and applying updates quickly and during low-traffic periods.

My idea to mitigate the performance and downtime problems would be to use a reverse proxy, such as Varnish, running on a remote web host, with your DNS pointing at it. It would be configured to fetch content from your home server’s IP. You’d have to set up a daemon to contact the remote server and update that IP when it changes. Public pages would be given long cache times so that they would remain available if your home server goes down. The application(s) on the home server would then have to send a PURGE request whenever a page is updated. Or perhaps, if the proxy allows, you could use whatever max-age values you want but have the proxy store cached responses indefinitely and serve them when the home server can’t be reached, even if the max-age has passed.
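
For the purge step, here’s a minimal sketch of what the application side could look like, assuming the proxy is Varnish (or similar) configured to accept PURGE requests from the home server; the hostname, scheme, and path are placeholders:

    <?php
    // Hypothetical helper: ask the remote proxy to drop its cached copy of a
    // page after it has been edited. Assumes the proxy allows PURGE from this
    // origin; adjust the scheme/port to wherever the proxy listens.
    function purgeRemoteCache($path, $proxyHost = 'proxy.example.com')
    {
        $ch = curl_init('http://' . $proxyHost . $path);
        curl_setopt_array($ch, array(
            CURLOPT_CUSTOMREQUEST  => 'PURGE',
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_TIMEOUT        => 5,
        ));
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        return $status === 200;
    }

    // e.g. after saving a post:
    // purgeRemoteCache('/blog/some-updated-post');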

This idea is not without its problems. For instance:

  • Security of the connection between servers: if your site uses SSL, the connection between the servers would also have to be over SSL, or the SSL used between the client and the remote server would be virtually worthless. Without SSL between the two, a man in the middle could easily eavesdrop on the traffic or divert it all to their own server. Because of the changing IP address, the home server would have to use a self-signed certificate, possibly increasing the risk of a man-in-the-middle attack between the two servers and at the least requiring the remote server to accept that certificate from whatever IP it currently considers your home server.
  • Non-cacheable requests would always need the home server: private pages, such as admin pages, as well as any mutating requests (POST, etc.), would always have the same performance and robustness issues as the home server. Most importantly for many personal sites, webmentions / pingbacks / trackbacks / comment submissions would fail if the home server went down, as would any other form submissions. To deal with this, you’d probably have to do some programming on the remote server to queue these requests and return an appropriate generic response (see the sketch after this list). For admin and logged-in user activity, you could build the client side of your app to operate as you’d like in offline mode.
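
As a rough illustration of the queueing idea (just a sketch, not something I’ve built or tested), the remote server could catch submissions it can’t forward and park them on disk for later replay; the queue directory, response text, and replay mechanism are all assumptions:

    <?php
    // Fallback endpoint on the remote server: when the home server can't be
    // reached, store the incoming POST (a webmention, comment, etc.) on disk
    // so it can be replayed later, and answer with 202 Accepted.
    $queueDir = __DIR__ . '/queue';
    if (!is_dir($queueDir)) {
        mkdir($queueDir, 0750, true);
    }

    $record = array(
        'time' => date('c'),
        'uri'  => $_SERVER['REQUEST_URI'],
        'post' => $_POST,
        'body' => file_get_contents('php://input'),
    );
    file_put_contents(
        $queueDir . '/' . uniqid('req_', true) . '.json',
        json_encode($record)
    );

    http_response_code(202);
    echo 'Received; your submission will be processed once the site is back.';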

And, as is always the case with serving from home, server and home network configuration, security, maintenance, etc. is all on you. There isn’t really a “managed” option available. You’ll have to get everything working, apply updates, deal with server and network problems, etc. In a home environment, security also includes physical access to the device.


I finally got PHP 7 working locally. It worked fine for CLI when I first installed it soon after its initial release, but it wasn’t working with Apache. I’ve been upgrading every once in a while and today it finally worked. Now I just have to wait until Dreamhost supports it before I can start playing with it on my own site. At work, though, I’m still stuck back in PHP 5.3 land because of needing to support some old sites.


Security HTTP Headers

I’ve been working on the HTTP headers my site sends recently. I had been working on performance / cache related headers, but after seeing mention of a security header scanner built by Scott Helme, I decided to spend a little time implementing security related headers on my site. I don’t really know these headers that well, so I added the headers it suggested and mostly went with the recommended values. I did read up a bit on what they mean though and modified the Content-Security-Policy as I saw fit.

I added most of the headers using a Symfony response event listener. This handles all of my HTML responses without sending the headers for other responses, where they aren’t necessary. The exception is X-Content-Type-Options, which should be set on all responses; I set that one in the Apache configuration.
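
For the curious, here’s a minimal sketch of what such a listener can look like; it isn’t my exact code, and the class name, header set, and values are just illustrative. It would be registered with a kernel.event_listener tag for the kernel.response event:

    <?php
    // Illustrative Symfony (2.x / 3.x era) response listener that adds
    // security headers to HTML responses only.
    namespace AppBundle\EventListener;

    use Symfony\Component\HttpKernel\Event\FilterResponseEvent;

    class SecurityHeadersListener
    {
        public function onKernelResponse(FilterResponseEvent $event)
        {
            if (!$event->isMasterRequest()) {
                return;
            }

            $response    = $event->getResponse();
            $contentType = $response->headers->get('Content-Type');
            // Skip non-HTML responses (an unset Content-Type defaults to HTML).
            if ($contentType !== null && strpos($contentType, 'text/html') !== 0) {
                return;
            }

            $headers = array(
                'X-Frame-Options'         => 'SAMEORIGIN',
                'X-XSS-Protection'        => '1; mode=block',
                'Content-Security-Policy' => "default-src 'self'",
            );
            foreach ($headers as $name => $value) {
                if (!$response->headers->has($name)) {
                    $response->headers->set($name, $value);
                }
            }
        }
    }

X-Content-Type-Options is left out here since, as noted, it’s set in Apache for all responses.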

Continue reading post "Security HTTP Headers"

I don’t know why I didn’t realize this before, but git project versions can be managed with tags alone rather than creating a branch for each point version. Packagist can go entirely by tags. I had been creating point-version branches because Symfony does, but that’s really only needed if you have to keep updating a previous version; it’s overkill for small, one-person projects. And with a tag available, it wouldn’t be hard to create a branch from it later if needed.


Line Mode Browser, or progressive enhancement all the way back

Progressive enhancement is a development strategy meant to provide older and / or less capable browsers with a working website while giving more capable ones a rich, full experience. It is often presented as a set of layers of support: HTML at the base, CSS added to that for styles, then JavaScript for advanced behavior. With this, it’s often posited that a well-crafted HTML experience can be used by any browser. However, for really old browsers from the early web, the modern web introduces many things that can make pages difficult to read, functionality unusable, or even entire sites inaccessible.

Today, I’m going to go back as far as I reasonably can in terms of browser support, to the second web browser ever made, and the first widely supported one: Line Mode Browser. I can’t look at the first, WorldWideWeb, because it was only made for NeXTSTEP and, as far as I can tell, isn’t accessible for me to test with. Line Mode is, though. It was open-sourced by the W3C and has been kept available. I was able to install it via MacPorts from the ‘libwww’ package (it runs as www on the command line).

Line Mode was based on WorldWideWeb and was in fact less featured, so it likely has any issues WorldWideWeb had, and more. I will look at some of the issues Line Mode has with modern web pages and provide some solutions that improve the ability of even the oldest browsers to use a page.

Continue reading post "Line Mode Browser, or progressive enhancement all the way back"

Upgrading my Awstats setup

I don’t really monitor analytics for my personal sites that often, apart from my blogs, for which I use wordpress.com’s analytics. I do have three open-source analytics programs set up for my main sites though: Piwik, OWA, and Awstats. Awstats is the one I’ve tended to look at the least, probably because its interface isn’t as nice as the others’ and it doesn’t have as much data about visits. However, it is the only one that looks at actual server logs, so it should be the most accurate about basic visit information. The other two use JavaScript, one with an image fallback, so they have the potential to miss visits.

I have my awstats set up as I described in 2010. I keep the configuration and the data separate from the install to make updates easier. However, it had been so long since I upgraded that I forgot how it was set up and fumbled a little before finding that article and figuring out what had to be done. In order to make it easier for next time, I created myself a simple little script to handle the upgrade for me:

Continue reading post "Upgrading my Awstats setup"