Seems that WordPress.com’s new editor has problems with markdown. Post got converted into HTML and code blocks got messed up. I will stick with WordPress’s built-in editor.
My sites now HTTPS with LetsEncrypt
My sites are now HTTPS-enabled with LetsEncrypt. It was easy to set up through Dreamhost’s panel: just a few clicks and some waiting. This is the first time my own sites have been available over HTTPS. I’d been wanting to do it for a while, but it was kind of costly until the free LetsEncrypt became available. This brings my sites in line with the “HTTPS Everywhere” movement. I’ve also been wanting to play with the forming standards for making web apps installable almost like native apps, which generally require HTTPS.
I had written a post before about how I’m setting my security-related headers. I’ve now added HTTPS-related headers in a similar manner: HSTS and Upgrade-Insecure-Requests.
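As a rough sketch (plain PHP header() calls rather than my listener setup, and with placeholder values, not necessarily what my site sends), sending those might look like this:

```php
<?php
// Illustrative values only.

// HSTS: tell browsers to use HTTPS for all future requests to this host.
// max-age is in seconds (one year here); it's safer to start small while
// testing, since browsers remember the value.
header('Strict-Transport-Security: max-age=31536000; includeSubDomains');

// On the response side, upgrade-insecure-requests is a Content-Security-Policy
// directive; it asks browsers to rewrite http:// subresource URLs to https://.
header('Content-Security-Policy: upgrade-insecure-requests');
```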
Dreamhost now supports LetsEncrypt, even on shared hosting. LetsEncrypt provides free SSL certificates. I’m going to have to try it out on my domains. My plan is to make HTTPS the canonical protocol for my visitor-targeted domains while still supporting HTTP for older browsers.
The Webmention spec is now being published as a W3C working draft.
Ideas: Local + Proxy Remote Hosting for Personal Site
Hosting your personal website on a computer at your home puts extra indie in indieweb: you truly control all of your data. I did this for several years with a very modest setup, serving from a mobile home on an iBook G3 800 over Windstream DSL. Performance obviously wasn’t what a web host would’ve provided, though it helped a lot that I didn’t have much traffic. But I still had a lot of downtime, for a number of reasons:
- Dynamic IPs: most consumer-level internet plans don’t include a static IP, and the address changes occasionally. I used DynDNS to accommodate this, but it still led to downtime between when the IP changed and when the update daemon ran, DynDNS updated its records, and the DNS change propagated.
- Internet outages: consumer-level plans definitely don’t have the robust connection that a web host has. This was especially true at my mobile home, where old wiring perhaps led to fairly frequent outages, especially on windy days.
- Power outages: hosting companies have backup power; most homes do not. The power from the electric company went out at least several times while I was hosting, and I also had to cut it whenever I worked on something electrical. My server would stay on because it was a laptop, but the router wouldn’t. A UPS is a reasonably priced option for reducing or eliminating this problem, though.
- Computer / router issues, updates, etc.: any reboot, shutdown, or stopped server daemon, whether for updates or to fix a problem, means your site is down. Web hosts usually have robust servers, and if they’re managing them, they’re usually very good about keeping them up, applying updates quickly, and scheduling work during off-peak times.
My idea to mitigate the performance and downtime problems would be to run a reverse proxy, such as Varnish, on a remote web host, with your DNS pointing at it. It would be configured to fetch content from your home server’s IP, and a daemon at home would contact the remote server to update that IP when it changes. Public pages would get long cache times so that they remain available if your home server goes down. The application(s) on the home server would then have to send a PURGE request whenever a page is updated. Or perhaps, if the proxy allows it, you could use whatever max-age values you want but have the proxy keep cached responses indefinitely and serve them, even past their max-age, whenever the home server can’t be reached.
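As a sketch of the purge side (assuming Varnish’s VCL has been set up to accept the PURGE method from the home server, and with hypothetical hostnames), the application might do something like this when a page changes:

```php
<?php
// Hypothetical helper: ask the remote Varnish proxy to drop its cached copy
// of a page whenever that page is updated on the home server.
function purgeRemoteCache($path)
{
    $proxyHost = 'proxy.example.com'; // hypothetical remote proxy host

    $curl = curl_init('http://' . $proxyHost . $path);
    curl_setopt($curl, CURLOPT_CUSTOMREQUEST, 'PURGE');
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    // The Host header must match what Varnish cached the page under.
    curl_setopt($curl, CURLOPT_HTTPHEADER, array('Host: example.com'));
    $result = curl_exec($curl);
    curl_close($curl);

    return false !== $result;
}

// e.g. after saving a post:
purgeRemoteCache('/posts/some-updated-post');
```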
This idea is not without its problems. For instance:
- Security of the connection between servers: if your site uses SSL, the connection between the two servers must also be over SSL, or the SSL between the client and the remote server would be virtually worthless; a man in the middle could easily eavesdrop on the traffic or divert all of it to their own server. Because the home IP address changes, the home server would have to use a self-signed certificate, possibly increasing the risk of a man-in-the-middle attack between the two servers and, at the least, requiring the remote server to accept that certificate from any IP it considers your home server.
- Non-cacheable requests would still need the home server: private pages like admin pages, as well as any mutating (POST, etc.) requests, would always have the same performance and robustness issues as the home server itself. Most importantly for many personal sites, webmention / pingback / trackback / comment submissions would fail if the home server went down, as would any other form submission. To deal with this, you’d probably have to do some programming on the remote server to have it queue these requests and return an appropriate generic response (see the sketch after this list). For admin and logged-in user activity, you could build the client side of your app to operate as you’d like in offline mode.
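A very rough, hypothetical sketch of that queueing shim (the endpoint, storage path, and response are all made up): when the remote server can’t reach home, it could write the request to disk and acknowledge it generically, replaying the queue once the home server is back.

```php
<?php
// Hypothetical fallback endpoint on the remote host. When the home server is
// unreachable, store the POST so it can be replayed later, and return a
// generic "accepted" response so the sender doesn't treat it as a failure.
if ('POST' === $_SERVER['REQUEST_METHOD']) {
    $record = array(
        'uri'      => $_SERVER['REQUEST_URI'],
        'received' => time(),
        'body'     => file_get_contents('php://input'),
    );
    file_put_contents(
        '/var/spool/site-queue/' . uniqid('req-', true) . '.json', // made-up path
        json_encode($record)
    );
    http_response_code(202); // Accepted: queued for later processing
    echo 'Received; it will be processed shortly.';
}
```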
And, as is always the case when serving from home, server and home-network configuration, security, maintenance, etc. are all on you. There isn’t really a “managed” option available: you’ll have to get everything working, apply updates, and deal with server and network problems yourself. In a home environment, security also includes physical access to the device.
I finally got PHP 7 working locally. The CLI worked just fine when I first installed it soon after its initial release, but it wasn’t working with Apache. I’ve been upgrading every once in a while and today it finally worked. Now I just have to wait until Dreamhost supports it before I can start playing with it on my own site. At work, though, I’m still stuck back in PHP 5.3 land because I need to support some old sites.
Security HTTP Headers
I’ve been working recently on the HTTP headers my site sends. I had been working on performance / cache related headers, but after seeing mention of a security header scanner built by Scott Helme, I decided to spend a little time implementing security-related headers on my site. I don’t really know these headers that well, so I added the headers it suggested and mostly went with the recommended values. I did read up a bit on what they mean, though, and modified the Content-Security-Policy as I saw fit.
I added most of the headers using a Symfony response event listener. This handles all of my HTML responses without sending the headers for other responses, where they aren’t necessary. The exception is X-Content-Type-Options, which should be set for all responses; I set that one in Apache configuration.
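As a rough sketch of that kind of listener for a Symfony 2.x/3.x-era app (the class and the header values are illustrative, not necessarily exactly what my site sends):

```php
<?php

use Symfony\Component\HttpKernel\Event\FilterResponseEvent;

// Listens on the kernel.response event (registered as a service tagged with
// kernel.event_listener) and adds security headers to HTML responses only.
class SecurityHeadersListener
{
    public function onKernelResponse(FilterResponseEvent $event)
    {
        if (!$event->isMasterRequest()) {
            return; // only decorate the outermost response
        }

        $response = $event->getResponse();

        // Skip non-HTML responses. Symfony's default Content-Type is
        // text/html, so treat an unset header as HTML.
        $contentType = $response->headers->get('Content-Type', 'text/html');
        if (false === strpos($contentType, 'text/html')) {
            return;
        }

        // Example values; the real policy should be tuned per site.
        $response->headers->set('X-Frame-Options', 'SAMEORIGIN');
        $response->headers->set('X-XSS-Protection', '1; mode=block');
        $response->headers->set('Content-Security-Policy', "default-src 'self'");
    }
}
```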
I don’t know why I didn’t realize this before, but git project versions can be managed just with tags rather than creating a branch for each point version; Packagist can go entirely by tags. I had been creating point-version branches because Symfony does, but that’s really only needed if you have to keep updating a previous version, which is overkill for small, one-person projects. And with a tag available, it wouldn’t be hard to create a branch from it later anyway if needed.
Ooh, SymfonyStyle looks like it’ll make IO a lot easier for Symfony Console (CLI) apps.
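For instance, a minimal sketch (the command and its messages are made up) of using it inside a console command:

```php
<?php

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Console\Style\SymfonyStyle;

// A made-up command, just to show a few of the SymfonyStyle helpers.
class ImportCommand extends Command
{
    protected function configure()
    {
        $this->setName('app:import');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $io = new SymfonyStyle($input, $output);

        $io->title('Feed import');
        $name = $io->ask('Which feed should be imported?', 'all');

        $io->progressStart(3);
        foreach (array('fetch', 'parse', 'store') as $step) {
            // ... do the work for $step ...
            $io->progressAdvance();
        }
        $io->progressFinish();

        $io->success(sprintf('Imported "%s".', $name));
    }
}
```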
Line Mode Browser, or progressive enhancement all the way back
Progressive enhancement is a development strategy meant to provide older and / or less capable browsers with a working website while providing more capable ones with a rich, full experience. It is often presented as a set of layers of support: HTML at the base, CSS added to that for styles, then JavaScript for advanced behavior. With this, it’s often posited that a well-crafted HTML experience can be used by any browser. For really old browsers from the early web, however, the modern web provides many things that can make pages difficult to read, functionality unusable, or even entire sites inaccessible.
Today, I’m going to go back as far as I reasonably can in terms of browser support, to the second web browser ever made, and the first widely supported one: Line Mode Browser. I can’t look at the first, WorldWideWeb, because it was only made for NeXTSTEP and, as far as I can tell, isn’t accessible for me to test with. Line Mode is, though: it was open-sourced by the W3C and kept available. I was able to get it through MacPorts with the ‘libwww’ package (run as www on the command line).
Line Mode was based on WorldWideWeb, and in fact had fewer features, so it is likely to exhibit any issues WorldWideWeb had, and more. I will look at some of the issues Line Mode has with modern web pages and provide some solutions that can improve the ability of even the oldest browsers to use a page.
Continue reading post "Line Mode Browser, or progressive enhancement all the way back"