In a process that took ten years, from 1986 to 1996, the Conservative government privatised energy supply in the UK and turned it into a competitive marketplace. The British public resigned themselves to a lifetime of scouring pricing leaflets and frequently changing energy suppliers in order to get the best deal. This became simpler with the introduction of comparison sites like uSwitch and nowadays most switches can be completed online with very little effort on the part of the customer.
Of course, one of the crucial reasons why this works is that nothing actually changes on your premises. Your gas and electricity are still supplied through the same meters. The actual changeover is just a flick of a switch or a turn of a tap in a distribution centre miles from your house.
I’m a member of Money Saving Expert’s Cheap Energy Club. This makes my life even easier. They know all about our energy usage and a couple of times a year I get an email from them suggesting that I could save a bit of money by switching to a different plan.
They also set up deals for their customers. They have enough clout that they can go to big energy suppliers and say “we’ll give you X,000 new customers if you can give them a good fixed deal on power”.
And that’s how I switched to British Gas in February 2016. I got a good fixed deal through the Cheap Energy Club.
The next innovation in British power supply was the recent introduction of smart meters. These are meters that can be read remotely by the suppliers, eliminating the need for meter readers. Because it’s automatic, the suppliers will read your meters far more frequently (daily, or even more often) giving customers a far better picture of their usage. You even get a little display device which communicates with the meter and gives minute by minute information about how much power you are using.
Last August I investigated getting a smart meter through British Gas. They came and fitted it and everything seemed to work well. All was well with the world.
Then, a couple of months ago, British Gas announced massive price hikes. This didn’t bother me at the time as I was on a fixed deal. But that deal was going to end in October – at which point my electricity was going to get very expensive.
A week or so later, I got an email from the Cheap Energy Club telling me what I already knew. But also suggesting a few alternative plans. I glanced through them and agreed with their suggestion of a fixed plan with Ovo. My power would go up in price – but by nowhere near as much as it would with British Gas. I clicked the relevant buttons and the switchover started.
Ovo started supplying my power this week and sent me an email asking for initial meter readings. I contacted them on Twitter, pointing out that I had smart meters, so there was no need for me to send them manual readings.
Their first reply was vaguely encouraging:
Welcome to OVO, Dave. As we're in the process of taking over the smart meters so that it sends us your readings, I'd submit them yourself
— OVO Energy (@OVOEnergy) October 4, 2017
But actually, that turned out to be untrue. The truth is that there are (currently) two versions of the smart meter system. Everyone who has had a smart meter installed up until now has been given a system called SMETS1. And SMETS1 meters can only be read remotely by the company who installed them. There’s a new version called SMETS2 which will be rolled out soon, which allows all companies to read the same meters. And there will be a SMETS1 upgrade at some point (starting late 2018 is the best estimate I’ve been able to get) which will bring the same feature to the older meters (and by “older”, I mean the ones that have been installed everywhere).
Of course, the SMETS1 meters can be used to supply power to customers of any company – but only as dumb meters which the customers have to read manually. And, yes, I know this is very much a first world problem, but it would be nice if technology actually moved us forward!
I see this very much as a failure of regulation. The government have been in a real hurry to get all households in the UK on smart meters. At one point they wanted us all switched over by 2020. I understand that target has now been softened so that every household must be offered a new meter by 2020. But it seems that somewhere in the rush to make the meters available, the most obvious requirements have been dropped.
The power companies keep this all very quiet. The market for power supply in the UK isn’t growing particularly quickly, so they’re all desperate to grab each other’s customers. And they won’t tell us anything that would make us think twice about switching supplier.
Ovo will come out and fit new smart meters for me. And (like the original British Gas installation) it will be “free”. Of course, they aren’t giving anything away and customers are paying for these “free” installations in their power costs. It would be interesting to see how many households have had multiple smart meter installations.
Of course, if you’re switching to save money (as most of us are), then I’m not suggesting that you shouldn’t switch if your smart meters will no longer be smart. But I’d suggest asking your new supplier if they can use your previous supplier’s smart meters. And making a loud “tut” sound when they say they can’t.
And when you’re offered new smart meters, don’t get them installed unless they are SMETS2.
Last Friday, I was in Brighton for the Brighton SEO conference. It was quite a change for me. I’ve been going to technical conferences for about twenty years or so, but the ones I go to tend to be rather grass-roots affairs like YAPC or Opentech. Even big conferences like FOSDEM have a very grass-roots feel to them.
Brighton SEO is different. Brighton SEO is a huge conference and there is obviously a lot of money sloshing around in the SEO industry. I’ve been to big technical conferences like OSCON, but tickets for conferences like that are expensive. Brighton SEO is free for most attendees. They must have lots of very generous sponsors.
The conference took place at the Brighton Centre. The people I was staying with in Brighton asked how much of the centre the conference took up. Turns out the answer was “all of it”. Not bad for a conference that started out as a few friends meeting in a pub just a few years ago.
The conference day is broken up into four sessions. It was easy enough to choose sessions that sounded useful to me. I’ve only really been looking into SEO since the start of the year and I’m more interested in the technical side of SEO. I don’t have much time for things like content marketing and keyword tracking (although I’m sure they have their place).
In the first session, Emily Grossman talked about Progressive Web Apps – which are basically web sites bundled up to look like smartphone apps. I plan to try this out with a couple of my sites soon.
The final talk in this session was David Lockie on Using Open Source Software to Speed Up Your Roadmap. I’ve used pretty much nothing but open source software for the last thirty years so I needed no convincing that he was advocating a good approach.
A quick coffee break and then the second session started. I chose a session on Onsite SEO. I was amused to see that even after only eight months of working on SEO, I could pick a session that was too basic for me.
The session started with a talk from Chloé Bodard.

Chloé was followed by a talk from Sébastien Monnier.
The final talk in the session was Aysun Akarsu with On the Road to HTTPS Worldwide. This was a good talk, but it would have been far more useful to me before we moved ZPG’s three major web sites to https earlier this year.
It was then lunch and with some ZPG colleagues I wandered off to sample some of Brighton’s excellent food.
For the first session in the afternoon, I chose three talks on Technical SEO. We started with Peter Nikolow with Quick and Dirty Server-Side Hacks to Improve Your SEO. To be honest, I think Peter misjudged his audience. I was following the conference hashtag on Twitter and there were a lot of people saying that his talk was going over their head. It didn’t go over my head, but I thought that some of his server-side knowledge looked a little dated.
Then there was Dominic Woodman with a talk entitled Advanced Site Architecture – Testing architecture & keyword/page groupings. There was a lot of good stuff in this talk and I need to go back over the slides in a lot more detail.
The session ended with Dawn Anderson talking about Generational Cruft in SEO – There is Never a ‘New Site’ When There’s History. A lot of this talk rang very true for me. In fact just the week before, I had been configuring a web site to return 410 responses when Google and Bing came looking for XML sitemaps that had been switched off two years ago.
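Serving those 410s doesn’t take much code, by the way. As a sketch (the sitemap paths here are made up, and a PSGI app is just one way to do it – not necessarily how I configured mine), you could answer 410 Gone for retired sitemap URLs like this:

```perl
use strict;
use warnings;

# A minimal PSGI app: answer "410 Gone" for sitemap URLs that
# were switched off long ago, and "404 Not Found" for anything else.
# (The paths are hypothetical examples.)
my %gone = map { $_ => 1 } ('/sitemap.xml', '/sitemap-news.xml');

my $app = sub {
    my ($env) = @_;
    if ( $gone{ $env->{PATH_INFO} } ) {
        return [ 410, [ 'Content-Type' => 'text/plain' ], ['Gone'] ];
    }
    return [ 404, [ 'Content-Type' => 'text/plain' ], ['Not Found'] ];
};
```

A 410 tells crawlers the resource is deliberately gone, which search engines treat as a stronger hint than a 404 to stop asking.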
For the fourth and final session, I chose the talks on Crawl and Indexation. This session began with Chris Green giving a talk called Robots: X, Meta & TXT – The Snog, Marry & Avoid of the Web Crawling World. The title was slightly cringe-making, but there was some good content about using the right tools to ensure that pages you don’t want crawled don’t end up in Google’s index.
I think I was getting tired by this point. I confess that I don’t remember much about François Goube’s How to Optimise Your Crawl Budget. I’m sure it was full of good stuff.
There was no chance of dozing off during Cindy Krum’s closing talk Understanding the Impact of Mobile-First Indexing (the link goes to the slides for a slightly older version of the talk). This was a real wake-up call about how Google’s indexing will change over the next few years.
I had a great time at my first Brighton SEO. I wonder how much of that is down to the fact that for probably the first time this millennium I was at a conference and not giving a talk. But I’m already thinking about a talk for the next Brighton SEO conference.
Many thanks to all of the organisers and speakers. I will be back.
If you read yesterday’s post about my Mail Rail trip, you’ll remember that my slight quibble with the experience was that there weren’t any maps showing the route that the tour takes.
Well, I’ve found one. And I think it explains why they don’t shout about the route.
I was Googling for any maps of the whole Mail Rail system when I came across this blog post from 2013 where John Bull examined the documents that made up the planning request that the British Postal Museum and Archive had submitted to Islington Council. For real document buffs, the blog post included a link to the original planning request.
But, for me, the interesting part is the diagram I’ve included at the top of this post. It’s a map of the intended route. And it ties in well with the tour I took on Saturday, so I’m going to assume there were no changes in the four years between the planning request and the exhibit opening.
The Mail Rail exhibit is the coloured sections. The Postal Museum is on the other side of the road in Calthorpe House. The bit in green is the entrance hall and gift shop and the blue bit is where you queue and board the train.
And the pink shows the route that the train takes. You can see it doesn’t go very far. In fact, it doesn’t make it out of the Mount Pleasant complex. It goes from the depot, takes a sharp turn to the right and pulls into the south-east Mount Pleasant platform. That’s where you see the first multi-media presentation. Once it pulls out of that station, the train comes off of the main tracks and takes a maintenance loop which brings it back into the same station but on the north-west platform where it stops for the second multi-media presentation. After that, it returns to the depot where the passengers alight.
So, all-in-all, you don’t get to see much of the system at all. I knew that you wouldn’t go far, but I’m a little surprised that you don’t get any further than Mount Pleasant station. And that, I expect, is why they don’t publicise the route.
To be clear, I still think it’s well worth a visit. And it’s great to see such an interesting part of London’s communication infrastructure open to the public.
But I really hope that in the future, more of the system can be opened up – even if it’s just for occasional trips for enthusiasts. I know I’d be first in line for a ticket.
I rode the Mail Rail yesterday. It was very exciting. More about that in a minute. Before that, I went to the Postal Museum.
I’ve often thought that the UK needed a museum about the Post Office. And the new (well, newish – it’s been open a couple of months) Postal Museum is a really good start.
Most of the museum is a pretty standard chronological look at the postal service in the UK. There are exhibits telling the story of the service from its earliest incarnation five hundred years ago. It’s interesting and the displays are well-designed but I couldn’t help thinking it was all a bit simplified. There were many places where I would have welcomed a deeper investigation. Mind you, I find myself thinking that in many modern museums, so perhaps the problem is with me.
Towards the end of the museum is a small cinema area where they show various short films associated with the Post Office (yes, this includes Night Mail). I could have sat there watching all of them – but I didn’t have the time. And I think they missed a trick by not selling a DVD of the films in the gift shop.
The Postal Museum is well worth a visit. It’s not as big as I thought it would be. We went round it all in about 45 minutes.
But the reason I left it a couple of months to visit the Postal Museum was because it was only this weekend that the other nearby attraction, the Mail Rail, finally opened to the public.
The Mail Rail is an underground railway system which, between 1927 and 2003, was used to transport post around London. I remember hearing about it soon after I first moved to London and I’ve been fascinated by it ever since.
And last week it opened as a visitor attraction. New carriages have been installed which are (only just) more comfortable for people to sit in and you can take a 20 minute guided tour of the line. Well, it’s 20 minutes if you include the time the train is sitting at the platform as you all board.
I enjoyed the ride. To be honest, I would have been happy just riding around the tunnels for 20 minutes, but there are a couple of points where you stop and are shown a multi-media presentation about the system and the postal service. A lot of time and money has been spent on them and they were really enjoyable (if not particularly informative).
As you leave the platform at the end of your ride, you pass though an interesting exhibition on the history of the system.
If I had one suggestion for improvement, I would like to have seen a map of the system with the bits that the tour covers marked. I suspect that you don’t actually get out of the bits of the system under Mount Pleasant sorting office. [Update: I found a map. See here for details.]
I recommend a visit. I’ll be returning at some point in the future to see it again.
Here’s a video I took of my tour.
I have a few ideas for static web sites that I want to build. And, currently, the best place to host static web sites is, in my opinion, GitHub Pages.
And if you’re hosting a site on GitHub Pages, everyone knows that the best tool to use is Jekyll. Or is it?
I’ve tried to use Jekyll a couple of times and it just confused me. Something about the way it works just doesn’t fit into my head in some way. I’m not sure what it is, but every time I change something, it all breaks completely. I’m sure the problem is with me rather than the software. Everyone else seems to get on with it just fine.
So, anyway, when faced with a problem like that I did what any self-respecting geek would do. I wrote my own tool to solve the problem.
And, of course, I wrote it in Perl (because that’s what I know best) and I used the Template Toolkit (because, well, why wouldn’t I). And because I wrote it to reflect the way that I think about building static web sites, I understand how it works.
To be honest, how it works is pretty simple so far. It takes a bunch of files in an input directory, processes them using the Template Toolkit and writes them into a mirror directory structure under an output directory. So far, not so different to ttree (the tool that comes with the Template Toolkit), but there’s one little improvement that I’m finding very useful.
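The directory-mirroring core of that idea can be sketched in plain Perl. This isn’t the real tool’s code – the real thing feeds each file through the Template Toolkit, and the sub name and callback here are my own invention – but it shows the shape of it:

```perl
use strict;
use warnings;
use File::Find;
use File::Path qw(make_path);
use File::Basename qw(dirname);

# Walk everything under $input, run each file's contents through
# the $process callback, and write the result to the same relative
# path under $output (creating directories as needed).
sub build_site {
    my ($input, $output, $process) = @_;
    find({
        no_chdir => 1,
        wanted   => sub {
            my $file = $File::Find::name;
            return unless -f $file;
            (my $rel = $file) =~ s{^\Q$input\E/?}{};
            my $dest = "$output/$rel";
            make_path(dirname($dest));
            open my $in, '<', $file or die "Can't read $file: $!";
            my $content = do { local $/; <$in> };
            open my $out, '>', $dest or die "Can't write $dest: $!";
            print {$out} $process->($content);
        },
    }, $input);
}
```

In the real tool the callback is where the Template Toolkit does its work; here it could be anything that takes text in and gives text out.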
I like writing text using Markdown. And I thought that it would be great to write text in Markdown, but have it pre-processed to HTML before passing it through the Template Toolkit. A couple of months ago I released Template::Provider::Pandoc which does just that (actually, it does a lot more than that – it will convert between any two text formats that are supported by Pandoc).
And my new site builder software uses Template::Provider::Pandoc to process all of the templates in the site. You don’t really want to be using Markdown for the main layout of your site – Markdown is rubbish for building navbars, footers or image carousels – but when I have a large amount of text, I can [% INCLUDE %] a template which includes that text in Markdown, knowing that it will be converted to HTML before being included in the page.
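As an illustration (the file names here are made up), a layout template might keep the chrome in HTML but pull the body text from a Markdown file, relying on the provider to convert it to HTML before it’s included:

```
<nav>
  <a href="/">Home</a>
</nav>
<article>
[% INCLUDE 'about.md' %]
</article>
```

The navigation stays as plain HTML, while the long-form text lives in about.md where it’s pleasant to write.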
If you want to try it out, the best documentation, currently, is in the command line tool, aphra, that comes with the distribution.
Yes, it’s a strange name.
When I first realised I’d be writing something like Jekyll, I wanted to call it Hyde. I wanted to be able to say that it was uglier and more powerful than Jekyll. But there’s already a Python sitebuilder called that. Then I considered Utterson (he’s Henry Jekyll’s friend in the novel) but that had been taken too.
So I abandoned the idea of using the name of a character from The Strange Case of Dr Jekyll and Mr Hyde and started looking elsewhere.
I first came across Aphra Behn when I read Philip José Farmer’s Riverworld books about thirty years ago and she has stuck with me ever since. [I should point out for people who haven’t read Farmer’s books that he takes real historical characters, like Behn, and drops them into a science fiction environment.]
Behn was a British writer who wrote novels, plays and poetry in the second half of the seventeenth century. At a time when women simply didn’t do those things, it just didn’t seem to occur to her that she shouldn’t. She was a great role model to many of the great women writers of the following centuries.
Oh, and she was a spy too, during the Second Anglo-Dutch War.
All in all, she was an inspirational woman who deserves wider recognition. And I hope that, in some small way, my software will raise her profile.
So now I have my tool, it’s time to start creating the web sites that I wanted. I hope to have some news on those for you in a few weeks.
Or, perhaps, I’ll get bogged down creating a web site for Aphra. I’ve just registered a domain name…