When I signed up for my Monzo bank account last year, one of the things that really excited me was the API they made available. Of course, as is so often the way with these things, my time was taken up with other things and I never really got any further than installing the Perl module that wrapped the API.
The problem is that writing code against an API takes too long. Oh, it’s generally not particularly difficult, but there’s always something that’s more complicated than you think it’s going to be.
So I was really interested to read last week that Monzo now works with IFTTT. IFTTT (“If This Then That”) is a service which removes the complexity from API programming. You basically plug services together to do something useful. I’ve dabbled with IFTTT before – I have “applets” which automatically post my Instagram photos to Twitter, change my phone’s wallpaper to NASA’s photo of the day, tell me when the ISS is overhead, things like that – so I knew this would be an easier way to do interesting things with the Monzo API, without all that tedious programming.
An IFTTT applet has two parts. There’s a “trigger” (something that tells the applet to run) and an “action” (what you want it to do). Monzo offers both triggers and actions. The triggers are mostly fired when you make a purchase with your card (optionally filtered on things like the merchant or the amount). The actions are moving money into or out of a pot (a pot in a Monzo account is a named, ring-fenced area in your account where you can put money that you want to set aside for a particular purpose).
You can use a Monzo trigger and action together (when I buy something at McDonald’s, move £5 to my “Sin Bin” pot) but more interesting things happen if you combine them with triggers and actions from other providers (move £5 into my “Treats” pot when I do a 5K run – there are dozens of providers).
I needed an example to try it out. I decided to make a Twitter Swear Box. The idea is simple. If I tweet a bad word, I move £1 from my main account into my Swear Box pot.
The action part is simple enough. Monzo provides an action to move money into a pot. You just need to give it the name of the pot and the amount to move.
The trigger part is a little harder. Twitter provides a trigger that fires whenever I tweet, but that doesn’t let me filter it to look for rude words. But there’s also a Twitter Search trigger which fires whenever a Twitter search finds a tweet which matches a particular search criterion. I used https://twitter.com/search-advanced to work out the search string to use and ended up with “fudge OR pish OR shirt from:davorg”. There’s a slight problem here – it doesn’t find other versions of the words like “fudging” or “shirty” – but this is good enough for a proof of concept.
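For comparison, matching those suffixed variants is easy with a Perl regex. This is just a local sketch – the word list comes from the post, but the matching logic is my own assumption and isn’t something the IFTTT Twitter Search trigger can do:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The three words from the search string in the post.
my @bad_words = qw(fudge pish shirt);
my $bad_re    = join '|', map { quotemeta } @bad_words;

sub is_sweary {
    my ($tweet) = @_;

    # \b anchors the start of the word; \w* allows suffixed variants
    # like "fudging" or "shirty" that the plain search string misses.
    return $tweet =~ /\b(?:$bad_re)\w*/i ? 1 : 0;
}
```

With something like this running against your own tweets (via the API wrapper, say), you could catch the variants – at the cost of exactly the tedious programming that IFTTT lets you avoid.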
Creating the applet is as simple as choosing the services you want to use, selecting the correct trigger and action, and then filling in a few (usually pretty obvious) details. Within fifteen minutes I had it up and running. I sent a tweet containing the word “fudge” and seconds later there was a pound in my Swear Box pot.
Tonight, I was at a meeting at Monzo’s offices where they talked about how they developed the IFTTT integration and what directions it might go in the future. I asked for the latitude and longitude of a transaction to be included in the details that IFTTT gets – I have a plan to plot my transactions on a map.
Monzo is the first bank to release an integration with IFTTT and it feels like we’re on the verge of something really useful here. It’ll be great to see where they take the service in the future.
It’s June, which means it’s only a couple of months until the European Perl community descends en masse on Glasgow for this year’s Perl Conference (formerly known as YAPC). For me, that also means I need to start planning the training courses I’ll be running before the conference. And for you, it means you need to start deciding which training courses you want to come to before the conference.
This year, it looks like there will be one day of training courses on the day before the main conference starts (that’s Tuesday 14th August). There are a number of courses being offered – details are in a recent conference newsletter.
I’ll be giving two half-day courses and, unusually, there will be little or no Perl content in either of them. Here are the details:
Many of us have web sites and for most web sites, success is measured by the number of visitors you get. And, in most of the western world, getting your web site to rank higher in Google’s search results is one powerful tool for bringing in more visitors.
In this half-day course, I’ll be introducing a number of simple tips that will make your site more attractive to Google which will, hopefully, improve your search ranking. If you make it easier for Google to understand the contents and structure of your site, then Google is more likely to want to send visitors to your site. (Other search engines are, of course, available but if you keep Google happy, you’ll be keeping them happy too.)
I ran a short version of this course at the London Perl Workshop last year. This version will be twice as long (and twice as detailed).
Some people seem surprised that being really good at programming isn’t the only skill you need in order to have a successful career in software development.
I’ve been working in this industry for thirty years and I like to think I’ve been pretty successful. In this half-day course, I’ll look at some of the other skills that you need in order to do well in this industry. We’ll look at a range of skills from more technical areas like source code control and devops, to softer areas like software development methodologies and just working well with others.
I ran a two-hour version of this course at a London Perl Workshop in the dim and distant past. This version will be updated and expanded.
Both courses will be taking place on the same day. I’m not sure where they will be held, but I’ll let you know as soon as I have that information. Each half-day session costs £75 and you can book places on the conference web site. Places on the courses will be limited, so I recommend booking as soon as possible.
Do these courses sound interesting? Please let me know your thoughts in the comments.
Some of you might remember the lightning talk I gave at the London Perl Workshop last year (it’s available on YouTube, I’ll wait if you want to watch it). In it, I said I planned to resurrect the Perl School brand, using it to publish Perl ebooks. One book, Perl Taster, was already available and I had plans to write and publish several more. Those plans are still ongoing…
Also in the talk, I asked if anyone else wanted to write a book for the series. I offered to help out with the hard parts of getting your text into the Amazon system (it’s actually nowhere near as hard as you might think). Three people approached me later to discuss the possibility of writing something, but only one followed through with something more concrete. That was John Davies, who has been a regular attendee at London Perl Mongers for a pretty long time. At the LPW, John had helped Martin Berends to run a training course on using Selenium with Perl. As part of that help, John had written some notes on the course which had been distributed to the attendees. John wondered if those notes would be worth publishing as a Perl School ebook. I said that it was certainly worth pursuing the idea.
Over the last few months, John has expanded his original notes considerably and I’ve been doing the work needed to convert his input into an ebook. And I’m happy to say that the book was published on Amazon yesterday. It’s called Selenium and Perl and you should be able to find it on your local Kindle store. If you want to test your Perl web applications using Selenium, then I hope that you find it useful.
This is the first time I’ve edited someone else’s work and converted it into an ebook. I think the process has gone well (though perhaps someone should ask John for his opinion!).
But I’m confident enough of the process to renew the offer I made at the LPW. If you’re interested in writing an ebook as part of the Perl School range, please get in touch and we can discuss it.
Yesterday I was at my second Brighton SEO conference. I enjoyed it every bit as much as the last one and I’m already looking forward to the next. Here are my notes about the talks I saw.
I misread the description for this. I thought it would be about clever ways to use command-line tools for SEO purposes. But, actually, it was a basic introduction to Unix command-line text processing tools for people who were previously unaware of them. I wasn’t really the target audience, but it’s always good to see a largely non-technical audience being introduced to the powerful tools that I use every day.
A good introduction to why HTTP/2 is good news for web traffic (it’s faster) and a great trucking analogy explaining what HTTP is and how HTTP/2 improves on current systems. I would have liked more technical detail, but I realise most of the audience wouldn’t.
To be honest, I was only here because it was the last talk in the session and I didn’t have time to move elsewhere. I have never worked on a site with pages that are translated into other languages, so this was of limited interest to me. But Emily certainly seemed to know her stuff and I’m sure that people who use “hreflang” would have found it very interesting and useful.
One thing bothered me slightly about the talk. A couple of times, Emily referred to developers in slightly disparaging ways. And I realised that I’ve heard similar sentiments before at SEO events. It’s like developers are people that SEO analysts are constantly battling with to get their work done. As a developer myself (and one who has spent the last year implementing SEO fixes on one of the UK’s best-known sites) I don’t really understand this attitude – it’s something I’ve never come across.
It’s annoyed me enough that I’m considering proposing a talk called “I Am Developer” to the next Brighton SEO in order to try to get to the bottom of this issue.
Fili is a former Google Search Quality Engineer, so he certainly knows his stuff. But this talk seemed a bit scattershot to me – it didn’t seem to have a particularly clear focus.
This was probably the talk I was looking forward to most. I’ve been dabbling in JSON-LD on a few sites recently and I’m keen to get deeper into it. Alexis didn’t disappoint – this was a great introduction to the subject and (unlike some other speakers) she wasn’t afraid to go deeper when it was justified.
Her first slide showed some JSON-LD and she asked us to spot the five errors in it. I’m disappointed to report that I only caught two of them.
This started well. A good crawling strategy is certainly important for auditing your site and ensuring that everything still works as expected. However, I was slightly put off by Sam’s insistence that a cloud-based crawling tool was an essential part of this strategy. Sam works for Deep Crawl who just happen to have a cloud-based crawling tool that they would love to sell you.
Conferences like this are at their best when the experts are sharing their knowledge with the audience without explicitly trying to sell their services. Sadly, this talk fell just on the wrong side of that line.
Then it was lunchtime and my colleagues and I retired just around the corner to eat far too much pizza that was supplied by the nice people at PI Datametrics.
This was really interesting. Rob says that featured snippets are on the rise and had some interesting statistics that will help you get your pages into a featured snippet. He then went on to explain how featured snippets are forming the basis of Google’s Voice Search – that is, if you ask Google Home or Google Assistant a question, the reply is very likely to be the featured snippet that you’d get in response to the same query on the Google search engine. This makes it an even better idea to aim at getting your content into featured snippets.
Sam Robson / Slides
Sam works for Future Publishing, on their Tech Radar site. He had some interesting war stories about dealing with Google algorithm changes and coming out the other side with a stronger site that is well-placed to capitalise on big technical keywords.
[I can’t find his slides online. I’ll update this post if I find them.]
This tied in really well with the other talks in the session. Jason has good ideas about how to get Google to trust your site more by convincing Google that you are the most credible source for information on the topics you cover. He also talked a lot about the machine learning that Google are currently using and where that might lead in the future.
I was at a bit of a loose end for the final session. Nothing really grabbed me. In the end I just stayed in the same room I’d been in for the previous session. I’m glad I did.
All too often, I’ve seen companies who don’t really know how to report effectively on how successfully (or otherwise!) their web sites are performing. And that’s usually because they don’t know what metrics are important or useful to them. Stephen had some good ideas about identifying the best metrics to track and ensuring that the right numbers are seen by the right people.
Having followed Stephen’s advice and chosen the metrics that you need to track, Anna can show you how to record those metrics and how to also capture other useful information. As a good example, she mentioned a client who ran an amusement park. Alongside the usual kinds of metrics, they had also been able to track the weather conditions at the time someone visited the site and had used that data to correlate ticket sales with the weather.
Anna seemed to be a big fan of Google Tag Manager which I had previously dismissed. Perhaps I need to revisit that.
Dana DiTomaso / Slides
And once you have all of your data squirrelled away in Google Analytics, you need a good tool to turn it into compelling and useful reports. Dana showed us how we could do that with Google Data Studio – another tool I need to investigate in more detail.
[I can’t find her slides online. I’ll update this post if I find them.]
Two things struck me while watching the keynote conversation between John Mueller and Aleyda Solis. Firstly, I thought that Aleyda was the wrong person to be running the session. I know that Brighton SEO tries hard not to be the usual stuffy, corporate type of conference, but I thought her over-familiar and jokey style didn’t go well in a conversation with Google’s John Mueller.
Secondly, I had a bit of an epiphany about the SEO industry. All day, I’d been watching people trying to explain how to get your site to do well in Google (other search engines are, of course, available but, honestly, who cares about them?) but they’re doing so without any real knowledge of how the Mighty God of Search really works.
Oh, sure, Google gives us tools like Google Analytics, which allows us to see how well we’re doing, and Google Search Console, which will give us clues about ways we might be doing better. But, ultimately, this whole industry is trying to understand the inner workings of a company that tells us next to nothing.
This was really obvious in the conversation with John Mueller. Pretty much every question was answered with a variation on “well, I don’t think we’d talk publicly about the details of that algorithm” or “this is controlled by a variety of factors that will change frequently, so I don’t think it’s useful to list them”.
The industry is largely stumbling about in the dark. We can apply the scientific method – we propose a hypothesis, run experiments, measure the results, adjust our hypothesis and repeat. Sometimes we might get close to a consensus on how something works. But then (and this is where SEO differs from real science) Google change their algorithms and everything we thought we knew has now changed.
Don’t get me wrong, it’s a fascinating process to watch. And, to a lesser extent, to be involved in. And there’s a lot riding on getting the results right. But in many ways, it’s all ultimately futile.
Wow, that got dark quickly! I should finish by saying that, despite what I wrote above, Brighton SEO is a great conference. If you want more people to visit your web site, you should be interested in SEO. And if you’re interested in SEO, you should be at Brighton SEO.
See you at the next one – it’s on September 28th.
There was a London Perl Mongers meeting at ZPG about ten days ago. I gave a short talk explaining why (and how) a republican like me came to be running a site about the Line of Succession to the British Throne. The meeting was great (as they always are) and I think my talk went well (I’m told the videos are imminent). The photo shows the final slide from my talk. I left it up so it was a backdrop for a number of announcements that various people gave just before the break.
In order to write my talk, I revisited the source code for my site and, in doing so, I realised that there were a couple of chunks of logic that I could (and should) carve out into separate distributions that I could put on CPAN. I’ve done that over the last couple of days and the modules are now available.
I’ve written the module as a Moo role, which means it should be usable in Moose classes too. To add JSON-LD to your class, you need to do three things:
Your class inherits two methods from the role – json_ld_data(), which returns the data structure that will be encoded into JSON (it’s provided in case you want to massage the data before encoding it), and json_ld(), which returns the actual encoded JSON in a format that’s suitable for embedding in a web page.
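To make that concrete, here’s a core-Perl sketch of the kind of output those methods produce. It deliberately doesn’t use the module itself, and the Person fields are illustrative assumptions, not the role’s actual interface:

```perl
use strict;
use warnings;
use JSON::PP;    # in core since Perl 5.14

# Illustrative only: the real role builds this structure from your
# class's attributes; these Person fields are assumptions.
sub json_ld_data {
    my (%person) = @_;
    return {
        '@context' => 'http://schema.org',
        '@type'    => 'Person',
        name       => $person{name},
        birthDate  => $person{birth_date},
    };
}

sub json_ld {
    my (%person) = @_;
    # canonical() gives a stable key order; pretty() makes it readable
    return JSON::PP->new->canonical->pretty->encode( json_ld_data(%person) );
}
```

The json_ld() output is what you’d drop into a `<script type="application/ld+json">` element in the page’s head, which is how search engines expect to find structured data.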
One of the most satisfying parts of the Line of Succession site to write was the code that shows the relationship between a person in the line and the current sovereign. Prince Charles (currently first in line) is the son of the sovereign and Tāne Lewis (currently thirtieth in line) is the first cousin twice removed of the sovereign.
That code might be useful to other people, so it’s now on CPAN as Genealogy::Relationship. To be honest, I’m not sure exactly how useful it will be. The Line of Succession is a rather specialised version of a family tree – because we’re tracing a bloodline, we’re only interested in one parent (which is unlike normal genealogy where we’d be interested in both). It also might be too closely tied to the data model I use on my site – but I have plans to fix that soon.
Currently, because of the requirements of my site, it only goes as far as third cousins (that’s people who share the same great-great-grandparents). That’s five generations. But I have an idea to build a site which shows the relationship between any two English or British monarchs back to 1066. I think that’s around forty generations – so I’ll need to expand the coverage somewhat!
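The underlying idea is straightforward to sketch in plain Perl. This is not the Genealogy::Relationship API – just my own illustration of the algorithm: walk each person’s single-parent chain up to the nearest common ancestor, then name the relationship from the two depths. (Names are assumed unique, matching the single-bloodline model described above.)

```perl
use strict;
use warnings;

# Each person is a hashref with a `name` and a single `parent` link.
sub ancestors {
    my ($p) = @_;
    my @anc;
    while ($p) { push @anc, $p; $p = $p->{parent}; }
    return @anc;    # the person first, then parent, grandparent...
}

sub relationship {
    my ( $a, $b ) = @_;
    my @aa = ancestors($a);
    my @ba = ancestors($b);
    my %depth_b = map { $ba[$_]{name} => $_ } 0 .. $#ba;

    for my $i ( 0 .. $#aa ) {
        next unless exists $depth_b{ $aa[$i]{name} };

        # $m = generations from $a up to the common ancestor; $n for $b.
        my ( $m, $n ) = ( $i, $depth_b{ $aa[$i]{name} } );
        return 'self'       if $m == 0 && $n == 0;
        return 'child'      if $m == 1 && $n == 0;
        return 'grandchild' if $m == 2 && $n == 0;
        return 'sibling'    if $m == 1 && $n == 1;
        if ( $m >= 2 && $n >= 2 ) {
            my $degree  = ( $m < $n ? $m : $n ) - 1;
            my $removed = abs( $m - $n );
            return "cousin (degree $degree, removed $removed)";
        }
        return "aunt/uncle or niece/nephew line ($m, $n)";
    }
    return 'unrelated';
}
```

In this scheme a “first cousin twice removed” is simply the pair of depths (2, 4) or (4, 2): degree min − 1 = 1, removed |m − n| = 2. Extending the coverage to forty generations is then just a matter of naming more (degree, removed) pairs.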
But anyway, it’s there and I’d be happy if you tried it and let me know whether it works for you. The documentation should explain all you need to know.
The Line of Succession site doesn’t get much traffic yet – I really need to do more marketing for it. So it’s satisfying to know that some of the code might, at least, be useful outside of the project.