twitter

@Amal1a_ @OvidPerl It was videoed, so I expect you'll be able to watch it all online before too long.

twitter

Excellent keynote by @OvidPerl. Really positive and with some concrete ideas to take the Perl community forward.

twitter

First day of @yapceu lightning talks finished. Tried to encourage people to help answer questions on Perl web sites.

twitter

RT @clujpm: Fantastic day here at the @clujpm HQ, we're celebrating YAPC::Europe 2016 coming to @Cluj! Woohoo! Thank you @yapceu! http://t.…

twitter

From a few days ago, here's me talking to Charles Darwin at the Granada Science Park - https://t.co/e6ePnGYKjO

perl hacks

Beginners Perl Tutorial

A few weeks ago I got an interesting email from someone at Udemy. They were looking for someone to write a beginners Perl tutorial that they would make available for free on their web site. I think I wasn’t the only person that they got in touch with but, after a brief email conversation, they asked me to go ahead and write it.

It turned out to be harder than I thought it would be. I expected that I could write about 6,000 words over a weekend. In the end it took two weekends and it stretched to over 8,000 words. The problem is not in the writing, it’s in deciding what to omit. I’m sure that if you read it you’ll find absolutely essential topics that I haven’t included – but I wonder what you would have dropped to make room for them.

But eventually I finished it, delivered it to them (along with an invoice – hurrah!) and waited to hear that they had published it.

Yesterday I heard that it was online. Not from Udemy (they had forgotten to tell me that it was published two weeks ago) but from a friend.

Unfortunately, some gremlins had crept in at some point during their publication pipeline. Some weird character substitutions had taken place (which had disastrous consequences for some of the Perl code examples) and a large number of paragraph breaks had vanished. But I reported those all to Udemy yesterday and I see they have all been fixed overnight.

So finally I can share the tutorial with you. Please feel free to share it with people who might find it useful.

Although it’s 8,000 words long, it really only scratches the surface of the language. Udemy have added a link to one of their existing Perl courses, but unfortunately it’s not a very good Perl course (Udemy don’t seem to have any very good Perl courses). I understand why they have done that (that is, after all, the whole point of commissioning this tutorial – to drive more people to pay for Perl courses on Udemy) but it’s a shame that there isn’t anything of higher quality available.

So there’s an obvious hole in Udemy’s offerings. They don’t have a high quality Perl course. That might be a hole that I try to fill when I next get some free time.

Unless any other Perl trainers want to beat me to it.

Oh, and please let me know what you think of the tutorial.

The post Beginners Perl Tutorial appeared first on Perl Hacks.

books read

A Dance With Dragons: Part 1 Dreams and Dust (A Song of Ice and Fire, Book 5)

A Dance With Dragons: Part 1 Dreams and Dust (A Song of Ice and Fire, Book 5)
author: George R.R. Martin
name: David
average rating: 4.40
book published: 2011
rating: 0
read at:
date added: 2015/08/18
shelves: currently-reading
review:

slideshare

Conference Driven Publishing


A talk I gave at the London Perl Mongers Technical Meeting on 13th August 2015
davblog

Writing Books (The Easy Bit)

Last night I spoke at a London Perl Mongers meeting. As part of the talk I spoke about a toolchain that I have been using for creating ebooks. In this article I’ll go into a little more detail about the process.

Basically, we’re talking about a process that takes one or more files in some input format and (as easily as possible) turns them into one or more output formats which can be described as “ebooks”. So before we can choose the tools we need, we should decide what those various file formats should be.

For my input format I chose Markdown. This is a text-based format that has become popular amongst geeks over the last few years. Geeks tend to like text-based formats more than the proprietary binary formats like those produced by word processors. This is for a number of reasons. You can read them without any specialised tools. You’re not tied down to using specific tools to create them. And it’s generally easier to store them in a revision control system like Git (and share them on a site like GitHub).

For my output formats, I wanted EPUB and Mobipocket. EPUB is the generally accepted standard for ebooks and Mobipocket is the ebook format that Amazon use. And I also wanted to produce PDFs, just because they are easy to read on just about any platform.

(As an aside, you’ll notice that I said nothing in that previous paragraph about DRM. That’s simply because nice people don’t do that.)

Ok, so we know what file formats we’ll be working with. Now we need to know a) how we create the input format and b) how we convert between the various formats. Creating the Markdown files is easy enough. It’s just a text file, so any text editor would do the job (it would be interesting to find out if any word processor can be made to save text as Markdown).

To convert our Markdown into EPUB, we’ll need a new tool. Pandoc describes itself as “a universal document converter”. It’s not quite universal (otherwise that would be the only tool that we would need), but it is certainly great for this job. Once you have installed Pandoc, the conversion is simple:

pandoc -o your_book.epub title.txt your_book.md --epub-metadata=metadata.xml --toc --toc-depth=2

There are two extra files you need here (I’m not sure why it can’t all be in the same file, but that’s just the way it seems to be). The first (which I’ve called “title.txt”) contains two lines. The first line has the title of your book and the second has the author’s name. Each line needs to start with a “%” character. So it might look like this:

% Your title
% Your name

The second file (which I’ve called “metadata.xml”) contains various pieces of information about the book. It’s (ew!) XML and looks like this:

<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
<dc:title id="main">Your Title</dc:title>
<meta refines="#main" property="title-type">main</meta>
<dc:language>en-GB</dc:language>
<dc:creator opf:file-as="Surname, Forename" opf:role="aut">Forename Surname</dc:creator>
<dc:publisher>Your name</dc:publisher>
<dc:date opf:event="publication">2015-08-14</dc:date>
<dc:rights>Copyright ©2015 by Your Name</dc:rights>
</metadata>

So after creating those files and running that command, you’ll have an EPUB file. Next we want to convert that to a Mobipocket file so that we can distribute our book through Amazon. Unsurprisingly, the easiest way to do that is to use a piece of software that you get from Amazon. It’s called Kindlegen and you can download it from their site. Once it is installed, the conversion is as simple as:

kindlegen your_book.epub

This will leave you with a file called “your_book.mobi” which you can upload to Amazon.

There’s one last conversion that you might need. And that’s converting the EPUB to PDF. Pandoc will make that conversion for you. But it does it using a piece of software called LaTeX which I’ve never had much luck with. So I looked for an alternative solution and found it in Calibre. Calibre is mainly an ebook management tool, but it also converts between many ebook formats. It’s pretty famous for having a really complex user interface but, luckily for us, there’s a command line program called “ebook-convert” – which we can use.

ebook-convert your_book.epub your_book.pdf

And that’s it. We start with a Markdown file and end up with an ebook in three formats. Easy.
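
If you end up running those three commands a lot, it’s easy to wrap them up in a small driver script. Here’s a rough sketch in Perl (the file names are just the ones from the examples above, and it assumes title.txt and metadata.xml already exist). Run it with the base name of your Markdown file; it defaults to “your_book”.

#!/usr/bin/perl
use strict;
use warnings;

# Build EPUB, Mobipocket and PDF versions of the book from its Markdown source.
my $base = shift // 'your_book';

system('pandoc', '-o', "$base.epub", 'title.txt', "$base.md",
       '--epub-metadata=metadata.xml', '--toc', '--toc-depth=2') == 0
    or die "pandoc failed: $?\n";

# kindlegen uses a non-zero exit status for warnings as well as errors,
# so just warn here rather than dying.
system('kindlegen', "$base.epub") == 0
    or warn "kindlegen exited with status $?\n";

system('ebook-convert', "$base.epub", "$base.pdf") == 0
    or die "ebook-convert failed: $?\n";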

Of course, that really is the easy part. There’s a bit that comes before (actually writing the book) and a bit that comes after (marketing the book) and they are both far harder. Last year I read a book called Author, Publisher, Entrepreneur which covered these three steps to a very useful level of detail. Their step two is rather different to mine (they use Microsoft Word if I recall correctly) but what they had to say about the other steps was very interesting. You might find it interesting if you’re thinking of writing (and self-publishing) a book.

I love the way that ebooks have democratised the publishing industry. Anyone can write and publish a book and make it available to everyone through the world’s largest book distribution web site.

So what are you waiting for? Get writing. If you find my toolchain interesting (or if you have any comments on it) then please let me know.

And let me know what you’ve written.

The post Writing Books (The Easy Bit) appeared first on Davblog.

perl hacks

Driving a Business with Perl

I’ve been a freelance programmer for over twenty years. One really important part of the job is getting paid for the work I do. Back in 1995, when I started out, there wasn’t anything like the range of accounting software that you get now and (if I recall correctly) the little that was available was all pretty expensive stuff.

At some point I thought to myself “I don’t need to buy one of these expensive systems, I’ll write something myself”. So I sat down and sketched out a database schema and wrote a few Perl programs to insert data about the work I had done and generate invoices from that data.

I don’t remember much about the early versions. I do remember coming to the conclusion that the easiest way to generate PDFs of the invoices was using LaTeX and then wasting a lot of time trying to bend LaTeX to my will. I got something that looked vaguely ok eventually, but it was always incredibly painful if I ever needed to edit it in any way. These days, I use wkhtmltopdf and my life is far easier. I understand HTML and CSS in a way that I will never understand LaTeX.
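
To give you an idea of how little code that approach needs, here’s a minimal sketch using the Template Toolkit and wkhtmltopdf. The template name and the invoice data structure are invented for illustration; they aren’t taken from my real system.

#!/usr/bin/perl
use strict;
use warnings;
use Template;

# Render the invoice as HTML with the Template Toolkit, then hand the
# HTML file to wkhtmltopdf to produce the PDF.
my %invoice = (
    number => 42,
    client => 'Example Client Ltd',
    lines  => [ { desc => 'Consultancy', days => 5, rate => 500 } ],
);

my $tt = Template->new or die Template->error;
$tt->process('invoice.tt', \%invoice, 'invoice.html')
    or die $tt->error;

system('wkhtmltopdf', 'invoice.html', 'invoice.pdf') == 0
    or die "wkhtmltopdf failed: $?\n";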

Why am I telling you this, twenty years after I started using this code? Well, during this last week, I finally decided it was time to put the code on Github. There were two reasons for this. Firstly, I thought that it might be useful for other people. And secondly, I’m ashamed to admit that this is the first time that the code has ever been put under any kind of version control (and, yes, this is an embarrassing case of “do as I say, not as I do”). I have no excuses. The software I used to drive my business was in a few files on a single hard drive. Files that I was hacking away at with gay abandon when I thought they needed changing. I am a terrible role model.

Other than all the obvious reasons, I’m sad that it wasn’t in version control as it would have been interesting to trace the evolution of the software over the last twenty years. For example, the database access started as raw DBI, spent a brief time using Class::DBI and at some point all got moved to DBIx::Class. It’s likely that I wasn’t using the Template Toolkit when I started – but I can’t remember what I was using in its place.

Anyway, the code is there now. I don’t give any guarantees for its quality, but it does the job for me. Let me know if you find any of it interesting or useful (or, even, laughable).

p.s. An interesting side effect of putting it under (public) version control – since I uploaded it to Github I have been constantly tweaking it. The potential embarrassment of having my code available for anyone to see means that I’ve made more improvements to it in the last week than I have in the previous five years. I’m even considering replacing all the command line programs with a Dancer app.

p.p.s. I actually use FreeAgent for all my accounting these days. It’s wonderful and I highly recommend it. But I still use my own system to generate invoices.

The post Driving a Business with Perl appeared first on Perl Hacks.

davblog

Financial Account Aggregation

Three years ago, I wrote a blog post entitled Internet Security Rule One about the stupidity of sharing your passwords with anyone. I finished that post with a joke.

Look, I’ll tell you what. I’ve got a really good idea for an add-on for your online banking service. Just leave the login details in a comment below and I’ll set it up for you.

It was a joke because it was obviously ridiculous. No-one would possibly think it was a good idea to share their banking password with anyone else.

I should know not to make assumptions like that.

Yesterday I was made aware of a service called Money Dashboard. Money Dashboard aggregates all of your financial accounts so that you can see them all in one convenient place. They can then generate all sorts of interesting reports about where your money is going and can probably make intelligent suggestions about things you can do to improve your financial situation. It sounds like a great product. I’d love to have access to a system like that.

There’s one major flaw though.

In order to collect the information they need from all of your financial accounts, they need your login details for the various sites that you use. And that’s a violation of Internet Security Rule One. You should never give your passwords to anyone else – particularly not passwords that are as important as your banking password.

I would have thought that was obvious. But they have 100,000 happy users.

Of course they have a page on their site telling you exactly how securely they store your details. They use “industry-standard security practices”, their application is read-only “which means it cannot be used for withdrawals, payments or to transfer your funds”. They have “selected partners with outstanding reputations and extensive experience in security solutions”. It all sounds lovely. But it really doesn’t mean very much.

It doesn’t mean very much because at the heart of their system, they need to log on to your bank’s web site pretending to be you in order to get hold of your account information. And that means that no matter how securely they store your passwords, at some point they need to be able to retrieve them in plain text so they can use them to log on to your bank’s web site. So there must be code somewhere in their system which punches through all of that security and gets the string “pa$$word”. So in the worst case scenario, if someone compromises their servers they will be able to get access to your passwords.

If that doesn’t convince you, then here’s a simpler reason for not using the service. Sharing your passwords with anyone else is almost certainly a violation of your bank’s terms and conditions. So if someone does get your details from Money Dashboard’s system and uses that information to wreak havoc in your bank account – good luck getting any compensation.

Here, for example, are First Direct’s T&Cs about this (in section 9.1):

You must take all reasonable precautions to keep safe and prevent fraudulent use of any cards, security devices, security details (including PINs, security numbers, passwords or other details including those which allow you to use Internet Banking and Telephone Banking).

These precautions include but are not limited to all of the following, as applicable:

[snip]

  • not allowing anyone else to have or use your card or PIN or any of our security devices, security details or password(s) (including for Internet Banking and Telephone Banking) and not disclosing them to anyone, including the police, an account aggregation service that is not operated by us

Incidentally, that “not operated by us” is a nice piece of hubris. First Direct run their own account aggregation service which, of course, they trust implicitly. But they can’t possibly trust anybody else’s service.

I started talking about this on Twitter yesterday and I got this response from the @moneydashboard account. It largely ignores the security aspects and concentrates on why you shouldn’t worry about breaking your bank’s T&Cs. They seem to be campaigning to get T&Cs changed to allow explicit exclusions for sharing passwords with account aggregation services.

I think this is entirely wrong-headed. I think there is a better campaign that they should be running.

As I said above, I think that the idea of an account aggregation service is great. I would love to use something like Money Dashboard. But I’m completely unconvinced by their talk of security. They need access to your passwords in plain text. And it doesn’t matter that their application only reads your data. If someone can extract your login details from Money Dashboard’s systems then they can do whatever they want with your money.

So what’s the solution? Well I agree with one thing that Money Dashboard say in their statement:

All that you are sharing with Money Dashboard is data; data which belongs to you. You are the customer, you should be telling the bank what to do, not the other way around!

We should be able to tell our banks to share our data with third parties. But we should be able to do it in a manner that doesn’t entail giving anyone full access to our accounts. The problem is that there is only one level of access to your bank account. If you have the login details then you can do whatever you want. But what if there was a secondary set of access details – ones that could only read from the account?

If you’ve used the web much in recent years, you will have become familiar with this idea. For example, you might have wanted to give a web app access to your Twitter account. During this process you will be shown a screen (which, crucially, is hosted on Twitter’s web site, not the new app) asking if you want to grant rights to this new app. And telling you which rights you are granting (“This app wants to read your tweets.” “This app wants to tweet on your behalf.”) You can decide whether or not to grant that access.

This is called OAuth. And it’s a well-understood protocol. We need something like this for the finance industry. So that I can say to First Direct, “please allow this app to read my account details, but don’t let them change anything”. If we had something like that, then all of these problems would be solved. The Money Dashboard statement points to the Financial Data and Technology Association – perhaps they are the people to push for this change.
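
To make that a bit more concrete, here’s roughly what such an exchange might look like from the application’s side. This is only a sketch and every endpoint in it is made up; it just shows the shape of the flow.

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use JSON;

# A completely hypothetical OAuth-style exchange with a bank. None of
# these endpoints exist. The customer has already approved read-only
# access on the bank's own site and the bank has handed the app an
# authorisation code.
my $ua = LWP::UserAgent->new;

my $resp = $ua->post('https://api.example-bank.co.uk/oauth/token', {
    grant_type => 'authorization_code',
    code       => 'code-handed-back-by-the-bank',
    client_id  => 'some-aggregation-app',
    scope      => 'accounts:read',   # read-only: no payments, no transfers
});
die 'Token request failed: ', $resp->status_line, "\n"
    unless $resp->is_success;

my $token = decode_json($resp->decoded_content)->{access_token};

# From here on, the app only ever sends the token. It never sees the
# customer's banking password at all.
my $accounts = $ua->get('https://api.example-bank.co.uk/accounts',
    Authorization => "Bearer $token",
);
print $accounts->decoded_content;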

I know why Money Dashboard are doing what they are doing. And I know they aren’t the only ones doing it (Mint, for example, is a very popular service in the US). And I really, really want what they are offering. But just because a service is a really good idea doesn’t mean that you should take technical short-cuts to implement it.

I think that the “Financial OAuth” I mentioned above will come about. But the finance industry is really slow to embrace change. Perhaps the Financial Data and Technology Association will drive it. Perhaps one forward-thinking bank will implement it and other banks’ customers will start to demand it.

Another possibility is that someone somewhere will lose a lot of money through sharing their details with a system like this and governments will immediately close them all down until a safer mechanism is in place.

I firmly believe that systems like Money Dashboard are an important part of the future. I just hope that they are implemented more safely than the current generation.

 

The post Financial Account Aggregation appeared first on Davblog.

cpan

WWW-Shorten-OneShortLink-9.99

cpan

WWW-Shorten-NotLong-9.99

perl hacks

Culling My Modules

About a year ago, I dabbled briefly with Travis CI. I even gave a talk about my experiences. The plan was that I would start to use it for all of my code. But real life intervened and I never got round to getting any further with that project.

This weekend, I finally made some progress. I added a .travis.yml file to all of my Github repositories that hold CPAN modules. I even fed the details through to Coveralls so I get test coverage reports. From there it was a simple step to building a dashboard that monitors the health of all of my CPAN modules.
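
For the record, there’s nothing clever in the .travis.yml files. They’re all variations on something like this sketch (the Perl versions differ from module to module, and the Coveralls reporting step relies on Devel::Cover::Report::Coveralls):

language: perl
perl:
  - "5.20"
  - "5.18"
  - "5.16"
install:
  - cpanm --quiet --notest Devel::Cover::Report::Coveralls
  - cpanm --quiet --notest --installdeps .
script:
  - perl Makefile.PL && make
  - cover -test -report coveralls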

And it’s not a pretty picture. You’ll see a lot of grey boxes on that page, indicating that Travis couldn’t run the tests or, worse, red boxes showing that the tests failed for some reason.

Yesterday I made a few quick fixes to some of the modules (particularly in the WWW::Shorten namespace) and a couple more of them now work. But I want to work out how much effort it’s worth investing in the ones that are still failing. And, widening my scope a little, I’ve decided to take a close look at my CPAN modules and work out which ones are worth keeping and which ones I should just delete.

For example, twelve years ago I was really excited about the idea of AudioFile::Info. Most people were ripping music to MP3s, but I wasn’t following the crowd and was using Ogg Vorbis instead. AudioFile::Info and its friends were an attempt to make it easy to extract information from audio files no matter which format they were in. I suppose it was a kind of DBI for ID3 tags. But twelve years on, does anyone really care about that any more? I switched all of my music collection to MP3 years ago. If I recall correctly, the AudioFile::Info modules use a convoluted hand-crafted plugin system which never worked as well as it should. I could probably switch them to use some kind of plugin architecture from CPAN. But is it worth the effort?

Then there is Guardian::OpenPlatform::API – a Perl wrapper around the Guardian’s API. I believe they changed the API end-point several years ago so the module doesn’t even work. But the fact that I’ve had no complaints about that probably indicates that no-one has ever used it.

It’s a similar story for Net::Backpack. To be honest, I have no idea whether or not it still works. Is Backpack still running? Ok, I’ve just checked and they’re no longer offering it to new customers. But if I’m not a paying customer is there any way I can test that it still works?

Finally, there is the WWW::Shorten family of modules. I released a module called WWW::MakeAShorterLink back in 2002, but it was Iain Truskett who realised that there should be a family of modules around the (at the time new) URL-shortening industry. I took over the module when Iain passed away and I’ve been maintaining it ever since. But it’s a real pain to maintain. The URL-shortening industry changes really quickly. For a long time, new services were popping up all of the time (and many of them closed down just as quickly). I haven’t been anywhere near quick enough at releasing versions that keep up with all the changes. I suspect that at least a couple of the current test failures are down to services that have closed down. I should probably investigate those over the next few days.
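
For anyone who hasn’t come across the modules, the interface is tiny: you pick a service and get a pair of functions. Something like this (using TinyURL as the example service):

#!/usr/bin/perl
use strict;
use warnings;
use WWW::Shorten 'TinyURL';

# Shorten a URL, then expand the short version again.
my $url   = 'https://perlhacks.com/';
my $short = makeashorterlink($url);
my $long  = makealongerlink($short);

print "$url\n -> $short\n -> $long\n";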

I don’t think WWW::Shorten is in any danger of going away (but I need to find a better way to keep abreast of changes in the industry) but the other modules I’ve mentioned here (AudioFile::Info::*, Guardian::OpenPlatform::API and Net::Backpack) are on borrowed time. If you’re using them and you’d like to see new versions of them in the future then let me know. If you’d like to take over maintenance, then that would be even better.

If I don’t hear from anyone (and I strongly suspect that I won’t) then I’ll be removing them from CPAN in a couple of months’ time.

The post Culling My Modules appeared first on Perl Hacks.

cpan

WWW-Shorten-Shorl-1.93

cpan

WWW-Shorten-SnipURL-2.01

perl hacks

Mailing Lists

Over the years I’ve set up a few mailing lists for the discussion of various projects I’ve been involved with. There’s always an expectation that mailing lists will flourish without much input from me. But it never works out like that.

The truth is that most mailing lists just quietly die. And, in many cases, they end up attracting a lot of spam – which the owner of the list has to check on a semi-regular basis on the off-chance that there’s something interesting or useful in amongst the crap. There never is.

So I’ve decided to close a few mailing lists that didn’t seem to be going anywhere. I don’t suppose anyone will miss them, but I’ve taken a copy of the archives and I may do something with them at some point in the future.

The lists that I have removed are:

A couple of these lists have received slightly special treatment. The xml-feed list is advertised as the support email address for XML::Feed. I’ve redirected that address so that mail now comes to me. Hopefully my spam filters will ensure that I’m not overrun with spam from it before I work out a more permanent solution.

The other list that has been treated differently is the training-news one. That was set up so that people could get information about upcoming training courses that I would be running. I still think that’s useful, so I’ve replaced it with a new list (run by MailChimp). If you’re interested in keeping in touch with what I’m doing then please sign up to the new list by entering your email address below. (The same form will now appear in the sidebar on every page of this site.)


Sign up here for occasional email about stuff I'm doing with Perl, information about upcoming talks and training courses and other updates.

(I promise not to spam you.)


So, there you are. I’ve removed a few moribund mailing lists. I hope that hasn’t ruined anyone’s day.

The post Mailing Lists appeared first on Perl Hacks.

davblog

Opentech 2015

It’s three weeks since I was at this year’s Opentech conference and I haven’t written my now-traditional post about what I saw. So let’s put that right.

I got there rather later than expected. It was a nice day, so I decided that I would walk from Victoria station to ULU. That route took me past Buckingham Palace and up the Mall. But I hadn’t realised that the Trooping of the Colour was taking place which made it impossible to get across the Mall and into Trafalgar Square. Of course I didn’t realise that until I reached the corner of St James Park near the Admiralty Arch. A helpful policeman explained what was going on and suggested that my best bet was to go to St James Park tube station and get the underground to Embankment. This involved walking most of the way back through the park. And when I got to the tube station it was closed. So I ended up walking to Embankment.

All of which meant I arrived about forty minutes later than I wanted to and the first session was in full swing as I got there.

So what did I see?

Being Female on the Internet – Sarah Brown

This is the talk I missed most of, and it’s the one I had really wanted to see. As I arrived she was just finishing, and the audio doesn’t seem to be on the Opentech web site.

Selling ideas – Vinay Gupta

I think I didn’t concentrate on this as much as I should have. It was basically a talk about marketing – which is something that the geek community needs to get better at. Vinay illustrated his talk with examples from his Hexayurt project.

RIPA 2 – Ian Brown

Ian talked about potential changes to the Regulation of Investigatory Powers Act. It was all very scary stuff. The slides are online.

The 3rd year of Snowdenia — Caroline Wilson Palow

Caroline talked about Ed Snowden’s work and the way it is changing the world.

Privacy: I do not think that word means what you think it means — Kat Matfield

Kat has been doing research into how end users view privacy on the web. It’s clear that people are worried about their privacy but that they don’t know enough about the subject to focus their fear (and anger) on the right things.

The State of the Network Address — Bill Thompson

Bill thinks that many of the world’s woes are caused by people in power abusing the technological tools that geeks have built. And he would like us to do more to prevent them doing that.

The State of Data — Gavin Starks

Gavin works for the Open Data Institute. It’s his job to help organisations to release as much data as possible and to help the rest of us to make as much use of that data as possible. He talked about the problems that he sees in this new data-rich world.

Using data to find patterns in law — John Sheridan

John is using impressive text parsing and manipulation techniques to investigate the UK’s legislation. It sounds like a really interesting project.

Scenic environments, healthy environments? How open data offers answers to this age-old question. — Chanuki Seresinhe

The answer seems to be yes :-)

I stood as a candidate, and… — James Smith

James stood as a candidate in this year’s general election, using various geek tools to power his campaign. He talked through the story of his campaign and tried to encourage others to try the same thing in the next election.

Democracy Club — Sym Roe

The Democracy Club built a number of tools and web sites which compiled databases of information about candidates in the recent election – and then shared that data with the public. Sym explained why and how these tools were built.

The Twitter Election? — Dave Cross

This was me. I’ve already written up my talk.

Election: what’s next

This was supposed to follow my talk. Bill Thompson had some ideas to start the discussion and suggested that anyone interested retired to the bar. I put away my laptop and various other equipment and then set off to find them. But I failed, so I went home instead.

Yet another massively successful event. Thanks, as always, to all of the speakers and organisers.

The post Opentech 2015 appeared first on Davblog.

davblog

TwittElection at OpenTech

Last Saturday was OpenTech. It was as great as it always is and I’ll write more about what I saw later. But I gave a talk about TwittElection in the afternoon and I thought it might be useful to publish my slides here along with a brief summary of what I said.

TwittElection from Dave Cross

The post TwittElection at OpenTech appeared first on Davblog.

slideshare

TwittElection


A Talk from OpenTech 2015 about a tool I wrote for monitoring parliamentary candidates on Twitter during the 2015 UK general election.
perl hacks

Building TwittElection

I was asked to write a guest post for the Built In Perl blog. I wrote something about how I built my site, TwittElection, for the recent UK general election.

In the UK we have just had a general election. Over the last few weeks many web sites have sprung up to share information about the campaign and to help people decide how to vote. I have set up my own site called TwittElection and in this article I’d like to explain a little about how it works.

But why not go over to Built In Perl and read the whole thing there.

Incidentally, on 13th June, I’ll be giving a talk about TwittElection at this year’s OpenTech conference. If you’re interested in the positive impact that technology can have on society then you’ll, no doubt, find OpenTech very interesting.

The post Building TwittElection appeared first on Perl Hacks.

davblog

Quoted By The Daily Mail

This morning Tweetdeck pinged and alerted me to this tweet from a friend of mine.

He was right too. The article was about Reddit’s Button and about half-way through it, they quoted my tweet.

My reaction was predictable.

I was terribly embarrassed. Being quoted in the Daily Mail isn’t exactly great for your reputation. So I started wondering if there was anything I could do to recover the situation.

Then it came to me. The Mail were following Twitter’s display guidelines and were embedding the tweets in the web page (to be honest, that surprised me slightly – I was sure they would just take a screenshot). This meant that every time someone looked at the Mail’s article, the Mail’s site would refresh its view of the tweet from Twitter’s servers.

You can’t edit the content of tweets once they had been published. But you can change some of the material that is displayed – specifically your profile picture and your display name.

So, over lunch I took a few minutes to create a new profile picture and I changed my display name to “The Mail Lies”. And now my tweet looks how you see it above. It looks the same on the Mail article.

As I see it, this can go one of two ways. Either the Mail notice what I’ve done and remove my tweet from the article (in which case I win because I’m no longer being quoted by the Daily Mail). Or they don’t notice and my tweet is displayed on the article in its current form – well at least until I get bored and change my profile picture and display name back again.

This afternoon has been quite fun. The caper has been pretty widely shared on Twitter and Facebook and a couple of people have told me that I’ve “won the internet”.

So remember boys and girls, publishing unfiltered user-generated content on your web site is always a dangerous prospect.

The post Quoted By The Daily Mail appeared first on Davblog.

flickr

Antsiranana

Dave Cross posted a photo:

Antsiranana

flickr

Antisiranana

Dave Cross posted a photo:

Antisiranana

flickr

Stray Dog in Antisiranana

Dave Cross posted a photo:

Stray Dog in Antisiranana

flickr

Antisiranana

Dave Cross posted a photo:

Antisiranana

flickr

Antisiranana

Dave Cross posted a photo:

Antisiranana

books read

Perl by Example

Perl by Example
author: Ellie Quigley
name: David
average rating: 0.0
book published: 1994
rating: 0
read at:
date added: 2015/03/01
shelves: currently-reading
review:

cpan

Tie-Hash-Cannabinol-1.11

slideshare

Perl in the Internet of Things


My training course from the 2014 London Perl Workshop

sources

Subscribe
OPML

Powered by Perlanet