May 30th, 2015
Google’s new Photos app seems pretty great, with a consistent experience between the web and its native Android and iOS versions. The way your photos are organised is better than in Apple’s app, but the clincher is that they give you unlimited online storage if you’re willing to have them compress the originals. Given that (for me) this is just for family snaps, that is fine.
My iCloud storage has been full for weeks, and a combination of not being bothered to sort it out and not being sure I want to pay for the service (5GB feels tight, given I recently spent hundreds of pounds on a new iPhone) has led me to leave it like that. So goodbye, iCloud Photo Library.
And as it happens you can still post photos to iCloud shared libraries (which are, confusingly, separate from the iCloud Photo Library) direct from the Google Photos app.
Anyway, two days in, a couple of things are eluding me:
- A lot of people are tweeting about how impressive the facial recognition is, and the feature was demonstrated in the Google I/O keynote, but my Google Photos app (and the web version) has no mention of faces anywhere, and no apparent means of manually tagging them – despite my library being full of photos of my family. Perhaps they’re rolling the feature out incrementally.
- Google has rather cleverly tagged and grouped a load of things such as cats, cars, trains and food. However, these collections contain some notable mistakes: a photo of one of my cats sleeping has appeared in the ‘food’ set, for example. Oddly, there seems to be no way of removing a photo from a group – surely letting users correct mistakes could help train the recognition algorithm.
I’m guessing these things will be sorted out in due course, but there’s a chance I’m just missing something obvious. I’ve searched Google and Twitter but can’t find anyone else with the same problem (I mostly care about the face recognition).
May 21st, 2015
Back in 2010 Sir Tim Berners-Lee warned about the threat posed to the web by Facebook et al.
Yesterday Jeremy Keith made this timely post (thanks to @fjordaan for tweeting it) about how poorly-performing websites are fuelling the shift towards native apps. In case you missed it, Facebook – which has already created a closed content silo – recently launched Instant Articles, basically a proprietary presentation mechanism for external content that is (presumably) pre-cached to make the experience feel fast.
Rather than taking you to the external site they’re keeping you on Facebook, which is obviously good for Facebook, but you can’t argue with the fact that sometimes the user experience of external news sites is pretty terrible, so users will understandably like Instant Articles.
I won’t repeat Jeremy’s points here – go and read his post.
In a previous guise I remember arguing, on a project, against going full single-page app in favour of ‘proper’ indexable content URLs. And for keeping the number of requests on those pages to a minimum (and, yes, making those requests super speedy via minification, caching et cetera).
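As an aside, you can count a page’s requests yourself without even opening the network panel. Here’s a quick sketch using the standard Resource Timing API, runnable in any modern browser console – `initiatorType` is a real property of each entry, but the grouping is purely for illustration:

```javascript
// Count the resources the current page has loaded so far,
// via the Resource Timing API.
const resources = performance.getEntriesByType('resource');
console.log(`${resources.length + 1} requests`); // +1 for the HTML document itself

// Group the requests by what triggered them (script, img, link, ...).
const byType = {};
for (const r of resources) {
  byType[r.initiatorType] = (byType[r.initiatorType] || 0) + 1;
}
console.table(byType);
```

(The count only covers resources loaded up to the moment you run it – lazy-loaded assets will keep pushing it up.)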
This is all well-understood good practice, and yet a BuzzFeed article I just tested triggered 335 individual server requests. One of the reasons I don’t particularly like WordPress is that out of the box (and with most of the popular themes) it leads to bloated, request-heavy pages. There’s no culture of optimisation around it, yet WordPress seems more popular than ever. (Yes, this site is WordPress; it’s good at doing blogs.)

By contrast, the application I work on is deliberately a single-page app, because:
- It is only for use by logged-in users.
- It serves individual user-specific content such as their personal messages. It’s much faster to load the raw JSON data of a message than to reload an entirely new document with all its assets.
- It provides live status updates on some items.
- Our caching and local storage strategy ensures that users only load the application framework once, even though they may visit hundreds of pages within the app over the course of a week.
- And even then, our uncached page load is only 242KB (on a mobile device) and 18 requests, many of which are asynchronous.
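The cost difference behind the second and fourth points can be sketched with a toy model. The 242KB figure is reused from above; everything else here (the 2KB message size, the function names, the cache) is purely illustrative, not real code from our app:

```javascript
// Toy model: after the first full load, each "page" costs only a small
// JSON payload rather than a whole new document plus assets.
const FULL_DOCUMENT_KB = 242; // uncached first load: HTML + CSS + JS
const JSON_MESSAGE_KB = 2;    // a single message fetched as raw JSON

const cache = new Map();      // stands in for the HTTP cache / local storage

function loadPage(path) {
  let transferred = 0;
  if (!cache.has('app-shell')) {
    // First visit only: download the whole application framework.
    cache.set('app-shell', true);
    transferred += FULL_DOCUMENT_KB;
  }
  // Every visit: fetch just the data for this view.
  transferred += JSON_MESSAGE_KB;
  return transferred;
}

const firstVisit = loadPage('/messages/1');
const laterVisits = [2, 3, 4].map(n => loadPage(`/messages/${n}`));
console.log(firstVisit, laterVisits);
```

Over a week of hundreds of page views, almost every visit lands on the cheap branch – which is the whole point of the architecture.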
It’s an application, not a website; it just happens to use web technology. This is a very different use case from a public page of content such as a news article.
The web is natively great at delivering pages of text very quickly. I consider documents and applications quite separately. And I don’t think it’s contradictory to be a cheerleader for both. The trick is, I believe, not to try to make documents more application-like.
This article on A List Apart also makes some good points.
October 31st, 2014
The Problem with Facebook is well explained in this video by science communicator Derek Muller. Basically, they algorithmically filter your news feed in such a way that you probably won’t see most of what your friends post. This is contrary to what users expect to happen, but they are none the wiser, because they don’t know what they don’t see.
Of course, it’s all about the button at the heart of Facebook’s business model: the one inviting you to pay to promote your post.
Once you give them cash they’ll show your post to all your friends / followers and of course a load of other people who don’t know you too. Fine: they have to make money. I just happen to hate it because it feels dishonest to actively hide things like that.
Facebook would argue that they’re trying to make my news feed “relevant” and “manageable” – something Twitter does not do.
I’ve always greatly preferred Twitter’s follow model to Facebook’s friend model because I’m not socially obliged to follow my friends and family. I might be related to you but I’m not necessarily interested in your town’s local politics, or whatever. On Twitter it is left up to me to curate my feed by following the accounts I find interesting.
However, it’s changed. I joined Twitter early, when it felt like a close-knit little network. For ages I followed about 40 people, most of whom I knew personally (early-adopter web industry types), plus a handful of other interesting people. Posting a tweet was like putting something up on the village noticeboard: most if not all of your followers would see it. And I would see every post from the people I followed; in fact, at first I received an SMS whenever one of them tweeted. My feed was a mix of industry stuff and <=140-character witticisms.

However – grumble grumble – Stephen Fry joined and got stuck in a lift, and then it went mainstream. Soon the ‘brands’ got in on the action and it became a marketing and news platform, all about driving clicks to websites. This has driven real human users away. I’d say 80% of the people I used to connect with on Twitter no longer use it. Or if they do, they’re completely silent and passive. “Last tweet: July”.

The trouble is that now when I post a tweet it feels like I’m standing at Oxford Circus during the morning rush hour, and most of the people surrounding me in the crowd are announcing things through megaphones. If I’m lucky perhaps I’ll glimpse a familiar face, but – to continue the urbanisation analogy – most of my friends don’t come this way any more, because they find it unpleasantly busy and they’ve moved out to the country.

Evidence of this data-overload symptom is the regular appearance of ICYMI tweets. Often, re-posting something a few hours later, I’ll get comments from people I’d have hoped would see it the first time round – but by then it was a mile down their timeline.
Solutions do exist: using Twitter lists or TweetDeck, and curating your feed by unfollowing non-human accounts. Sadly, what’s left when you take away the noise is a bit of a ghost town.
For me Twitter was most interesting as a system for connecting human minds in real-time, not unlike Conjoiner technology in Alastair Reynolds’ fictional universe. That was genuinely exciting. Sadly, real-time is only usable up to a certain tweets-per-hour threshold. I don’t want to be connected in real-time to machines.
Here are two hypothetical experiments (which would, of course, be completely at odds with Twitter’s business model) that would make it a very different service – but to me a more interesting one:
- Limiting the number of people anyone can follow to 100.
- Not allowing any links or media in tweets*.
A third experiment would be the option of following accounts whose tweets appear ONLY in a list, and/or making a list your default timeline view, which would have much the same effect.
But maybe it’s too late for all that. Or maybe I’m just being a sentimental Old Web guy.
*Yes, I tweeted a link to this blog post.
September 17th, 2014
Milk is an application suite for schools and colleges, designed to be used by students, teachers and parents – at its heart, a student self-management tool. It comprises iOS and Android mobile apps for students, as well as the Milk Web Portal, which is open to all users.
I was honoured to be invited to join the Milk team at the end of December 2013 and I have been heading up its development since January. Milk is being piloted in a number of schools across the UK this term. It’s been a lot of hard work but it’s immensely satisfying to see it now coming to fruition.
To find out more about Milk visit our website.
April 23rd, 2014
Twitter finally updated my profile to the new display format – several weeks after they upgraded my cat. Here’s an almost pointless blog post about what I like and dislike about the new profile design:
- Overall appearance: Like
- Massive font size for just certain tweets apparently selected at random: Dislike
- Front-end build details, particularly the way the profile photo slides up out of the way as you scroll down, to be replaced by the compact in-nav-bar version: Like
- Pinned tweets: Dislike (because it reduces the beautiful simplicity of Twitter… but I’ll probably use it to promote something)
- Not showing replies by default: Like
- Showing non-tweet-based activity in my timeline such as who I followed: Dislike (I think).
That’s it. You don’t care. Good.
October 27th, 2013
I’m writing this in the notes app on my phone because I’ve got no signal so cannot log into my blog, or download and set up the WordPress app (which itself wouldn’t be able to connect to my blog anyway, and I can’t remember whether it works offline). Actually, worse than having no signal, I’ve got a patchy 2 bars and a GPRS connection that occasionally steps up to Edge but then drops out completely and the process starts over again. The tease.
I’m at my in-laws’ house and they don’t “have the Internet” here, and actually don’t particularly want it either. Oddly there are no neighbours’ WiFi networks in range; I just tried in the hope of finding a BT Openzone (or Fon) I could latch onto. Nope. Not even any private ones. A luddite neighbourhood?
I remember (of course) not having the Internet. Other than games, in those days PCs were more or less devoid of distractions. The 1996 equivalent of scrolling through Vines for 10 minutes was playing a game of Minesweeper on the big setting.
I remember my dad having Compuserve, and it seeming a bit boring. Then, later, I remember picking up a Freeserve disc from Dixons in Stockport and being fairly excited about getting on the World Wide Web – but not being exactly sure what the web was or whether I’d be that into it. I think I assumed it’d be a novelty for a while – I liked the idea of email – but had no idea quite how big a deal it would turn out to be. “Quite”, you might say.
By 1997 I had made my own website, complete, of course, with a visible hit counter and a Java applet ripple effect. The future.
At university in 1998 we had an internet connection at our student house in Leeds but only on one of our PCs at a time, depending on which room we ran the phone extension and modem to. “Are you going to be long on the Web? Can you give me a shout when you’re finished so I can use the phone?”
We played 2-player Warcraft II and Starcraft between the bedrooms on floors 2 and 3 by way of daisy-chaining parallel printer cables (remember LPT1?) and putting a Laplink cable on one end to reverse the gender and pin-out. I always lost.
And Unreal Tournament over a 56k V.90 modem was unseen-assailant frag hell. Though it’s impressive that it was playable at all.
None of this was that long ago. It’s remarkable how quickly we’ve come to expect connectivity to the Internet, wirelessly everywhere, such that now (trips into the wilderness aside) being offline is the exception rather than the norm.
The Internet has become a kind of magical Higgs-Field-like property that pervades the very air we breathe… Until we lose signal and the spell is broken.
“Oh for God’s sake, why won’t it just load?!”
So, keeping in mind that famous Louis C.K. clip, I won’t complain. That such a thing as a wireless Internet connection exists at all is little short of a miracle (well that and the culmination of decades of work by scientists, mathematicians and engineers).
Instead I’ll write this offline on my phone to kill some time until I’m tired enough to fall asleep. Which is, conveniently, now.
July 30th, 2013
If you work in, around and underneath websites for a living it’s likely (and indeed desirable) that you’ll become acclimatised to using a lot of technical terminology and acronyms. But it’s easy to lose touch with how much of that people outside the industry actually understand.
Jargon has its place but I’ll actively try to avoid using any terms that I think might be lost on my audience in any given situation. I’ve always hated it when ‘techies’ appear to be trying to impress people by blinding them with science, and I never want to be that guy. So when talking to clients I often find myself treading that fine line between confusing and patronising them, constantly tweaking the tech-level to ensure that they’re still with me.
This of course means I have to make some assumptions: “They’ll probably know what X is, but I’d better briefly explain Y”. But occasionally I’m way off, as happened the other day.
I recently made a site for a work acquaintance of my wife – just a small job for a local business. I knew this client was likely to be near the non-technical end of the spectrum, but I wasn’t prepared for one phone conversation in particular, and it took me a while to figure out what they were doing wrong.
The site has a very simple, stripped-back content management system that allows them to add, edit or remove products, and I’d given them printed instructions for getting into the system, which went along the lines of:
- Sign in by going to: http://www.site.com/admin-area
- Enter your username: XXXXXX
- Enter your password: YYYYYY (case sensitive)
- Click the ‘Enter’ button
And I’d done a face-to-face demo.
A week after handing over the instructions I received a message asking me to call urgently because they couldn’t get into the admin system. Digging for more specifics, I asked them what browser they were using, though judging from the ‘Umm…’ response I may as well have asked them what their subnet mask was.
OK, no problem. I think I recall they have IE9 on their laptop. That should be fine.
And then they said: “The admin page isn’t showing up, nor is the contact page actually”. That was weird. I fired up a Windows/IE9 image in VirtualBox and loaded the site. Sure enough, ‘Contact us’ was there on the menu and linked correctly to the ‘Contact us’ page. (Of course there wasn’t supposed to be a public link to the admin area, but hey, we’d get to that.)
“Really? You mean you can’t see the contact page at all?”
“No, it’s only showing home and the about us page.”
Then the penny dropped – they were going to a search engine instead of typing the address in. They were talking about which pages were showing up in the search results.
I explained that they needed to type the address into the address bar exactly as I had printed it, but ‘address bar’ was apparently unfamiliar technical jargon too.
Finally, after advising that they include “…the ‘http’ bit. Yes, with the colon, and yes the two slashes as well”, we got there and they were away.
Now I’m a big fan of the unified search-and-address bar that’s been adopted across all the major desktop browsers. But thinking about it: it appeared first in Chrome, and what Google wants is obviously more traffic to Google.com, which the unified bar surely delivers. Increasingly, people get to places via search even when they already know the URL, because Google is so fast that it’s still quicker than typing the full address. That’s fine; I do it myself.
But this has also dumbed down the user experience to the point that I fear the notion of a website’s address may – for some – never register on their jargon chart. “You just go to Google”.
It worries me that there are people who frequently use the web yet do not know what a URL is, and that there’s real confusion around what Google (et al) actually is and what it does. Near enough all of us use it, so we really ought to know – at least at some basic level – what is going on.
And – it wasn’t the point of the post, but as a side note – this ignorance affects the likelihood of certain Government schemes getting the green light. A recent Daily Mail front page (which must have been print-edition only, as I can’t find it online) declared, in intentionally loose language, words to the effect of ‘Google refuses to remove child porn from the internet’. I believe a not-insignificant number of people actually think that, because you go to Google and it shows you a list of websites, the website somehow comes back to you from or through Google when you click a result. This failure to grasp even the basics is worrying, and dangerous, given what a powerful political and social tool the web has become.