cappchur, data capture app

October 31st, 2015

cappchur - customer data capture app

cappchur is a simple data capture app for tablet and mobile, aimed primarily at the exhibition, trade show and retail markets. The project is a collaboration with Paul Pike.

The app launched this week and is available for iPad, iPhone or iPod Touch for FREE in the App Store, and for Android tablets and phones on Google Play.

cappchur has been designed to be simple and intuitive without any complicated set up process. Once you’ve installed the app you can start using it straight away, with no need to register up-front. You can also use it completely offline.

If you or someone you know is running a stall or exhibiting at an event please give our app a try and let us know what you think: cappchur.com.

deepdream roundup

July 4th, 2015

If you missed deepdream in the news then go and read this article and the original research blog post, and / or look at the original gallery full screen.

Good. Now, given that Google made the software open source, there’s lots more to look at. Check out the #deepdream hashtag on Twitter.

And this Twitch channel lets you “live shout objects to dream about”.

And finally, these two videos are worth a watch:

Journey through the layers of the mind from Memo Akten on Vimeo.

Noisedive from Johan Nordberg on Vimeo.

[Edit]

Also, someone ran it on a clip from Fear and Loathing…

Google’s new Photos app seems pretty great, with a consistent experience between the web and its native Android and iOS versions. The way your photos are organised is better than in Apple’s app, but the clincher is that they give you unlimited online storage if you’re willing to have them compress the originals. Given that (for me) this is just for family snaps, that is fine.

My iCloud storage has been full for weeks; a combination of not being bothered to get round to it and not being sure I want to pay for the service (5GB feels tight, given I recently spent hundreds of pounds on a new iPhone) has led me to leave it like that. So goodbye, iCloud Photo Library.

And as it happens you can still post photos to iCloud shared libraries (which are, confusingly, separate from the iCloud Photo Library) direct from the Google Photos app.

Anyway, two days into using it, a couple of things are eluding me:

  1. A lot of people are tweeting about how impressive the facial recognition is, and the feature was demonstrated in the Google I/O keynote, but my Google Photos app (and the web version) has no mention of faces anywhere and no apparent means of manually tagging them – despite my library being full of photos of my family. Perhaps they’re rolling the feature out incrementally.
  2. Google has rather cleverly tagged and grouped a load of objects and things such as cats, cars, trains and food. However, these collections contain some notable mistakes: a photo of one of my cats sleeping has appeared in the ‘food’ set, for example. Oddly, there seems to be no way of untagging these things, even though corrections like that could presumably help its learning algorithm.

I’m guessing these things will be sorted out in due course, but there’s a chance I’m just missing something obvious. I’ve searched Google and Twitter but can’t find anyone else with the same problem (I mostly care about the face recognition).

Anyone else?

The Web vs native apps

May 21st, 2015

Back in 2010 Sir Tim Berners-Lee warned about the threat posed to the web by Facebook et al.

Yesterday Jeremy Keith made this timely post (thanks to @fjordaan for tweeting it) about how poorly-performing websites are fuelling the shift towards native apps. In case you missed it, Facebook – which has already created a closed content silo – recently launched Instant Articles, which is basically their proprietary presentation mechanism for external content that is (presumably) pre-cached to enhance the speed of the experience.

Rather than taking you to the external site they’re keeping you on Facebook, which is obviously good for Facebook. But you can’t argue with the fact that the user experience of external news sites is sometimes pretty terrible, so users will understandably like Instant Articles.

I’ll not repeat Jeremy’s points so read his post.

As an aside (from me), Jeremy makes a valid point about the rise of JavaScript frameworks being a contributing factor to the problem. I’ve long argued about the appropriateness or otherwise of single-page-application sites. The truth is that there is a time and a place for them, but they are not necessary for delivering content quickly on the web. People often lose sight of this.

In a previous guise I remember arguing against going full-single-page-app in favour of ‘proper’ indexable content URLs on a project. And for keeping the number of requests on those pages down to a minimum (and, yes, making those requests super speedy via minification, caching et cetera).
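To make the caching half of that concrete, here is a rough sketch using Express to serve a single minified, fingerprinted bundle with a long cache lifetime. Express and the file names are my assumptions for illustration; any server or CDN that sets equivalent headers achieves the same effect.

    // Hypothetical sketch: serve pre-minified, content-hashed assets with a
    // year-long cache lifetime, so repeat page views re-request almost nothing.
    var express = require('express');
    var app = express();

    // 'dist' is assumed to contain fingerprinted files such as
    // app.3f9c2a.min.js, which is what makes a long max-age safe.
    app.use('/assets', express.static('dist', { maxAge: '365d' }));

    app.listen(3000);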

This is all well understood good practice, and yet a BuzzFeed article I just tested triggered 335 individual server requests. And one of the reasons I don’t particularly like WordPress is that out of the box (and with most of the popular themes) it leads to bloated, request-heavy pages. There’s no culture of optimisation around it, yet WordPress seems more popular than ever (Yes, this site is WordPress; it’s good at doing blogs).

This all said, I have spent most of the last 18 months building a complicated AngularJS-based single-page application, Milk. However, the reasons why a JavaScript framework is appropriate for Milk are:

  1. It is only for use by logged-in users.
  2. It serves individual user-specific content such as their personal messages. It’s much faster to load the raw JSON data of a message than to reload an entirely new document with all its assets (there’s a rough sketch of this after the list).
  3. It provides live status updates on some items.
  4. Our caching and local storage strategy ensures that users only load the application framework once, even though they may visit hundreds of pages within the app over the course of a week.
  5. And even then, our uncached page load is only 242KB (on a mobile device) and 18 requests, many of which are asynchronous.
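To illustrate the second point, here is a minimal, framework-agnostic sketch (the real app is built on AngularJS, and the endpoint and element names below are made up): once the application shell has been loaded and cached, opening a message only needs a few kilobytes of JSON.

    // Hypothetical sketch: fetch one message as raw JSON and render it into
    // the already-loaded application shell, rather than requesting a whole
    // new HTML document plus all of its assets.
    function showMessage(id) {
      fetch('/api/messages/' + id, { credentials: 'include' })
        .then(function (response) { return response.json(); })
        .then(function (message) {
          // Only the message data travels over the wire; templates, styles
          // and framework code are already cached on the device.
          document.querySelector('.message-subject').textContent = message.subject;
          document.querySelector('.message-body').textContent = message.body;
        });
    }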

It’s an application, not a website; it just happens to use web technology. This is a very different use-case to a public page of content such as a news article.

The web is natively great at delivering pages of text very quickly. I consider documents and applications quite separately. And I don’t think it’s contradictory to be a cheerleader for both. The trick is, I believe, not to try to make documents more application-like.

Mind you, that ALL said… Although JavaScript frameworks are a problem in some instances, I think the real culprit in the case of the BuzzFeeds of this world is the amount of advertising and sponsored content adding bloat to their pages. If publishers had spent more time testing their sites on EDGE and 3G mobile connections maybe we’d not be in this situation where Facebook Instant Articles look set to be a hit.

[Edit]
This article on A List Apart also makes some good points.

On encryption

February 3rd, 2015

I should apologise that this blog is not (currently) served over https. It’s on my to-do list, but that list is stupidly long. (As an aside, I don’t look forward to the day when I have nothing to do. The idea of just putting my feet up is horrible. It feels like I’ve had at least 50% more things to do than time to do them since about 2007, but the upshot is that I genuinely don’t think I’ve been bored once in the last 7 years.)

Anyway, recent comments by Phil Zimmermann – the creator of email encryption software PGP – struck me as particularly (if unsurprisingly) smart. The upshot is yet another timely argument against David Cameron’s frankly embarrassing stance on end-to-end encryption: Hackers are always going to be able to get around whatever security you put up, but if your data is properly encrypted it doesn’t matter if they get access to your servers. So those Sony emails and movie scripts, for example, would never have been leaked if they’d been stored encrypted.

This article is worth a read, as is Phil’s original blog post.

In related news, BMW recently patched their ConnectedDrive software after a flaw was identified by a third party. The shocking part of the story is that prior to this patch the software was using unencrypted plain-text HTTP to send and receive data! Given that the software operates door locks (among other functions) it is mind-boggling to me that its developers didn’t choose HTTPS in the first place.

A culture of ‘encrypt by default’ needs to be instilled.
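As a small illustration of what that can mean in practice, here is a rough sketch of encrypting data before it is stored, using Node’s built-in crypto module (my own example, not related to any of the systems mentioned above): with the key held elsewhere, a copy of the stored files is useless to an attacker.

    // Hypothetical sketch: encrypt data at rest with AES-256-GCM.
    // Key management is deliberately glossed over; in reality the key must
    // live in a proper key-management system, never alongside the data.
    var crypto = require('crypto');

    function encrypt(plaintext, key) {
      var iv = crypto.randomBytes(12); // unique per item; not secret
      var cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
      var ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
      // Store the IV and authentication tag alongside the ciphertext.
      return { iv: iv, tag: cipher.getAuthTag(), data: ciphertext };
    }

    var key = crypto.randomBytes(32); // 256-bit key, held separately from the data
    var stored = encrypt('Confidential email archive', key);
    // An attacker who copies `stored` off the server, but not the key,
    // gets nothing readable.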

Bitcasa was (and, in my opinion, no longer is) a very promising cloud data storage provider – a bit like Dropbox except for two practical differences. Firstly, the Bitcasa desktop application mounts your Bitcasa drive as a network volume rather than syncing to a local folder (so it can hold more data than your hard drive). And secondly, the data is encrypted both in transit and on the server. They also offered “infinite” storage for a very reasonable fee. In principle it was great.

Rachael has been using it (on my advice) to back up her photography work (~80GB of new images per week), and now has several terabytes of TIFF and RAW files in her account. We’ve been running an automated upload process every evening and had a further 10TB to upload. The data is also on RAID hard drive units but, as it’s business critical information, a remote backup seemed sensible.

Unfortunately on 23rd October Bitcasa announced that they were discontinuing the infinite accounts and were going to be offering a 1TB or a 10TB service for $99 or $999 per annum. For those in the early pricing scheme and with over 1TB of data this amounts to a roughly tenfold increase in annual cost.

“You have between October 22, 2014 and November 15, 2014 to migrate your data”

The other key part of the announcement was that there was a 15th November deadline (just over 3 weeks) either to migrate the account or to download all data; otherwise it would be deleted. Such an unreasonably short window reeks, to me, of some corporate / financial “emergency” measure, but that’s just speculation.

Bitcasa has always felt, in my experience, a bit “beta”: uploads are much slower than with Dropbox and are very processor intensive. This is, I understand, related to the encryption processing, but generally (particularly more recently, running it on a new computer) it’s been usable. We’ve never had much reason to download files from it though.

Rachael was (grudgingly) willing to upgrade her account to the $999 10TB package in order to buy enough time to find an alternative long-term solution, but it isn’t working. More than 20 attempts to run the account upgrade process have failed with a server error. Several support tickets I raised have gone unanswered for days, except one, which was marked by them as “Solved” with a generic advice response.

Bitcasa upgrade server error

Awkward indeed… It doesn’t bode well. Maybe they’re just being swamped with user requests but it feels to me like they are going under.

We have therefore been trying to salvage critical data from the account, but the process is slow and unreliable. Despite us having (according to speedtest.net) an 80Mbps download connection, downloading 1GB from Bitcasa is taking about 2-3 hours when dragging files out of the Bitcasa drive using the Finder on the Mac (at 80Mbps, a 1GB file should take well under two minutes). And more often than not the operation fails after 40 minutes or so.

Bitcasa - Finder error

The alternative – downloading via their web app – isn’t much better. It’s faster but trying to download more than one file at a time results in a corrupted zip file. Not very practical when you’ve got a folder with hundreds of files in it. Even Bitcasa recommend avoiding it (in a support response):

“We recommend not downloading multiple files through the web portal. If one of the file(s) is damaged, it will break the entire zip file. Downloading single files from the web portal should be fine.”

However, this morning I discovered that moving files in the Terminal is much more reliable. A lot of the problems seem to be related to the Finder. It’s going to take right up to the deadline to get all of the data but it is now, finally, just about feasible.

On balance, for us, speed and reliability are more important than encryption for this use-case. So we’re moving the data to Amazon ‘Glacier’ (via S3). Uploading directly to S3 is like a dream compared to Bitcasa: the data is uploading at over 2 megabytes per second.
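For what it’s worth, here is a rough sketch of that kind of upload using the AWS SDK for Node (one of several ways to do it; the bucket name and paths below are made up). A lifecycle rule on the bucket can then transition objects to Glacier for cheap long-term storage.

    // Hypothetical sketch: stream a single file straight to S3. A bucket
    // lifecycle rule (configured separately) later moves objects to Glacier.
    var fs = require('fs');
    var AWS = require('aws-sdk');

    var s3 = new AWS.S3({ region: 'eu-west-1' });

    s3.upload({
      Bucket: 'example-photo-archive',                         // made-up bucket
      Key: '2014/week-43/IMG_0001.tif',                        // made-up key
      Body: fs.createReadStream('/Volumes/raid/IMG_0001.tif')  // made-up local path
    }, function (err, data) {
      if (err) { return console.error('Upload failed:', err); }
      console.log('Uploaded to', data.Location);
    });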

The sad thing is that we were willing to pay $999 to migrate the Bitcasa account, but technical failures and a lack of support simultaneously made it impossible to do so and destroyed any confidence we had in the system we would have been paying for.

It looks, on the face of it, like Bitcasa are moving towards being a business-to-business, API-driven service provider, but this is basically a big “fuck you” to all their existing customers. If I were one of their investors I would be less than impressed.

The trouble with Twitter

October 31st, 2014

The Problem with Facebook is well explained in this video by science communicator Derek Muller. Basically, they algorithmically filter your news feed in such a way that you probably won’t see most of what your friends post. This is contrary to what users expect to happen, but they are none the wiser because they don’t know what they don’t see.

Of course it’s all about this button, the heart of Facebook’s business model:

Facebook’s ‘Boost Post’ button

Once you give them cash they’ll show your post to all your friends / followers and of course a load of other people who don’t know you too. Fine: they have to make money. I just happen to hate it because it feels dishonest to actively hide things like that.

Facebook would argue that they’re trying to make my news feed “relevant” and “manageable”, something which Twitter does not do.

I’ve always greatly preferred Twitter’s follow model to Facebook’s friend model because I’m not socially obliged to follow my friends and family. I might be related to you but I’m not necessarily interested in your town’s local politics, or whatever. On Twitter it is left up to me to curate my feed by following the accounts I find interesting.

However, it’s changed. I joined Twitter early, when it felt like a close-knit little network. For ages I followed about 40 people, most of whom I knew personally (early-adopter web industry types), plus a handful of other interesting people. Posting a tweet was like putting something up on the village noticeboard: most if not all of your followers would see it. And I would see everything posted by the people I followed; in fact at first I received an SMS message whenever one of them tweeted. My feed was a mix of industry stuff and <= 140-character witticisms.

However—grumble grumble—Stephen Fry joined, got stuck in a lift, and then it went mainstream. Soon those ‘brand’ things got in on the action and it became a marketing and news platform, all about driving clicks to websites. This has driven real human users away. I’d say 80% of the people I used to connect with on Twitter no longer use it. Or if they do, they’re completely silent and passive. “Last tweet: July”.

The trouble is that now when I post a tweet it feels like I’m standing at Oxford Circus during the morning rush hour, and most of the people surrounding me in the crowd are announcing things through megaphones. If I’m lucky perhaps I’ll glimpse a familiar face but – to continue the urbanisation analogy – most of my friends don’t come this way any more because they find it unpleasantly busy and they’ve moved out to the country.

Evidence of this data-overload symptom is the regular appearance now of ICYMI tweets. Often, re-posting something a few hours later, I’ll get comments from a number of people who I’d have hoped would have seen it the first time round, but it was by then a mile down their timelines.

Solutions do exist: using Twitter lists or TweetDeck, and curating your following list by unfollowing non-human accounts. Sadly, what’s left when you take away the noise is a bit of a ghost town.

For me Twitter was most interesting as a system for connecting human minds in real-time, not unlike Conjoiner technology in Alastair Reynolds’ fictional universe. That was genuinely exciting. Sadly, real-time is only usable up to a certain tweets-per-hour threshold. I don’t want to be connected in real-time to machines.

Here are two hypothetical experiments (that would, of course, be completely at odds with Twitter’s business model) that would make it very different but, to me, more interesting:

  1. Limiting the number of people anyone can follow to 100.
  2. Not allowing any links or media in tweets*.

A third experiment would be the option of following things that ONLY appear in a list and / or making a list your default timeline view, which would have the same effect.

But maybe it’s too late for all that. Or maybe I’m just being a sentimental Old Web guy.

*Yes, I tweeted a link to this blog post.