A number of years ago, PG remembers reading a science fiction story about a future in which a large percentage of the population wore body cameras.
InfoTrends says people will take 1.2 trillion digital photos this year. That’s 100 billion more than last year and nearly double the number taken as recently as 2013.
In other words, photo taking is currently growing by a whopping 100 billion photos per year: each year, humanity takes 100 billion more photos than it did the year before.
I think that rate is about to accelerate. And the reason is wearable cameras.
As costs go down, quality goes up and ease of use improves (through miniaturization, better software and better batteries), wearable cameras will become more compelling.
These will arrive in the form of clip-on cameras, smartwatch cameras and cameras attached permanently or temporarily to glasses, including smart glasses.
. . . .
Just a few years ago, nobody could have predicted or imagined what’s now acceptable public behavior with a smartphone camera. People pose and posture in public for selfies without embarrassment. They take pictures of their food and drinks in restaurants. They take selfies in the bathroom mirror.
. . . .
It turns out that the location of a wearable camera makes all the difference for how it’s used.
Badge-style clip-on cameras are acceptable for “lifelogging” applications – jogging your personal memory about places you go and people you meet. But they’re horrible for “photography.” Because the physical cameras move around, sit at odd angles and aren’t directly controlled by the user (they tend to shoot photos at intervals, or take video), the pictures are universally bad, save that one odd lucky shot.
Wrist-worn cameras are best used as expedient replacements for smartphone cameras – group shots, vacation snapshots and selfies.
As Google Glass wearers learned, eyeglasses-based cameras can take amazing photos. They point the camera where the user is looking, and show a first-person, this-is-what-I-saw picture, which can be photographically compelling.
. . . .
Smartglasses will use camera electronics and lenses as much for data gathering as photography. Images and video will be processed for object and face recognition and this data will be fed back into the AR application. Looking at a table with a goldfish bowl on it, an AR app will know that a virtual kitten can stand on the table but not the bowl, and a virtual shark can swim in the bowl but not the table. In AR, cameras aren’t for photography.
Other applications will capture photos or video all day and process the footage through artificial intelligence systems to provide extremely good data on activity, behavior and environment.
Best of all, photography can be retroactive, either as photography or as data.
For example, instead of taking pictures of their food while they’re eating it, consumers can just tell their virtual assistant at the end of the day: “Post a picture of that pie I ate.” A.I. will reach into the recorded video, grab the best still shot of the pie and post it online. From a data perspective, we’ll ask that same assistant: “How many slices of pie did I eat last year?”
. . . .
The co-founder and CEO of Shonin, Sameer Hasan, told me wearable cameras will initially be focused on quality control and documentation, medical applications and security. They’ll be immediately usable for “instruction and demonstration, live entertainment and news reporting.”
Wearable cameras will enable AR to “process video information in real time and instantly provide the wearer with analysis and recommendations based on what the camera is seeing,” according to Hasan.
Link to the rest at TechConnect
As PG remembers the sci-fi story, the ubiquity of video recording devices was a great assistance to the totalitarian government that collected all the video.