garote: (weird science)
Three years. That's how long we have. AI chatbot style dialogue with integrated systems will appear on smartphones in 3 years or less. What that means is, you'll be able to take a photo of yourself and say, "Add a bear to the scene. Make it look like it's about to attack me. No, move it to my other side. Make the bear smaller."

Chatbot-derived AI will be let loose on internal codebases at tech companies, and coders will leverage it by asking it to auto-complete chunks of API, search for bugs, or pose questions like "Is there a way to optimize this?" and "What changes do I need to make so this runs with the new OS?"

And then, wind forward ten years for the computing power and capacity to catch up, until... "Show me this scene again except have the part of Fred be played the way Christopher Walken would play it."

And then ... "Here are 50 episodes of Scooby Doo. Make me fifty more, except insert a character arc where Velma and Daphne become radicalized by Boko Haram and eventually 'solve' all the mysteries through the application of torture and austere religious deprivation."

Presto, fifty hours of new content, tailored to keep stoners and children amused, but with an evil twist. And the question comes up forever and ever: "Who owns this? Do we owe the original cast anything? The original animators? The composer that wrote all the musical cues we're remixing?"

"Does Frank Welker have a right to royalties on performances that reconstruct his own personal voicing of Fred? Or is this all owned by Viacom and two fingers to everyone else? Or is this all owned by Oleg Heyoushenkovich, Russian billionaire, generating propaganda-laced remixes of cartoons and handing them out for free, backed by and protected by the Russian state?"

So that's round one of what this will look like. Round two is, instances of this will become interconnected to enforce rights management.

And copyright will be legally extended by tech and media giants to INCLUDE derivative works in a way that attempts to effectively colonize the very imaginations of all currently living people, to extract money from them for re-inventing something similar to a thing that a long-dead person did once, which the company has now absorbed and licensed and laid permanent, eternal claim to. Like Disney stretching copyright law into taffy for the sake of a dancing mouse, except covering the way a thing seems or feels, and enforced by the device running on your face.

So for now, we have about three years before it all starts going to hell.

By then, any time we call a business of any size on the phone, we will EXPECT to be talking to a robot, at least for the first minute or so of the call.

The expectation that a real human will be on the other end of the line will drop so low that employees will start experiencing "robot rage": being on the receiving end of callers who are extremely rude and nasty because they think they're talking to a robot.

As an employee, you'll pick up the phone, and one out of every ten calls, someone will say "F%&$* GET ME A ROAST F!%%# LATTE NOW, WITH SPRINKLES, AND I'M PICKING IT UP IN TEN MINUTES YOU $%*(#@ %*(ING @#$%, SO IT BETTER BE &$#% READY."

In less than three years, you'll get used to seeing AI-generated artwork in advertising, of all kinds, at all levels. It will get progressively worse even though the software will be improving, because the artists have all been fired or quit, and so there's nobody left to do clean-up artwork.

Expect similar effects on "copypasta" copywriting, which is already really low-grade, and already accounts for a lot of the drivelposts intended to resell other people's content, slathered with advertising on social media networks.

In time, the software will get better at carrying on dialogue, and will incorporate pre-existing research in physical emoting, developed by roboticists. You will slowly get used to an online world that is populated by a significant percentage of artificial people. And in a little more time, you will stop caring whether the entities you spend time with there are human. In fact, you will find things generally go easier if there is at least one artificial person involved in most of your conversations. Eventually your private conversations will be colonized this way as well.

Then after a bunch of digging around in old Asimov novels, grad theses written about World Of Warcraft and Second Life, and a churning period of what Facebook's Zuckerberg quaintly calls "PR fires" for the providers of these worlds, standards will emerge.

Like Frank Herbert's characters disavowing all technology that aims to supplant human dominance, we will hammer out ground rules for the representation of artificial people in a mixed environment.

It's hard to say what these rules will look like, but I suspect at least one of them will be, "All representations of people that provide any kind of interaction will be visually and audibly LABELED as artificial." And probably some knock-on rules about behavior, e.g., "no lifelike simulation of a human will provide interaction designed to cause psychological harm, outside of a clearly delineated entertainment setting, and without proper effort to ensure the subject of the interaction is an adult."

With rules like this in place, the corporate world will be declared "safe" enough to do things like create solid-state VR instructors that provide one-on-one teaching to children, with a degree of sensitivity and adaptability derived and refined from thousands of the finest teachers in the world.

I mean, getting a kid to understand algebra is hard. Why pay a human to do it? Just put on the dang goggles and let R. Daneel Olivaw guide them effortlessly, without ever getting angry or tired.

And all you gotta do is make them watch a few ads afterwards. (Don't try and take the goggles off beforehand. It tracks your kid's eyes. It KNOWS when they're looking, and when they're not.)

Take the ads. Right in the face. Smile like you want them. Freaking smile. Wider. Now dance the special dance that's part of the theme song. Do it so everyone around you can see. And so it becomes muscle memory, literally, and you think of the product whenever you move from now on. LIKE YOU LIKE IT, you CONSUMER. The goggles CAN TELL when you like it, better than you can. You gave the goggles the right to assemble this information about you and share it with any company that has money to pay. You gave them that right when you put them on, which is a physical action that has been interpreted by a court of law to be equivalent to "consent" to a 130-page document that's available in a locked filing cabinet in a disused lavatory back at the company HQ if you're curious enough to barge in there and read it. (But beware the leopard.)

I reckon it'll start with teachers. Instructors. Like videos telling you how to change a bike tire, but guided by interactive AI. Then "real" teachers will start assigning interaction with Microsoft-driven AIs as homework. Not for any endorsement deal or kickback, but just because it'll make their teaching life easier, because they're overworked already and paid in circus peanuts.

(Fun fact: A limited version of this is happening already. College students are feeding homework from their professors into ChatGPT and asking it to explain the assignment to them, using language easier to understand than what their professors used. No one considers this cheating, and why would they? It's just a tool for a job.)

Soon the AI will be the one administering the tests. Because seriously, how in the world would you be able to afford a personal tutor for every subject, who's gonna watch your kid write out every digit of a math problem with digital eyes, and gently guide their hand? The tech is a godsend to struggling parents from the third-world on up.

And up, to the highest echelons of the upper class, who will take thousand-dollar-an-hour "exclusive" guided instruction from AIs designed to teach them the finer graces of liberal arts.

And alongside this... We will have the military applications.

Yeah there's the psychological warfare and the propaganda and all that ... That's already here, in cruder form. Whole message boards spun up out of nothing with convincing "dialogue" between "debaters", swaying the lurker towards some opinion. Been there done that.

Where this really gets interesting is the simulation of interacting crowds. Groups of people. Predicting what an individual will do is hard. There's a lot of detail to process. A lot of random factors interfering. Predicting what a group of people will do, when you can observe them all at once - even for a short while - and apply what you see to a model... That might be easier.

That might be easy enough to sabotage markets, derail political campaigns, spin up paranoia over innocuous events, and so on, all handily below the threshold of identification, because no one speaker or source pushed it over the tipping point. It was just something you heard coming from the infotainment screen at the gas station, combined with something you saw drifting by in your social media feed, plus a comment from your personal AI assistant reading you the news before bed, and suddenly an idea occurs to you...

I gotta wonder, how will any citizenry dismantle such a powerful combination of corporate interests and national interests, aside from literally smashing it?

Folks like Jello Biafra and Michael Franti used to call out television as the drug of the nation, and a route towards sameness, mediocrity, and apathy. But this ... This is software that can get out ahead of any collective revolutionary idea, and neutralize it with passionate reasoning in the other direction, or just a well-placed joke, and it will be utterly invisible, because it will be threaded into a dozen conversations we have with various AI that we've already invited into our homes, our bedrooms, our minds. And will we care?

Don't you already feel like your life would be calmer, more orderly, even more entertaining and interesting, if you could interact with a cadre of unerringly supportive, cheerful, and eager assistants for most of the day?

Don't you feel like you would actually trust a robot MORE than a human, to provide psychotherapy, give feedback on your essay, show you how to change a tire, help you compose a difficult letter, make telephone calls on your behalf, tell you jokes without offending you, sing you the same song 20 times without getting annoyed, et cetera...?
garote: (Default)
Time for another in my little series of explorations into generative art. This is what you get when you tell Midjourney to produce an advertisement for computer hardware "in the style" of a Soviet propaganda poster:



Aside from being hilarious, it also invites a discussion of "style" in generative art. Some people could probably guess the prompt just by looking at the picture. It's not the colored pencil style used to render the thing, it's the composition, the colors, the expressions, the clothing... And it's no mistake that children are featured, since old computer ads really pushed the "help your kids get educated" angle.

I'm not sure if Midjourney incorporates actual Soviet propaganda, or just a bunch of interpretations done by artists trying to mimic the style, but let's assume that someone fed a whole stack of original posters into the training data. First question: Are those even copyrighted?

Well, after the collapse of the Soviet Union, Russian copyright law was altered to have a retroactive effect that covered works created in the Soviet era, and Russia joined the Berne Convention in 1995. So, technically, propaganda commissioned by the USSR is still copyrighted, by ... Someone. Possibly the Russian state, though that's a bit shifty because Russia was not the only territory in the USSR.

Second question: How different are these from the works they're trying to mimic?

For reference, here's the whole set Midjourney produced:



Scouting around the internet, it becomes clear that Soviet propaganda was actually much more diverse than what Midjourney produces. It came in all kinds of styles, spanning multiple waves of technical and cultural change. To my eye, Midjourney seems to be pulling almost entirely from stuff in the 1960s about the Olympics. Perhaps a lot of that was fed into the machine.

Here's a poster for a roller derby, "in the style" of Soviet propaganda:



It would have limited utility as a real advertisement for a roller derby, because for example everyone in Soviet propaganda is white and smug-looking. But since it took seconds to produce rather than hours, we can generate it on a lark.

Breakpoint just had to ask for "Soviet Super Mario Brothers":



The more of these you make, the more you get a feel for what Midjourney is drawing from when you ask for a style. And it's clear that Midjourney has particular ideas about style. If you said it's captured the essence of Soviet propaganda, you'd be very wrong. You might be a little less wrong if you said, "It's captured the essence of what people think of when they hear the words 'Soviet propaganda.'" But that would still be wrong.

The most correct way I can put it is, "When you ask for 'Soviet propaganda', you get back what the Midjourney team thinks is Soviet propaganda." That seems harmless... But what does the Midjourney team think "a criminal" looks like? What does the Midjourney team think "a patriot" looks like?

Producing interesting art with generative tools is all about curation, and that includes the curation done by the people who trained the generative tool. You may think the possibilities are infinite, but the output from any prompt is limited - sometimes severely - by what the curators thought was relevant. Your ideas will be directed by that curation, and you will have to fight the tool to move beyond it. And you'll need to apply some critical thinking and skepticism.
garote: (Default)
After I took that brief dip into Stable Diffusion, my friend Mr. Breakpoint began exploring a similar tool called Midjourney.

He was looking for a creative outlet after a bad breakup, and found a worthy distraction in learning how to wield the keyword prompts and other widgets of the Midjourney interface. There is apparently always a learning curve to these things, because there is some skew between what the person at the controls has in mind, how the software interprets the prompting, and whatever assumptions got baked into the data it was trained on. I'm going to be exploring that in a set of posts, because he made some really interesting images.

His first project was a series of national space program critters. This is "Soviet Space Hippopotamus":



You'll notice immediately that the quality is way better than something generated for free on the Stable Diffusion demo site, because Midjourney is a commercial product and has been "trained" a lot more, using a lot more images.

By default, Midjourney presents you with four attempts at rendering a result, based on different random number "seeds." You can then pick one and ask it to refine the image various ways. That's how Breakpoint got the relatively polished image above. Then just by adding the keyword "general", he got this:



Once you've got a bunch of settings dialed in, it's fun to let it ride, with minor tweaks. Here's "Chinese Dragon Taikonaut":



This image was picked from several and refined, but aside from that, just swapping "soviet hippo" for "chinese dragon" in the keywords resulted in an image that was both deceptively novel, and deceptively similar in composition and style. Just as easily as the idea comes to you to make a series of themed images, the images can be manufactured.

Also, while there are plenty of details in the image that are screwy - like the way the dragon's head doesn't fit in the helmet, or the position of its tail - it still makes a handy template for an experienced artist to retouch. And so we stumble again, directly into one of the imminent consequences of generative art that I pointed out in the post about Stable Diffusion: The idea problem.

An artistic process can be described as a rough combination of two things: Ideas and execution. If you hired an artist to generate a series of space-agency-themed animals, you would be contributing the initial spark of an idea, but the artist would then go on to personally iterate on that idea, sometimes with execution, by making a few rough sketches for example, or with ideas, by examining photos of animals or perhaps reading about the various space agencies. Whatever the approach, the artist would be aware of the external sources they were drawing from, and would generally produce all the iterations from scratch, acting on a learned instinct based on all the other art they've seen or produced. Unless they were unscrupulous - or just very unlucky - the result of their instincts and their research would be a piece of art made to your specifications that is also unique.

But now we introduce generative art, and let's assume that the artist who uses it does not have any idea how the generative software was trained.

You give the artist the initial idea, like before. The artist then turns that idea into a few carefully worded sentences, and dumps them into the generative AI. It cranks out a dozen or so polished-looking pieces in a range of styles. Now the artist picks a few, maybe tweaks them, and presents them to you as drafts. You pick the one you like, ask for a few changes which the artist gladly applies, and the work is done. You get your art, and the artist probably put in one tenth of the work creating it for you.

Before, the execution of the art could be laid squarely at the feet of the artist you hired. They did the sketches, they did the iterating, they did the tweaks. With the generative art, the artist has taken a step back from the canvas, and most of the execution was done by the software. That by itself could reasonably be welcomed as progress, by artists and commissioners of art alike. But here's where it all goes wrong: In those iterations, who came up with the ideas?

Who researched the animals and turned them into stylized forms? Who researched the space agencies and came up with appropriate clothing, props, and color schemes? Who came up with the idea to compose them in profile, facing right, with those facial expressions?

Okay, suppose you have to tell the machine about the composition. "Make the characters all face to the side, visible from the waist up, with their helmets open, looking stoic." These are all things you would have to realistically supply to Midjourney, to get this series of animals. Here's another example, "French Space Kestrel":



Whose idea was it to put the Eiffel Tower in the background? Sure it's the cheapest way to make someone think of France, so it feels obvious, but nevertheless, whose idea was that?

The only sensible answer you could give, when you're talking about something created by processing thousands of pieces of human-curated art and photography into tiny cross-referenced pieces, is, "everyone who contributed anything, and possibly some people more than others."

And so, how in the hell do you fairly compensate those original artists for their work?

The true origin of an idea has now been obscured in a process so interconnected and complex that it's comparable to a weather system. Can we credit that butterfly flapping its wings in Detroit with causing that hailstorm in Chicago? Well, it probably contributed, but ... how? How much?

When ordinary people began putting together videos of each other dancing to songs, and posting them to YouTube, the developers quickly realized that they had a copyright problem. They were redistributing copyrighted music without compensating the rights holders, and those rights holders - big powerful record labels - were annoyed. So they created software that would crawl through every newly posted video and detect the fingerprint of any commercially published music. The video containing the music would be tagged, identifying the artist, and the website would play an advertisement before or after it. The idea was that the revenue gained from the advertiser would be split and a fragment would be shared with the record company. So the record company turned a loss into a gain.
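YouTube's actual Content ID system is proprietary, but the matching step it performs can be sketched as a toy: hash chunks of a track into fingerprints, then tag any video whose audio substantially contains them. Everything here - the catalog, the hashing scheme, the match threshold - is made up for illustration.

```python
# Toy sketch of audio fingerprint matching -- a stand-in for the idea,
# not YouTube's actual (proprietary) Content ID system.
import hashlib

def fingerprint(samples, chunk=4):
    """Hash overlapping chunks of a sample stream into a set of fingerprints."""
    prints = set()
    for i in range(len(samples) - chunk + 1):
        blob = ",".join(str(s) for s in samples[i:i + chunk])
        prints.add(hashlib.sha1(blob.encode()).hexdigest())
    return prints

# Hypothetical catalog of commercially published tracks.
catalog = {
    "Label A - Hit Single": fingerprint([3, 1, 4, 1, 5, 9, 2, 6]),
    "Label B - Deep Cut":   fingerprint([2, 7, 1, 8, 2, 8, 1, 8]),
}

def identify(video_audio, threshold=0.5):
    """Tag the video with any track whose fingerprints it substantially contains."""
    video_prints = fingerprint(video_audio)
    matches = []
    for track, prints in catalog.items():
        overlap = len(prints & video_prints) / len(prints)
        if overlap >= threshold:
            matches.append(track)
    return matches

# A dance video containing most of "Hit Single" plus extra noise:
print(identify([0, 3, 1, 4, 1, 5, 9, 2, 6, 0]))
```

A matched video would then get tagged with the track and artist, and the ad-revenue split would follow from the tag.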

You could see a similar approach eventually working for any professionally produced generative art tool: If the tool was trained using art from a particular artist, that artist would be due a certain amount of compensation from the makers of the tool. That doesn't address the "weather system" problem - which may be impossible to address directly - so much as sidestep it, by assuming that each artistic work used to program the tool has a certain value, supposedly worked out between the artist and the developer, and every time the tool is used to make artwork, everyone gets a cut of the profit according to the value they've negotiated, no matter what artwork was produced or what the prompts or settings were.
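A minimal sketch of that flat compensation scheme, with hypothetical artists and negotiated values: the payout ignores the prompt and the output entirely, and just divides a revenue pool in proportion to each artist's negotiated cut.

```python
# Sketch of the flat "everyone gets a negotiated cut per generation" scheme.
# The artists and rates here are made up for illustration.
negotiated_value = {   # relative value agreed between each artist and the developer
    "artist_a": 5.0,
    "artist_b": 2.0,
    "artist_c": 1.0,
}

def split_royalty(revenue_per_image, images_generated):
    """Divide generation revenue among all contributing artists in proportion
    to their negotiated value, regardless of prompt or output."""
    pool = revenue_per_image * images_generated
    total = sum(negotiated_value.values())
    return {artist: pool * v / total for artist, v in negotiated_value.items()}

payouts = split_royalty(revenue_per_image=0.10, images_generated=1000)
# artist_a gets 5/8 of the $100 pool, and so on down the list.
```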

You need to account for the negative training effect I mentioned in the Stable Diffusion post. So, for example, it wouldn't be fair to compensate just the people who drew foxes when you generate your latest animal iteration, "Canadian Space Fox":



That seems like one possible solution that could be implemented after a lot of legal dust has settled. It also has flaws.

For example, here's one related to training: If you were a company producing training data, you could get permission from a pile of artists to submit their work according to some compensation terms you negotiate. Then, you could tell the people assigning descriptions to images to label them not just according to the original artist, but according to artists that have produced similar work, but didn't submit any material. The outcome would be a generative art tool that you could order to create, for example, "a sketch in the style of Bernie Wrightson", even if the training images didn't use a single sketch by Bernie Wrightson.

Would that be fair to Bernie Wrightson? Well, according to current legal precedent, yes. You can't copyright a style, which is what you would effectively be doing if you prevented other people from creating works that resembled yours from fresh material. (I mean, I'm sure that Disney would absolutely do that if they could. They would patent the very idea of singing cartoon animals.)

But what would happen to Bernie Wrightson's career? Would he go from being established enough that people want to name-drop him in their training data, to suddenly never working again?

Hard to say. Often times in the art world, imitation means interest, which can be redirected to become opportunity, and no one knows yet what kind of tools the next generation of artists will use to stay relevant.

In the meantime, software projects like Stable Diffusion and companies like Midjourney operate in a Wild West environment. They built their data sets by dumping in truckloads of images scraped from sources that did not get the permission of the original artists to have their work processed in this way, and have so far been willing to do all kinds of talking and negotiating, short of actually deleting those data sets and starting over on more equitable ground.

But even if they did, the tools and the theory they've developed are already being distributed among people who don't care about compensation or copyright. People who are basically analogous to software pirates and hackers.
garote: (programmer)
  • Programmers have more logical minds than other people.

When working with computers, I find a lot of discrete math concepts to be useful. You gotta know your ANDs and ORs inside-out. But what does a "logical mind" even look like? Personally, my life is motivated by bicycling, puns, relentless curiosity, snuggling, and snacks. Every one of those motivations is irrational. My thought process is a fishing net wrapped around those things. How logical can it be, really?

  • Programming is a single discipline.
  • There is a meaningful concept behind the term "full-stack developer".

Nah; it's a grab-bag of frameworks, cloud services, browser standards, platform variations, and so on. It's possible to find two self-identified "full-stack" developers whose overlapping working knowledge is limited to basic HTML and lightweight browser debugging.

For example: A "full-stack" developer who:

Uses AWS to host a database application written in F# on the .NET platform using Fable to transpile the F# to JavaScript, with an Elmish messaging model and a front end laid out using standard React components, and data supplied by GraphQL. (This developer does not touch a single line of JavaScript except perhaps when debugging.)

And a "full-stack" developer who:

Uses a virtualized Linux hosting provider to run an interactive, high-volume visual data browsing tool, written in Python on the back end using the Django ORM to store data, and using TypeScript to build the front-end code using custom UI components styled with custom CSS, and an SVG-and-canvas-based tile rendering pipeline to keep the browsing experience fluid.

(I have done both, and several other variants that also have virtually no overlap among them, including obsolete ones like using Perl and cgi-bin to render raw HTML with all interaction through forms, and using Flash to construct a "serverless" interactive game. Oh, and all those Java plugins; who can forget those?)

  • Programmers are better at picking up disparate knowledge from other domains than other professionals.
  • All programming languages are the same because Turing something handwave.
  • 10× programmers exist.

Not as generalists, definitely. Though I will say this: Programmers who have massive knowledge of a very specific domain, and the ability to explain it, do exist. When surrounded by a highly functional team, those people can reduce the number of mistakes by 10x. That benefit is very real.

Drop them into an unfamiliar context, or take away the team, and that benefit goes away.

  • It’s easy to make a good UX as long as the core problem domain code is sound.

Yeah, and if you build a really great engine, then all you need to do is bolt a chair to it, and you've got a really great car!

  • Hand-written assembly is better than compiled code.

I think anyone who has ever tried to make a really good French Cruller donut at home will understand how this could be wrong.

  • Social problems have technical solutions.
  • Social problems are of overriding importance in programming.

These are interesting ones. I find more truth in the statement that "technical solutions tend to create social problems", or at least invent new kinds of dysfunction. But I think there is also a strange trend of considering all software development to be a political act.

This probably comes from the assumption that all software development is meant to interconnect with an anonymous end-user, or at least have an audience, and through that it shapes the flow of social influence or power, and is therefore tangled in politics -- social, sexual, legal, et cetera. But that's a reductive argument, because if that makes all programming political, then it makes everything else political for the same reasons.

People in this new century generally don't remember it - or weren't alive for it - but there was a span of time when computer programming was something that usually happened on non-networked devices, as people manipulated data for their own personal ends. I'm not saying that made it apolitical; I'm saying that it made the end user and the audience a far smaller part of the process. Companies wrote code to handle their own internal procedures. Individuals wrote code to amuse themselves with the blinky lights, and what they created often never left the device they wrote it with. I grew up during that time and was surrounded by code that was parochial. It had nothing to say about society or politics, except perhaps that I lived in an area with enough affluence to give me and my friends the spare time to poke at computers and learn a trade.

On the other hand, some code - like the code that chooses what to drop into a social networking "feed" - is astonishingly political.

garote: (machine)

If you haven't heard of it already, here's an extremely brief non-technical explanation of how Stable Diffusion and other so-called "AI" image generation tools work:

  • You get a huge pile of existing art.
  • You label each piece of art in various ways, like "trees; grass; man running; sunny day; Tom Baker; person in foreground; comfy scarf".
  • The computer takes in the art, and organizes it by the labels.
  • Then the computer compares all the art pieces with each other, looking at them through a series of lenses that get progressively more and more blurry. With the blurriest lens, the pieces are almost identical: Big fuzzy blobs.
  • The computer remembers all these comparisons by compressing them in a very clever way.

Now, here's what you do with that:

  • You give the computer some labels it's familiar with, like "trees; Tom Baker".
  • The computer then makes a canvas the same size as all the art pieces it's looked at before, and plops a random fuzzy blob onto it.
  • Then, a little bit at a time, it tries to "re-focus" the fuzzy blob into an image by adding random bits of contrast. Each time, it asks itself, "Does this look more like a photo with trees or Tom Baker in it? Or less?" If the answer is more, it keeps that change. If less, it rejects that change and tries another.
  • And so on, for as long as you're willing to let it fiddle with the image.
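The keep-or-reject loop described above can be sketched as a toy, with a hand-written scoring function standing in for the trained model. (Real diffusion systems work quite differently under the hood; this just captures the greedy "does this change make it look more like the label?" spirit of the description.)

```python
import random

# Toy version of the loop described above: start from noise, propose small
# random changes, and keep each one only if a scoring function says the
# result looks "more like" the target. The score here is a hand-written
# stand-in for what a real system learns from labeled art.
random.seed(42)

TARGET = [0.0, 0.5, 1.0, 0.5, 0.0]   # pretend this gradient "looks like" the label

def likeness(image):
    """Higher = more like the target. A real system learns this from training data."""
    return -sum((a - b) ** 2 for a, b in zip(image, TARGET))

image = [random.random() for _ in TARGET]        # the initial fuzzy blob
for _ in range(5000):                            # fiddle for as long as we like
    candidate = list(image)
    i = random.randrange(len(candidate))
    candidate[i] += random.uniform(-0.05, 0.05)  # a random bit of contrast
    if likeness(candidate) > likeness(image):    # more like the label? keep it
        image = candidate

print([round(x, 2) for x in image])  # converges toward the target shape
```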

Of course, the nature and quality of the results you get out depends heavily on what you've fed into the machine.

More specifically, it also depends on how much consensus there is between the people who made the labels. Here's an example:

Of the zillions of labels fed into Stable Diffusion (or at least the version that generated what you see above), there's an obvious trend in the images that people labeled "afternoon", "golden", or "golden afternoon". I never told the program to make images of the outdoors, or of trees, or grass. That came along for the ride because of how the humans labeled the source art, which probably contained lots and lots of photographs taken by people standing around in parks at sunset.

It's important to keep in mind that the four images above do not depict any place in particular. They are not just pre-existing things that were picked out of the source art based on the keywords, they are constructed images, and the places they appear to show do not really exist. Even partially. The system is not borrowing grass from one image and trees from another and stapling them together. It's making new images that resemble the ones that the humans associated with "golden afternoon", including details in those images that may have not been the primary reason the humans labeled them that way.

For example, the image on the lower right appears to have a lake in it. We didn't tell the computer we wanted a lake, but the images we fed into it labeled "golden afternoon" sometimes had lakes in them. So, at some point in the de-blurring process, the computer decided that the image looked more "golden-afternoon-y" if that blob resolved itself into a lake.

That seems sensible. But here's the more interesting bit: This associative power doesn't just apply to things that we've labeled in the images. It also applies to things we didn't label. Even if the computer was never told about lakes at all, it might still put one in the generated image, just because there was sometimes one in the source images.

And even more interesting: This also applies for things that we humans do not even recognize as objects in the images ... and things we may not even have the vocabulary to describe. For example, the computer was never told how to apply the back-lit effect of the setting sun on the leaves of trees. It may not even know what leaves are. It doesn't even have a concept of three-dimensional space, let alone how light moves through it. All it knows is how to ask the question, "Does this little change make it more, or less, overall, like an image with the description I've been given?" And that's it. But that simple question can go a long way.

For example, if you feed a million images into the computer, and a thousand of them are labeled "scary", the computer will get more-or-less trained to tell the difference between images that are scary and ones that aren't. Especially if you've labeled the incoming images with a ranking, from "kind of scary" all the way up to "extremely scary".

It can also learn the extremes automatically, through negative comparison. For example, if you feed it a million images of people, and a thousand of them are of Tom Baker and are labeled as such, the computer will be processing a whole lot of images of people that might look a little like Tom Baker, or even a lot like Tom Baker, but will not actually be Tom Baker. And because of that skew in the data, when you ask the computer to draw a picture with Tom Baker in it, the computer will use its training to draw a person that looks EXTREMELY TOM BAKER. It will know - without consciously knowing - all the nuances that set Tom Baker's face (and shape and clothing and pose) apart from everyone else's, and it will go for them.

Same for Mr. Bean, assuming images of him were also included.
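A toy calculation shows how that skew produces caricature. Every number below is invented for illustration; imagine each "face" boiled down to three made-up features:

```python
# Toy illustration (not the real mechanism): each "face" is three numbers,
# say [curl_of_hair, width_of_grin, scarf_length]. All values are invented.
everyone = [
    [0.3, 0.4, 0.1],
    [0.5, 0.5, 0.0],
    [0.4, 0.3, 0.2],
]
tom_baker = [
    [0.9, 0.8, 1.0],   # curly hair, wide grin, very long scarf
    [0.8, 0.9, 0.9],
]

def mean(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

avg_all = mean(everyone)
avg_tom = mean(tom_baker)

# The gap between "Tom Baker images" and "images in general" is exactly
# the set of features that make a face read as Tom Baker.
distinctive = [t - a for t, a in zip(avg_tom, avg_all)]

# Leaning into that gap produces a face that is MORE Tom Baker than
# any actual photo of Tom Baker: the caricature effect.
extremely_tom = [a + 1.5 * d for a, d in zip(avg_all, distinctive)]
```

The exaggerated face scores higher on every distinctive feature than the average real photo of Tom Baker does: drawn from a skewed distribution, the most "Tom Baker" answer overshoots the man himself.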

And that's why, if you tell the computer to draw "Tom Baker as Mr. Bean," you end up with this MONSTROSITY:

This is essentially a drawing that the computer has constructed by iterating on a random blob until it looks more and more and more like it's got Tom Baker or Mr. Bean in it, and unlike a human artist with a sense of proportion, it doesn't know when it's done.

This total lack of awareness becomes painfully clear when you ask it to render things that contain text. For example, if you hand it the prompt "Dungeon Master", you get stuff that looks like this:

Fun Fact: Most of these gibberish titles are actually the names of streets in Denmark!*

(* This Fun Fact has not been peer-reviewed.)

What's happening here is, the computer doesn't know what is and isn't text in the source images, let alone how to read it. Some of the source images labeled "dungeon master" may actually contain that phrase printed in them, some might not, and some will have other words as well, but the whole point of what the computer is doing is to construct new images that are a synthesis, never an exact copy. And so, a result with the bold title "DUGNGON MASNSEN" might be easily explained as the visual combination of "DUNGEON MASTER" with the single word "DUNGEON" and the single word "MASTER", all trying to occupy the same space, to resemble the most images at once.

It is indeed similar to what we expect, but we see it as a failure because written words are an all-or-nothing proposition: Either it's correctly spelled using properly shaped letters, or it's not the word.

Trees, grass, buildings, faces, and almost all other things we would recognize in an image are less complicated - and less narrow in their correctness - than a written word. And, words are even harder for the computer because the images in the source set labeled "dungeon master" also very likely contain other words, and it has no opinion whatsoever on which part of the image says "dungeon" versus which part says "master" or "adventure" or "magic", et cetera.

There is sometimes an enormous gap between an image that resembles something and an image that actually is something. One of my favorite demonstrations of this is asking the computer to give you a picture of a maze. It will absolutely look like one from a distance, but it will also absolutely not be a maze you can solve.

I had a lot of fun throwing in supplementary keywords here, because I just like the visual style of a classic black-and-white maze combined with other things. My favorite was to blend them with variations on "stained glass window" because the idea of finding a maze built into one seems really cool to me. The system can make a pretty convincing stained glass window, so the source art must have had some good examples.

The source artwork must also contain a lot of stuff on the fringes of pop culture, for example if you add in "skeksis" you get images that incorporate those ugly bird-like antagonists from The Dark Crystal:

Having these pop out of the machine in seemingly endless variation with no effort on my part was inspiring. I immediately wanted to find someone I knew who wasn't already aware of Stable Diffusion and show them an image, so they might be fooled into thinking it was from a real church somewhere. Then I could spin up some ludicrous tale about bird-worshipping Pagans on some tiny coastal island before confessing and explaining that the image was fake.

The lesson from that - aside from the obvious one about how easily this stuff lends itself to forgery - is how well this kind of generated art lends itself to brainstorming, for making "concept sketches" or fooling around with ideas, or deciding how to compose a drawing. For example:

Any one of these could have been cover art for a techno CD back in the 1980's. And of course, that's thanks to all of the hard work of the artists who made the art that was fed into the machine, as well as all the work people put in assigning labels to that art, including labels that drifted into the subjective.

Which leads to an important, and difficult question. What do you do if you're Bernie Wrightson, and people can stick a prompt like this into Stable Diffusion?

garote: (maze)
Some crazy, wonderful person resurrected an old Apple ][ game called Robot Odyssey, making it playable in modern web browsers.

I think I was about 13 years old when I played this, and I have to give it credit for teaching me a whole lot about circuit design and programming. I obtained most of my software through piracy at the time, but this was one of the relatively few things my parents actually bought, and so I can credit them too for spending the equivalent of 80 bucks in today's money and bringing it home. That can't have been easy on the household budget.

Playing it now, I noticed a couple of references I didn't get when I was 13:



I do believe that's a Dalek, wandering the sewers! As a kid I couldn't quite parse it. It looked like a lost shopping cart.



It would be another two decades at least before I moved to Oakland and started riding BART on a regular basis.

So how did the modern developer get this ancient thing running, without using a clunky emulator which would have restricted the user interface?

They wrote a Python utility to examine the Robot Odyssey DOS executable files and translate them most of the way into C, which they then aggressively patched to add mouse and saved-game integration. Next they fed the C into a WebAssembly compiler, and wrapped that with some additional Javascript, CSS, et cetera to make a point-and-click interface with buttons for use on devices without a keyboard.
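The first stage of that pipeline, mechanically rewriting old machine code as C, can be illustrated with a toy. The real translator decodes actual x86 opcodes and wrestles with jumps, flags, and memory layout; everything below (the instruction strings, the register struct `r`, the `video_bios_call` helper) is invented for the sketch:

```python
# Toy static translator: maps a few hand-picked 8086-style instructions
# to C statements. A real translator decodes binary opcodes and handles
# control flow; these mappings and names are invented for illustration.
TRANSLATIONS = {
    "mov ax, 5":  "r.ax = 5;",
    "add ax, bx": "r.ax += r.bx;",
    "int 0x10":   "video_bios_call(&r);  /* hand-patched: talk to the browser instead */",
}

def translate(asm_lines):
    c_lines = ["/* auto-generated; hand-patched afterward */"]
    for line in asm_lines:
        c_lines.append(TRANSLATIONS.get(line, f"/* TODO: untranslated: {line} */"))
    return "\n".join(c_lines)

source = ["mov ax, 5", "add ax, bx", "int 0x10"]
c_code = translate(source)
```

The hand-patching step is where the mouse and saved-game integration came in: any call out to DOS or BIOS services had to be rerouted to browser-friendly equivalents before the C could be fed to the WebAssembly compiler.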

The whole thing has been open-sourced. It's a masterful bit of reverse-engineering.

If you want to hang out with the creator for a while, they recorded a YouTube stream of some of their early explorations in the code and with a previous port of the same game that they did for the Nintendo DS. It's companionable background listening, in the way many livestreams are.
garote: (hack hack)
The "Metaverse" was summoned into existence with IRC, back in the 80's. Zuck's vision brings nothing new to the table, except a retrograde take on the adorable cyberspace meeting rooms that William Gibson was already writing about. (Again, back in the 80's.)

It completely and perhaps deliberately misses the point. The next step is not about anything so grand. The "killer app" is niche and is right in front of most people: AR/VR headsets are on the verge of becoming the new essential engineering and artistic tool, not a replacement for the current emperor of communication tools (the smartphone).

Here's an example that should be close to everyone here, i.e. software developers: You sit down in a coffee shop with a foldable keyboard in front of you, and put on the VR headset. You bring no laptop with you, and no notebooks or manuals either. As soon as the headset is in place, external cameras on it immediately recreate the view around you as-is, but slightly dimmer. Then in front of you (in the virtual space), half a dozen five-foot-high curved screens of code appear, in precisely the configuration you left them 20 minutes ago when you left the house.

"So what?" you say. "This is basically like having a laptop." Well sort of. But the point is, it's actually EASIER to deploy than the laptop. THAT'S when it becomes the killer device.

The headset has one wire that goes over your shoulder down to a battery, which you can keep in your pocket. Other than that there are no wires to deploy. There are no controls to pick up. You can see and feel the keyboard in front of you still. You do not need a mousepad or a trackpad, but you do have a pointer: You move it by raising one index finger off the keyboard and pointing. Note that this is actually EASIER than lifting one hand up and placing it on a trackpad. You click by tapping your thumb, ever so slightly, wherever it is. The LIDAR sensors on the bottom of the headset track all this, as long as you don't turn your head more than 90 degrees away from your hands.

The displays are fixed to the environment, and of course can be easily rearranged. Of course, by the time you see an implementation of this, the whole concept of windows in a display will have been re-thought, to include elements that respond to the movement of your eyes specifically, elements that warp space, faux "work environments" with notes scattered on a desk, tools to examine data structures and memory contents in cube form, integrated side-by-side interaction with the same environment by two people, and so on. It would take the whole notion of screen sharing in a meeting to the next level, obviously.

If the device is light and responsive and high-resolution enough, you will suddenly hate developing on tiny screens anchored to keyboards. You will get used to having your custom 360-degree "work environment" deployable around you at a moment's notice no matter where you are, and you will begin using the more positional aspects of the environment to keep track of things and context-switch in ways you haven't even thought of.

Now, if it can be that useful to you as a software developer - a person who really just needs big legible grids of code and a good keyboard to do their job - think about how useful it's going to be to an architect, a painter, a photographer, an interior designer ... a piano teacher guiding her student's hands, a mechanic learning about an unfamiliar engine, a geologist making sense of land survey data ... anyone who would rather not have their visual information confined to a square.

Of course, a gadget that's this good at what it does would be pricey. But you could spend $1500 on a light but powerful laptop, or you could spend $2500 on this, and never need an external display. At that point you would break even.

Now, to get back to my point about Zuck and his "vision":

NONE OF THESE APPLICATIONS HAVE JACK SQUAT TO DO WITH A METAVERSE. They are serious things done by people at work, and they do not need any "social" component beyond what is already implemented. That whole vision of cool people "hanging out" in some virtual universe and having a blast with their spare time ... bugger that. Anyone with spare time is going to pull these devices off and go outside. But make no mistake, the technology is just about here to make a device of this type and quality, and people are going to find it very useful.
garote: (castlevania library)

Uninvited, by ICOM Simulations, makers of the much more popular Shadowgate. I was always curious about this game.

When I was a youth messing around with computers in the pre-internet days, a good adventure game with a box and paper manuals and all that fancy stuff cost about 40 dollars. That's in 1980's money, which is over a hundred dollars in 2020's money, where you can buy twenty much fancier games for your smartphone for the same amount (assuming you're unhappy with all the thousands of free ones).

That's way too much for a teenager to spend on a regular basis, and though I was a stinking software pirate I could only copy what I could access from friends, and no one I knew had a copy of Uninvited. Why get Uninvited when you could get the superior game Shadowgate for the same price? Or spend that 40 bucks on ten fast food meals to stuff into your voracious teenager face?

Of course, the real trick these days is getting such an old game to run at all. I was interested in the Apple IIgs version, since it's in color and it's what I would have played back in the 1980's, but I got rid of my Apple IIgs two decades ago. I'm on a laptop running MacOS.

First thing I tried was an Apple IIgs emulator called KEGS. It was operational, but the emulator had weird timing glitches that made it very hard to play. After only a few screens I gave up and searched for something else.

The solution turned out to be the all-time greatest emulator of the Apple IIgs, a program called Bernie ][ The Rescue. It only ran on the archaic Mac OS 9, which Apple hardware stopped supporting around the turn of the century, so I had to use an application called SheepShaver to emulate a PowerPC-based Mac old enough to run OS 9, and then launch Bernie inside that emulator to emulate an Apple IIgs. Wheels within wheels. Somehow it all works.

I got ahold of a walkthrough written for the game, because though I was curious enough to want to go back and play it, I wasn't patient enough to confront the challenge of the game on its own terms. When I was a kid and there weren't any better options, and the whole idea of using a point-and-click interface as the foundation for a visual story was still being explored, I would have stridently avoided all external help just to prolong the experience. These days the pointing and clicking is ancient, and the fun for me is mostly in the spooky stylized art direction, and the Monty Python and Douglas Adams inspired offbeat weirdness in the plot and the puzzles that 1980's-era games are infamous for.

I played it late at night, with all the lights off except for an LED candle, and creepy music playing in the background for maximum immersion. It was heaps of retro fun. As I went I took snapshots, and afterwards I assembled this slideshow of almost all the "rooms" in the game, in rough order according to the plot:

The original author of the walkthrough I used is lost in the origins of the internet. As I went through the game I made edits and corrections, and I'll post it below as a reference for fellow travelers.

Walkthrough for the non-Nintendo versions of Uninvited

At the end of the game, successful players get to fill out a certificate and send it to their computer's printer to demonstrate their awesomeness. How charming!

garote: (maze)
In 10 years people are going to be driving cars wearing AR glasses. People will stop putting pictures up on walls; they’ll just decorate virtual rooms and wear the AR glasses to see them.

Security guards will put on AR glasses and all the walls in the building will become transparent.

Sports fans will put on AR glasses and stand in the middle of the field right next to the quarterback while the play is called.

Your face will be your passport. The glasses on your face will help authenticate it. The border patrol agents will see you through their own glasses, with a little icon floating over your head: Time allowed in country: 10 days. Arrest warrants: None. Their views will be kept "secure" by pairing their glasses with their own faces. Headscarves and hoodies will be banned in almost all venues. Concealing who you are from the government - or even just the restaurant owner - will be seen as social deviance.

People will start "livestreaming" much more than just their location. People will leave their glasses engaged in recording everything they see, all the time, in a half-hour loop, so they can tag it if they decide to. Nothing embarrassing or funny that anyone does in public will ever escape recording and potential rebroadcast, ever again. People will get into the habit of rewinding conversations that they are currently engaged in, to prove that someone actually said what they said, minutes earlier. Everyone on earth will go from living in the present, to living about 15 minutes in the past, all the time.

You will unlock doors by looking at them the right way. You will pay for things by staring at the points of a star, in order. Live concerts will either ban the glasses entirely, or make them part of the show. When you share a casual glance across a room, you will send more than just a glance -- you will send contact information, propositions, advertisements. If you don't like the way someone is staring at you, you can blot them out of your vision. If you take your glasses off in public, people will assume you want a quiet moment and don't want to be talked to.

The cities are going to fill up with micro apartments that consist of a closet attached to a closet. The first closet will do what a closet normally does: store clothing. The second closet will be all the rest of the living space in the house combined, including the bed, and a person will put on AR glasses when they get home - assuming they're not wearing them already - and pretend that they are sitting in the middle of the woods. All they'll need is a good air conditioner.

Want to watch that YouTube video? First, watch these ads. No; you can't skip them. You can't look away. The glasses know where you're looking. You will watch these ads. If you shut your eyes more than 5 percent of the time, the ads will start over.

This will become a new way of paying for "free" things. Want a discount while you're pumping gas into your car? Stand there and take the ads. Right in the face. Take them. Take them like you like them.

Somewhere, still wedged inside a security researcher's head, is the design for a foam-rubber 3D-printed human torso, with elaborate electronic eyeballs, designed to trick the glasses. The arms race will be difficult.

Museums, state parks, restaurants, store aisles ... everything, everywhere, will accumulate a digital layer, only available through the AR glasses. Information kiosks and labels will vanish. You will walk through a tangle of completely un-signposted roads and never lose your way, unless you're one of the unfortunate poor who can't pay for the glasses. Those people will be lost in a terrifying labyrinth, and the only solution anyone will seriously offer them ... is free AR glasses. This rabbit hole will only go one way. Don't even think of what Facebook has in store for you.

Meanwhile, in China...

While everyone in the West is arguing over how much privacy to preserve, China will build backdoors into every single Chinese person's AR glasses. You could be in your home, staring into the face of your child, talking to them about an argument they had at school perhaps, and a government agent could be staring at them too, a thousand miles away in a booth. You will never know until ten years later when the recording - indexed by voice transcription software - is presented as evidence to an anonymous panel tasked with deciding whether to arrest you and stick you in a factory prison.

The agents will become almost completely omniscient. While they're parked at a desk, the microphones in every pair of glasses could be tuned to pick up conversation in the next room. If you're standing in a crowded subway, an icon might appear over the man next to you, placed there by the police, identifying that man as a state agitator. This is different from a smartphone alert: You cannot un-see the icon, and the glasses know when they're not on your face. If you get too close to the man, or try to warn him, the flag may appear on you.

If the agents decide they don't like you, they will shut your glasses down. You will instantly lose your wallet, your keys, your phone, your passport, the contact information for everyone you know, and all your notes and photographs and music. You won't even be able to board a bus or buy a sandwich, until you do whatever the agents demand of you.

Dissent will be so thoroughly micromanaged into the noise floor that people will start to think that crushing dissent is part of the normal function of a "free" society. People will start to aspire to getting that well-paid job in government, eavesdropping on people and crushing dissent, since it comes with privileges and power.

Eventually, we will all start to drown in a sea of information warfare. State-sponsored from China (and Russia, basically an appendage of China), and corporate-sponsored from the West. But you can't take off the glasses. They're more you than you are.

Like I said, this rabbit hole only goes one way.

I suspect that in two or three years, Apple will release a flagship product that will usher in this new future. They will be committed to solving or mitigating the flood of privacy and abuse problems this product creates. But then Google will follow up with their own version. And then Samsung. And then others. Privacy will be something you buy back, at a price, in the West. And in China and similar places, it will be something you don't even understand the concept of any more.

Perhaps this beloved tech industry I grew up in is about to create a monster.
garote: (hack hack)
30 years ago I saw this advice in the computing magazine that was delivered to our house each month:



I was already familiar with the game, and I knew it was right: When you're playing Moebius: The Orb of Celestial Harmony and you enter a fortress, the guards are easier to fight hand-to-hand.

I was intrigued by the counterintuitive feel of the advice. If you have a long sharp sword, and you're good with it, wouldn't that always be the best choice? Then I imagined trying to swing a sword in a narrow hallway. Perhaps something more intimate, and easier to control, was right after all.

For years after I read that silly, unremarkable sentence in that gaming magazine, it bubbled up randomly in different situations. I generalized the idea: When you enter a confined space, don't waste effort trying to keep everything at arm's length -- especially people. Switch to something more intimate even if it's less powerful. The trick is recognizing when you need to switch modes.

Recently I got ahold of a bunch of ancient issues of a defunct magazine called Family Computing, and fed them into a sheet-fed automatic scanner. Flicking through the pages, I found more pieces of worthy life advice. May it guide you on your journeys!



















Very important to know when you're a young person at a party, or when some jerk decides to pick on you!



garote: (viking)
As a life-long computer engineer, let me spell it out nice and easy:

You know that game, Sim Earth? Well the way it’s done is, a bunch of math happens to a bunch of numbers, then it gets turned into dots on a screen and you are amused.

You know how it ISN’T done? A little tiny crude version of a planet inside your computer box, with little tiny weather and buildings and people.

There is no "living in" a simulation, because there is no "in" part. A bunch of math is happening to give the appearance of something, but it's your own senses and imagination that make the "in".

Okay, so, how about if we back off of that and say, “my brain is real, but everything else is a simulation!” Well, hey, that’s just Plato’s Cave. You can handle that yourself; no need for the computer engineer.

I have "living in a simulation"-adjacent dreams fairly often. They're not a sign of a conspiracy, they're a sign of a creative and dastardly mind. Occam's Razor helps here.

Alright, internet, back to what you were doing...
garote: (weird science)
Epic Games wants to be able to distribute their software onto iPhones, and entice people into giving them money for usage of that software on iPhones, without paying anything to Apple in the process, except a microscopic (to them) yearly developer fee. You could call it "cutting out the middle-man", except that in this case the "middle-man" is the entity selling and maintaining the entire platform.

Well, if they wanted an alternative distribution channel, they could have one right now! Here's what they can do:

Hand out the source code for Fortnite, and let end-users compile it themselves in Xcode, sign it, and then side-load it onto their own devices. It would take several days of work and an Xcode developer license for every single Fortnite iOS player, but it could be done. There's your "app store alternative". I'm pretty sure it doesn't technically violate any Apple license terms.

And it makes sense, really. If you make money off the payment channel built into the software, like Epic does, the best way to make it secure is to open-source it and have John Q. White-Hat hammer at it for a few years, exposing all the flaws. Meanwhile Apple doesn't need to host your data files, vet your API usage, scan your code for malware, do any advertising for your app, or give you any free cloud data hosting for your app state. That's all up to you. A perfect solution! Train every customer of your software in the art of software development and proper security!

Of course, there's a problem here that reveals an additional wrinkle in the case:

If you give out your app's code and/or allow users to side-load whatever they like and connect to your servers and use your payment platform, you are deliberately creating a giant security hole that you then need to fill: Users can hack your app to do an end-run around the payment platform. Blizzard's struggle with Battle.net is a great example of this. For a very long time they fought with hackers who distributed alternate versions of Battle.net, in order to play unrestricted or custom games of Starcraft, Warcraft III, etc. They have now implemented a very sophisticated scrambling, encryption, and digital signing mechanism inside Battle.net and every game distributed through it, to combat that problem and ensure their payment platform is the only choice. It took a lot of expensive developer time, and didn't completely eliminate the problem - since they are distributing Battle.net to eminently hackable PCs - but it did raise a barrier against casual piracy.

That barrier is one of the things Apple is transparently providing, by maintaining only one app store, and maintaining a high barrier to side-loading random software. It is another thing Epic gets when they distribute their app on iOS: a userbase that is consistently forced to use the one payment channel, rather than what many of them would prefer, and what hackers are happy to give them: not multiple alternate payment channels, but NO PAYMENT CHANNEL AT ALL. I.e. good old software piracy.

Epic's public bleating about "open for everyone" is an obvious smokescreen to give them the appearance of a more altruistic legal footing. They don't give a crap about opening up other companies' ability to bypass Apple's payment structure -- it in no way affects their business, except negatively. It's another piece of their try-hard PR positioning, in one of the most blatant examples of a stage-managed lawsuit I've seen in many years.

They could have filed their lawsuit without changing their app, without getting it banned, without crapping a childish 1984-ad parody onto the internet, without timing it just before one of their own "seasons", and without making the boss of that season an apple wearing a suit. Every one of those deliberate choices is an effort by Epic to cast themselves as a victim and underdog. They are neither. They make 5 billion a year and are now attempting to strong-arm their own storefront into other people's hardware.

"But, no!" you cry. "It's not Apple's hardware, it's mine as soon as they sell it to me! Then I own it! And from that point on, Apple has no right to say who can sell apps to me!"

Say what now?

"I suppose you think it is legal for Ford to tell me whose tires I have to buy, whose gas I have to use, et cetera, because it is a Ford product and I can always buy a Chevy? No!!"

Sorry, that is a false equivalence. It blurs the line between selling software that is loaded onto a device, and selling hardware that physically modifies a device. The two really do not compare. For a start, the things you do with "your" information transmission/storage/execution device regarding other people's information are legally regulated.

For example, it is legal for you to buy a record pressing machine and sell records, but it is not legal for you to dub someone else's record onto your own, and sell those copies.

Sure, Ford can't dictate whose tires you buy. But the government can tell you what firmware you can install on the emissions control computer under the hood of your Ford. It is also legal for Ford to constrain the usage of the software they install there -- for example, you are not allowed to extract it and place it in a Chevy. You'd have to rip the entire computer (the PCM, or "power-train control module") out of your Ford and find some way to physically install it into the Chevy. That's the only way to avoid violating the software license you agreed to when you bought the car.

The main point I want to get to, is that there's actually a pretty big difference between an "app store" and the process of merely putting different software on a device. But first, a diversion. Here's some fun background applying to cars:

Third party dealers sell customized firmware for embedding into Ford PCMs. (Installing it voids your warranty, by the way.) To be specific, they do not exactly sell firmware, they sell customized firmware they originally downloaded from Ford by paying a hefty access fee, which they then tweak in various ways. Even those tweaks are subject to government regulations (for example in California) if they affect the emissions of the vehicle. (Meeting those regulations costs money -- a cost that those third party dealers pass on to you.) If you sneak around the internet, you can find software that actually lets you write custom assembly code for loading directly onto a Ford PCM. You could conceivably write an entirely new firmware for your Ford -- a tough prospect considering how complicated Ford's own code is, which you would not be able to use as a template without violating both copyright law and license terms. (Not to mention meeting the state emissions regulations...)

Now, turning to iPhones:

You can install customized software on your iPhone that is not sanctioned by Apple, by jailbreaking it. Jailbreaking is not illegal. Hacking Apple's iOS into some bastardized form and then loading that on via jailbreak -- that is illegal. Same thing: Violates both copyright law and license terms, and probably a dizzying number of patent licenses as well. Woe be to you if you try and sell that on any kind of market. You'd basically be committing garden variety software piracy. KRACKED BY MR. KRACK-MAN, etc.

You could legally write your own OS for iPhones and sell it, assuming you could get it to run on Apple's hardware. But Apple has absolutely no interest in letting you do this, which is why they now build their devices with a custom bootloader that checks the software against a digital certificate. If you got ahold of that and managed to sign your OS using it, you could still get around Apple, but only for a brief moment while Apple reissues the certificate and finds a better way to bury it in new versions of their hardware.
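The shape of that bootloader check is simple to sketch. Real secure boot uses asymmetric signatures, so the device holds only a public key and cannot forge signatures itself; the HMAC below is a simplified symmetric stand-in just to keep the example self-contained, and every name and byte string in it is made up:

```python
import hashlib
import hmac

# Simplified sketch of a boot-time firmware check. Real secure boot uses
# asymmetric signatures (device holds only a public key); HMAC here is a
# stand-in so the example runs with nothing but the standard library.
DEVICE_KEY = b"hypothetical-key-baked-into-hardware"

def sign_firmware(firmware: bytes) -> bytes:
    """What the vendor does before shipping an OS image."""
    return hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()

def bootloader_accepts(firmware: bytes, signature: bytes) -> bool:
    """What the bootloader does before jumping into the OS."""
    expected = hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

official = b"official-os-image"
sig = sign_firmware(official)
homebrew = b"my-own-phone-os"  # unsigned third-party OS, however legal
```

Without the signing key, no alternative OS will pass the check, which is why leaked certificates only buy a brief window before the vendor rotates them.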

Does Apple have a "right" to do this? It would be rather odd if they didn't. If I made an appliance that used hardware working in tandem with software on a CPU inside it, I don't think it would be right for other companies to legally require that I build in some way of swapping out the software, so they could sell their own. That's extra work for me. I might do it as a courtesy, or if I thought it would increase my market share, but it should still be my choice.

Now, when you talk about an "app store", you are not just talking about swapping out firmware, or an operating system. You are talking about something with quite a few moving parts, many of them made of software. A basic "app store" relies on an operating system, a delivery method, a platform for money transactions, and an interface for search and discovery. On top of that, add digital signing and encryption, anti-malware scanning, a host of developer tools, advertising, content ratings and warnings, user reviews, a support mechanism...

There was no "app store" when Apple first released the iPhone. You could use Apple's software to put music and movies onto it, and (slowly) run a web browser, and that was it. They added an "app store" about a year later in what was effectively a firmware upgrade. I'm putting "app store" in quotes because, before Apple released that upgrade, there was no such term. (The closest equivalent was Handango InHand, which described itself in much more labored terms than "app store" because no one knew what the hell they did until they used it.)

My point is, Apple essentially created a new kind of marketplace on their device, and then developed and managed it - and the device - so well that the iPhone became the single most successful consumer product in all of history. Was it the wrong move to aggressively retain control of how all users of their device browsed and paid for new functionality? It depends on who you ask, since that store makes money for Apple (after effectively operating at a loss for years) and consistently makes more money for developers than any other.

(Before you jump to point out that there is some revenue split at work due to Android supporting multiple stores, note that this fact also remains true for in-app purchases and subscriptions, which do not split across stores. That is, a typical user of an app on iOS tends to send more money to the developer via in-app purchases and subscriptions than a typical user on any other store, making more money for developers despite the 30% transaction fee.)

Apple's userbase skews more towards people with disposable income. Companies like Epic know this, and know that they could extract even more money from that userbase if they somehow didn't have to pay Apple's 30% fee. Does that fact make them - or anyone - _entitled_ to that extra money? Why would it? If people can conceive of a way to divide part of a product out from the other parts, and theorize some profit stream inherent in just that part that they could grab for themselves if they swapped it out, does that mean they have a legal right to compel it into existence? Apple has that high-end userbase because they do a damn good job designing and programming their device AND their on-device marketplace. You want to draw a line between the two - device and "app store" - because -- why exactly? Because Google did? (Note that even Google insists you cannot distribute an alternate app store via their own Google Play store. They "allow" other stores to keep their appeal to the widest possible range of hardware manufacturers, but they personally loathe the idea.)

Since I'm on a roll, here's another thought experiment: Apple could have declined to add an app store, and declared that the iPhone would only contain Apple apps. It could have also made an app store and declared that all the apps on it HAD to be downloadable for free, and that furthermore they were not allowed to support third-party payment systems either, which would not have been a "store" per se; more like a free library. Would there then be some kind of populist movement to allow developers to charge money for them? In other words, should we be allowed to legally or morally compel a hardware developer to create an apparatus for us to sell our shit through their hardware, where none exists? Is it an injustice that Apple isn't allowing additional middle-men between their userbase and developers, to siphon off more money ... or is it an injustice that Apple is allowing anyone to siphon money at all, because compelling people to pay for changes in software on a piece of hardware they own is morally wrong?

By the way, if you insist on having a physical analogy for what's going on with Epic and Apple's App Store, here's a much better one you can throw around.

Apple is running a high-end hotel chain. There are other hotels out there, but when people can afford it, they stay in an Apple hotel. One of the things they like about the Apple room is the big, well-stocked fridge. Apple makes it clear that every time you take an item out of that fridge, they will add the cost to your bill. Apple also adds a 30% markup to the price of every item in that fridge.

Suppliers of drinks and snacks make a lot of money jockeying for position in that fridge, because people who stay in Apple hotels have disposable income and the fridge is very convenient.

Enter Epic Root Beer. They see Apple adding a 30% markup to the bottle in their fridge, and they want a bigger slice of that pie. So they modify their root beer bottle: Now it comes with a credit card slot. To open the bottle, you have to send Epic $5 via the credit card slot. Then they tell Apple the "price" of their root beer is zero. 30% of zero is zero, so Apple stocks the root beer in their fridge, but gets zero money for it.

You can see why Apple Hotels would boot Epic Root Beer out of their fridge.
garote: (conan what)
I have an amazing new product. It's a handgun with big metal truck nuts on it. Those midwestern guys are going to love it. But Walmart says they won't stock it on their shelves because it violates their safety guidelines. "This weapon is too front-heavy," they say. Bah, what do those pencil-pushers know about firearm design?

You know what I found out? Only Walmart gets to approve what Walmart puts on their store shelves! That's a god damn monopoly!! My attorney says so too, and so far he's taken $75,000 in fees researching my case -- but I'll surely win that all back and more.

A lot of people move through Walmart stores. If you can get your product on Walmart shelves, you could get massive sales. How can it be legal for those bastards to deny me access to their shelves? My product is GREAT! I mean think about it; the puns write themselves. "ARE YOU A GUN NUT? WELL HERE'S SOME NUTS FOR YOUR GUN!"

Okay, so, I know how to make this fair. What they should do is, just clear a bunch of space out in their parking lot, so I can set up my own store right where their customers park. Then they should knock an entire wall out of their store, so their customers can just wander randomly out into my store instead. So, they think they're in a Walmart - with that reputation for security and efficiency - but I get their money, and I don't have to pay a stocking fee, and when they shoot themselves in the foot with a TRUCK NUTS GUN because it's too front-heavy, they'll blame Walmart for their pain.

Sounds fair!
garote: (castlevania 3 sunset)
I know a professor - an anthropologist by trade - who was chosen to act as a mediator between the state of Georgia and the organization of American Indian tribes there. He told me that when he asked the leaders what they preferred to be called, they said “Indian”. He asked why and they said “Because no one is technically a Native American. But Indian is a unique name that we can define and use for ourselves now.”

I was surprised by that, and it stuck in my mind.

Some time later, I wandered into one of those stupid this-name-or-that-name arguments on Facebook. The author of the post declared that anyone defending the name “Indian” was actively encouraging racism and merely wanted to preserve their "right" to be offensive to others.

Attempting to bring some clarity, I brought up what my professor friend told me, in a detached and polite voice.

I was immediately accused of “lying to justify my racist beliefs.”

What can you do at that point except say “oh for crap’s sake” and walk away? It wasn't much later that I moved Facebook to a back folder on my smartphone and disabled all its notifications.

This is callout culture. It is a cry for attention wrapped in a call to arms. The people engaging in it may say they want greater empathy in the world, but they don't really. What they want is attention, and for the enemy they have identified to shrivel up, fail, disappear, and die. Like a fungus, it has pushed tendrils into every news and media network, but the heart of the infection is in Facebook and Twitter. (Instagram and Snapchat have their own flavor, more akin to straight-up bullying, which disproportionately affects younger people.) Something about these mediums has tricked massive amounts of people into believing that they are changing hearts and minds by punishing strangers. Or maybe not -- maybe they just want an excuse to vent hate.

But hey, we know all this, yeah? South Park threw a whole season at it last year.

Here's where things get even more interesting. Without a deep structural change, the future looks much darker.

In the technology sector we are only a few years away from constructing an artificial intelligence that can generate activity on the internet that is indistinguishable from the activity of living persons. Not random activity -- activity with motive. The ability to argue and persuade. Shortly after that we will find a way to package it, and oh yes, we will sell it.

The product reviews, the blog posts, the comments, the recycled jokes, even the editorials and essays and news reports you scour for real information, drifting down to you from beyond the people you know face-to-face, will go from 5% engineered, to 30% engineered ... to 80% engineered ... to 99% engineered ... and eventually it will all be engineered. Real people will be reduced to corks bobbing in a sea of AI-generated culture, opinion, and reporting. There will be no one online. Only screaming, whispering, capering machines.

And silicon is cheap. Imagine what it will be like when a wealthy person can rent some rackspace for a while and fire up an AI proselytizer for every single man, woman, and child on the planet, personally stalking them, making sure the buyer is portrayed in a positive light. Or, worse yet, making sure their enemies are shamed and slandered and hounded with fake outrage, crippling their social life and business. Money will be more than free speech; money will do more than talk. Money will call you up on video chat, stare at you with your mother's own face, and dare you to disobey.

If we're fortunate, other technologies will compensate for this. Perhaps we'll collectively make the choice to abandon almost all online interaction with strangers. Either way, we will at least be free of the infection of callout culture: We will know for certain that it is fake, and only the product of AI zombies burning money and fighting each other on comment boards that no human may ever read.

Prepare for Interesting Times!
garote: (wasteland librarian)
When I began putting things on the Internet over 30 years ago I made a promise to myself. I would not put anything online that was meant to be limited to specific people or a specific audience. I approached it like this: Either everyone in the world should see it, or no one. When I started a blog on LiveJournal 20 years ago I made the same deal. LiveJournal (now Dreamwidth) has a feature where you can make a post that is only readable by those on your friends list. I have not used this feature at all, and I never will.

There are a few things in the long tail of my internet presence, mostly from the 90’s when I was a teenager, that seem overly dramatic or petty to my middle-aged eye. Also a few things that seem deranged and sexist, but have value in context. I could take them down if I wanted. They are on personally hosted websites, and I could press the big red "off" switch any time. And year by year I feel more convinced that I eventually will. Especially one website in particular:

20 years ago, as a joke, my friends and I formed an industrial plunderphonics band in a garage in the Lee Vining desert, and have been irregularly posting "albums" freely online. I'm proud of them and they were great fun to make and brilliant in places, but they are also loud, profane, absurd, and mostly terrible, and I have made sure to never mention them to anyone I work with. Of course, thanks to the insidious efficiency of Google, the band website comes up when you enter my name in their search engine. Cat's out of the bag. Technically all that crap is just a Google away from anyone who wants to learn about my awful creativity. As far as I know, no one I work with has. (Though how would I know?)

And what would happen if they did? What if they then found it offensive, and then splashed it all over social media, drawing the ire of a million anonymous trolls?

Well, I guess I would have to just shrug, and take it down. I certainly know better than to argue back.

I feel like it's inevitable. To tell you the truth, I feel like there is now an anonymous army of people connected to social media, who consider it their daily entertainment to be handed a clear-cut piece of offensive material and a source, so they can immolate the source, and it's only a matter of time before I get roasted alive.

(I’ve written about this before. The only way to win is not to play.)

But my point here is, I know why this is happening. And it is not a subversion of the internet itself, or an inevitable decline in society. It is a direct consequence of "growth at all costs" capitalism.

Let me put it bluntly: The Internet used to be the domain of the middle class.

You needed a middle-class income or university admission just to get on it. So, whatever you put there was visible to everyone -- in the middle class. And they were generally accepting and quietly liberal-minded. (Some of the very first stuff to drift out across the internet was hardcore erotic fan fiction, spicy enough to obliterate careers if it was linked to its authors today.) For years after that you at least needed to have a passing interest in your local library and a willingness to type. Then the barrier to entry was a monthly fee on top of your phone and cable service, and enough space in your house for a cheap computer.

That is all way, way over, as it should be. Today's barrier to entry is 25 bucks for a used cell phone or tablet, and proximity to a McDonald’s. It's a wonderful thing.

But it also means that the internet is now bursting with people who are extremely cost conscious. The working class and poor are able to get online cheaply, and now they need services that are cheap - or free - to form their online experience. And how do you offer a service for free? You don't. You pick a different paying customer: Advertisers.

So you form a company around some service designed to extract attention and labor from the working-class and the poor, then sell it to advertisers. The more people you attract, the more money you make. Competition goes up, margins go down, and social media companies must grow or they get eaten. And the quietly liberal and egalitarian middle class is thoroughly buried and scattered amongst a horde of people connected to the internet who are being exploited: They get communication tools and entertainment "for free", and as part of the bargain they are drip-fed a constant, addictive stream of paranoia, shame, and rage. And rage needs targets.

Social media companies are not the internet, but they have a presence on it like a kaiju stomping through a city, and woe unto you if you're in the way.

So today, if I put anything online without a filter, I am now cannon fodder for the entities that feed and manipulate that horde of newcomers. When I put something online, it potentially reaches everyone, but most likely, it remains totally buried in the flood of other stuff until an algorithm plucks it out of the current and decides to show it to thousands, then millions, of people I have no interest in reaching. And they will feel upset, offended, and vengeful, because that's what the algorithm is designed to evoke. Conflict sells ads. And, there's no better distraction to cover your skullduggery than dropping a turd in the public pool.

If explosive growth wasn't a necessity, we would not be here. Ladies and gentlemen, this is what people mean when they use the term "late-stage capitalism."

So what are we all going to do? Same thing we do every time, Pinky. Try to put up some walls to keep the algorithms out. The days of erecting a webpage visible to the whole world are rapidly ending. Why would you do that? It's just free material for algorithms to use against you. Up with the filter barriers, up with the credentials and the tests.

And there's no shame in it ... only regret.
garote: (Default)
It keeps popping up in forums, Facebook, and live conversations: Some computer nerd (I use the term with affection, for I am one) brings up the concept of “Universal Basic Income” and declares that we had better start advocating for it, because we’re inventing amazing new technologies that will totally destroy so many people’s jobs that people will simply starve unless we hand them free money.

I don’t see it that way, because I don’t buy into the assumption that lurks beneath it: That some huge proportion of the human race is only qualified for one job, and would be instantly and permanently useless if that job went away.

As a counterexample I don’t have to look any farther than the mirror. Computer programmers completely reinvent their skillset every five years or less, moving from one set of problems to another. And yet, the more problems we solve, the more programmers we need. When self-driving trucks put truck drivers out of work, they won’t stay truck drivers for long. They’ll become mechanics, remote pilots, construction crew ... or maybe they’ll just ride around in the trucks they used to drive, keeping watch, scaring off highwaymen with a rifle and a drone shooting tranquilizer darts. Or maybe they’ll just quit, since everything is now slightly cheaper and they can stop being a dual-income household and spend time with their kids, like people imagine the 1950’s somehow was for the entire planet.

Hmm. Maybe that’s the pattern. I only ever see Universal Basic Income floated by single, childless men with little or no connection to their extended family. As if government-fed bachelorhood was the resting state of every person on the planet. Interesting.

As long as humans exhibit the basic trait that drove all innovation beyond survival - that of sexual competition - they will invent ways to pay each other for the means to move slightly ahead in whatever arbitrary yardstick is fashionable at the time. Automation has had zero effect on this behavior in ten thousand years. At the same time, it has clearly not done enough to eliminate extreme and abject poverty, which is still rampant in places far away from the air-conditioned cubicles and taco trucks of the Silicon Valley.

Maybe that's another thing that feeds into this assumption. In the Silicon Valley, you're either an engineer of some kind, or you're one of those other people, doing some other mysterious job, for way less money. That's what it looks like to the engineers, anyway, because when they're not hanging out with other engineers, who do they interact with? Food vendors. Baristas. Store clerks. Ticket takers. The FedEx delivery man. The cleaning lady. Their world consists of work, and paying money to have everything else catered so they can work even more.

So they mentally divide the world into two groups: Engineers who pull down big bucks from the sky, and other people who scrape a living together ... for now ... serving the engineers.

Naturally the solution that comes to them is to take these other poor people and stick them in a walled garden where everything is pleasant and catered, though boring, and meanwhile the engineers can move underground and do even more engineering ... and occasionally sneak up in the dead of night to kidnap some surface-dwelling dunce for a cannibalistic blood ritual.

It's the future!!
garote: (ultima 4 combat)
The main character is obviously in the ideal physical shape for the crazy parkour crap they have her do all day long. I remember looking at Nathan in the previous game and thinking the same thing. I also remember thinking: "There are lots and lots of men who are just genetically not designed to be in this shape. They're too fat, or too skinny, or too tall or wide to be as good at this stuff. And yet, we all remember being kids, running around and climbing on things. Perhaps that's how we can relate to this game. Does steering Nathan and Chloe across exotic worlds remind us of climbing the jungle gym in the schoolyard? Perhaps so."

But, all the wall climbing and parkour that these characters do is deceptive. Every other move they make defies gravity, and it's too easy to think "hey, if I got in better shape, I could do the same stuff!" No. Ten seconds hanging off the edge of a building by my fingertips is already way too long.

These environments are always such overload. In the real world, any one of these buildings would be a major find and a tourist attraction. In this game, almost all of them are disposable. You drive past them, blow them up, run through them, bash them over, and leave behind hundreds of bizarre artifacts to get buried in mud or washed away, all because they are not the single item relevant to the plot. It feels really self-contradictory.

Take this hidden city for example. It's a square mile of gorgeous carved stone and brickwork, all inside a cavernous valley with rivers flowing through it, and enough jungle inside to support families of elephants. In the real world, it would be a global must-see tourist attraction, and with careful development could enrich countless lives and generate millions of dollars a year, for hundreds of years. But! There is one specific artifact, about the size of a burrito, hidden somewhere inside it. So we're gonna blast our way in with mortars and C4, snatch up the artifact, and leave the rest of the place to the rats.

It's even more contradictory when you think about how many people play these games, myself included. We just go through the environments at a walking pace, panning the camera around and absorbing what we see and hear. It's an opportunity to relax in a designed environment -- or at least hallucinate that we are in one. Strange that it's sandwiched between action scenes that play out like terrorist attacks, eh? Humans are weird.

This is the first Uncharted game with a female protagonist, and she has a female companion. I get the impression the studio tried to avoid any hint of sexism in the dialogue by just writing it for their usual character Nathan, then tweaking it a little to add gender-specific jokes. But perhaps that's the only way to do this, since Nathan himself is an unreal character? That is, he's purpose-built for far flung adventures full of "extreme sport" athleticism and gunfire. Maybe the only way to write a character plausibly capable of doing those things is to start from a template that is often stereotyped as male, and Chloe and Nathan sound alike because of that.

Oh, I don't know. Some of my favorite female characters are from Terry Pratchett novels, e.g. Granny Weatherwax and Nanny Ogg. I don't question Terry's approach to them, I just take them at face value. I guess I'll take Chloe at face value too and forget about the analysis.
garote: (ancient art of war china)
How does a person figure out what "generation" they're in? I think the only way is in retrospect. You need to be there when it begins, and live long enough to see how it's going to end. Well, enough time has passed that I can see the borders of mine.

My generation is the one that grew up with a particular thing in the house: A box full of electronics, heavy enough that it had to sit on a desk or the floor, with wires connecting it to a big rectangular viewing screen and a big rectangular keyboard made of physical buttons. It could connect to the internet, but poorly, and only with wires.

The end of my generation came with the rise of the smartphone. The computer is now considered the more serious and nerdy version of the smartphone, not the other way around. If there's a computer in the house it's almost always a laptop, or a sleek, sealed appliance, and it lives and breathes high-speed internet. It's not its own universe any more, but a gateway to another one. For the new generation, a computer without an internet connection is a broken computer.

The big box of isolated electronics was a critical part of my childhood development -- intellectually, socially, artistically, even emotionally. Now I am in middle-age, and those boxes are almost all gone, buried in landfills or smashed apart to recycle their guts. They linger as museum oddities, or nostalgic set decoration. No generation before mine had a chance to grow up in the home computer era, and no generation after mine will have it either.

Perhaps there are earlier examples in the long history of humanity where technological progress moved so fast that a generation found itself obsolete when it reached middle age ... but I can't think of any just now.

Today that kind of speed is a given. The generational borders feel narrower every year. Or perhaps that's just me, slowing down, and paying less attention. But, it seems to me that there's a gulf of experience between the first kids to wait in line for the Harry Potter movies, the first kids to play Guitar Hero, the first kids to like Justin Bieber, the first kids to dance Gangnam Style, and the first kids to play Pokemon Go, even though to me, they're all just kids. I wonder how they'll define their borders, in 20 years?

Anyway, in the 1980's, it was all about that big box of electronics, whether you had one at home or messed with one at school. As the box grew in versatility over the years, so did my creative ambition. Looking back, I'm surprised at all the different ways I found to use it. For my own amusement I made a list of creative activities, describing the first time I used a computer for each activity. After I made the list I was surprised how much of it happened before the age of 20.

Clickety click for the list... )

Not sure what the point of that was. I've been in a list-y mood lately, I guess!

Code exams

Oct. 25th, 2017 11:08 pm
garote: (Default)
I recently stepped from one job into another. On Friday I was working for the Berkeley labs, then on Monday I was working for a startup company across the street. From my new office, I can look out the window and see the sunlit hallway in the old building, where I used to take snack breaks. My bicycle commute is exactly the same. Nice!

The old job was being cut due to lack of funding. I was going to spend a while "funemployed" after that, but with a couple weeks to go I changed my mind and hit the job market. During those few days I brushed up my interview skills. This drove me a bit bonkers.

There are a zillion companies now that claim to be able to train you for an interview -- or interview your candidate for you, with multiple-choice quizzes and in-browser code tests. These vary in quality and content, more than you can imagine. The worst of them are crowd-sourced, going for quantity so they can boast "thousands of questions" on their press material. All the questions - multiple choice, fill in the blank, whatever - are weird nit-picky crap. All of them. The ones that aren't are so few that they're within the margin of error in the measurement and don't matter.

I took a quiz on Bootstrap and the first question was:

"Which of the following classes are generated on a blue button in the UI: btn-blue, button, ui-btn, blue?"

Are you fucking kidding me? Is that knowledge absolutely essential in sorting out n00bs from rock stars?

"Does Javascript's getDay() function return the weekday starting from 0 or from 1? Does the lowest value represent Sunday, or Monday?"

Oh wow, yeah, I need to look that up at least twice a day. I should have it memorized.

"What's the default duration of a jQuery.fadeOut() call?"

Answer: Fuck you.

Then there are the sites where you actually write code. They are better, but they have their own problems. Sometimes the answer they expect is very, very specific, and if you don't follow their formatting, or put commands in the same arbitrary order, you lose. Eventually I settled on the Codility website, which didn't suck. I barreled my way through ten of their tests, picking them at random, switching between Python and Javascript. Felt pretty good! Then I hit a problem about calculating "semi-prime" numbers.

Find the number of semiprime numbers (numbers that are the product of two primes) within the range (P, Q), where 1 ≤ P ≤ Q ≤ 50000. Your program must have a worst-case run time of O(N*log(log(N))+M) or better.

There is exactly one way to solve this, and it means implementing a modified version of an algorithm called the “Sieve of Eratosthenes”. Go read about it; I'm sure you've never heard of it.

This problem is dropped in with a bunch of others about sorting arrays, walking trees, and munging lists. Guess how often the Sieve of Eratosthe-butt-ass has come up in thirty years of a very respectable programming career?  That’s right. Not once. But you either know it, or you don't. Or you're Nicomachus of Gerasa from the year 60, and you come up with it on the spot.
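(For the curious, here's a minimal sketch of the trick. The function name and structure are my own, not Codility's reference solution: run a sieve up to the limit, recording each number's smallest prime factor; a number is semiprime exactly when dividing out that smallest factor leaves a prime; then answer each (P, Q) query in constant time from a prefix-sum array, which is where the O(N*log(log(N))+M) bound comes from.)

```python
def count_semiprimes(n, queries):
    # Sieve of Eratosthenes variant: record the smallest prime
    # factor (spf) of every number up to n.
    spf = [0] * (n + 1)
    for i in range(2, n + 1):
        if spf[i] == 0:  # i is prime
            for j in range(i, n + 1, i):
                if spf[j] == 0:
                    spf[j] = i

    # k is semiprime when k / spf[k] is itself prime,
    # i.e. it is its own smallest prime factor.
    prefix = [0] * (n + 1)
    for k in range(2, n + 1):
        rest = k // spf[k]
        is_semi = rest > 1 and spf[rest] == rest
        prefix[k] = prefix[k - 1] + (1 if is_semi else 0)

    # Each (p, q) query is answered in O(1) via prefix sums.
    return [prefix[q] - prefix[p - 1] for (p, q) in queries]
```

Clever, sure. Just not something anyone derives cold in a 30-minute timed test unless they've seen it before.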

You know what this is? This is like qualifying a construction worker to operate heavy machinery by testing his skills with the drop-claw game at the pizza parlor.

Anyway, I quit after that. I'd gone far enough. Any interview that would ask me about the Sieve of Erazzo-poof-shart is an interview I'd be happy to flunk.

garote: (programmer)

I made a list like this about 7 years ago. Today I wondered: What's changed?

A lot less is honestly impressive now, so I've rearranged the entries, and added some new stuff (the items in green).

Totally unsurprising:

  1. Call people on the phone.
  2. Keep an address book that is synchronized online.
  3. Keep appointments with a calendar that is synchronized online.
  4. Set alarms and timers, including vibrating alarms.
  5. Do basic math.
  6. Type and sync unformatted notes.
  7. Send and receive emails, text messages, instant-messages, twitter alerts, et cetera.
  8. Record, play back, and sync voice memos.
  9. Use as a portable hard-drive (Air-Sharing, FileMagnet).
  10. Estimate currency conversions using up-to-date ratios (Currency).
  11. Take photos with GPS tags embedded, and post them online or send them to people immediately.
  12. Make international telephone calls at a discount (Skype, etc).
  13. Get local and remote weather forecasts.
  14. Watch movies in a tiny screen (Netflix).
  15. Purchase and read e-books and music.
  16. Pair with a physical keyboard for easier data-entry.
  17. Subscribe to video/audio podcasts, play them, and download current episodes.
  18. Scrawl pictures with my finger and save them (Scribble).
  19. Download and install enhancements to the device (App Store).
  20. Record a track of my physical location, and play it back later.
  21. Remotely view and crudely interact with the screen of my desktop or laptop (VNC, WinAdmin)
  22. Search on a map for services of all kinds, and call them up on the phone with one button.
  23. See a view of my living room, from a wifi camera attached to the wall, in real time, from across the country.
  24. Take a picture of a document and have it automatically read all the text on the document and turn it into a PDF.

Somewhat impressive or surprising:

  1. Mark areas of poor signal coverage and automatically report them to my provider.
  2. Connect to a television and present movie and slide shows.
  3. Calculate resistor color codes (OhmEE, ResistorCC).
  4. Record and do minor edits to a video, then place it online or send it to someone immediately.
  5. View and manage my bank accounts fairly securely.
  6. Locate the nearest movie theaters, see their schedules, and book tickets (Fandango, Flixster).
  7. Lose all my money in the stock market (E*TRADE Mobile Pro).
  8. Listen to a continuous mix of new music that the device thinks I will like, based on an ongoing analysis of my selections (Pandora).
  9. Wirelessly control nearby lighting fixtures, dimmers, and consoles (Luminair, for DMX lighting control).
  10. Display a number pad, and pair it with a nearby computer keyboard that lacks a number pad (NumberKey).
  11. Spot tornadoes and get advance warnings with weather graphs (Radar Scope).
  12. Browse my home music collection on it and play music through speakers in different rooms of my house (Remote with an AirPort Express).
  13. Track plane flight status, with real-time departure info, gate delays, and flight locations (FlightTrack, Live Flight Tracker).
  14. Attach a thumb-sized credit card reader and conduct business transactions (Square).
  15. Have a two-way video chat with someone in another country.
  16. Get a map, satellite view, or street view, all over the world, see my present location, and calculate walking or driving directions.
  17. Ask basic math questions out loud, and get the answer spoken back to me, e.g. "What's the square root of 1207?" "The answer is approximately 34.7419."
  18. Automatically grab photos and videos from my Canon DSLR camera, as I take them, and perform a variety of scripted actions on them. (ShutterSnitch)
  19. Poke a button in my chat history with a person, and see their exact location (assuming they're with their phone) on a map, accurate to within the last 5 seconds.
  20. Learn a new language 15 minutes at a time, on an app that speaks the language back to me.
  21. Attach a cardboard sleeve to the phone, with a pair of lenses in it, turning it into a 3D VR headset that can play back videos I record with my 360-degree handheld recorder.
  22. Track packages and get a notification seconds after they're placed on my doorstep.
  23. Automatically gather stats on my car's fuel efficiency, diagnose check engine light problems, and compile maps and mileage info on all my car trips. (Automatic)

Impressive or surprising:

  1. Mine a database of real-estate listings, including purchase and tax histories. (ZipRealty).
  2. Search for and then book international flights and hotels across multiple airlines (KAYAK HD).
  3. Carry and use a reference for how to recognize various animal tracks (MyNature Animal Tracks).
  4. Carry and use a reference for how to tie various knots, with video and written tutorials (Knot Guide).
  5. Scan barcodes of almost any product, accessing a worldwide database of products to both identify the item scanned and provide comparative pricing and locating (RedLaser).
  6. Control the presentation of slideshows (Keynote Remote).
  7. Tune my guitar (Guitar Toolkit, OmniTuner, TyroTuner).
  8. Record my voice as I sing along to music, measure my accuracy, and apply automatic pitch correction and harmony (Glee Karaoke).
  9. Measure the level and slope of flat objects and sides (Clinometer).
  10. Make a surprisingly accurate guess at the title of whatever music is playing in the environment (Shazam).
  11. Strum a mathematically emulated guitar (Twang).
  12. Mine and cache a real-time database of plane preflight information, including icing forecasts, wind mappings, radar and satellite images, flight rule and terminal procedure listings, approach plates, VFR and IFR charts, etc (ForeFlight).
  13. Act as a crude and uncalibrated seismometer (Seismometer).
  14. Hold the phone up to the sky and get a map of what constellations should be visible in that direction (Starmap, Star Walk).
  15. Remotely lock, unlock, and start my automobile (Viper Remote Start System, Mercedes-Benz mbrace).
  16. Record the amount of tossing and turning done in bed, and use the data to time a wakeup alarm to avoid REM sleep (Sleep Cycle).
  17. Automatically report back to the public works department when I hit a pothole in the road, so the accumulated data can be used to dispatch repairs (Street Bump).
  18. Get an automatic announcement about which lane I need to move to as I approach an interchange on the freeway.
  19. Shoot video that is processed to look like an ink sketch on paper, in real-time, at 60 frames a second.
  20. Secure my phone with my fingerprint, scanned fast enough that it unlocks the phone in less than a quarter of a second.
  21. Have my photos automatically organized by who's in each one ... including photos of my cat.
  22. Attach it to a wireless controller, and fly a drone with it, showing and recording its location and everything it sees. (DJI Go 4)
  23. Hold it up to a sign written in a foreign language, and have the translation appear in the picture as though it's written on the sign. (Translate)
  24. Rent a bicycle from a kiosk downtown. (Zagster)

Very impressive or surprising:

  1. Locate and reserve a nearby rental car, and when I get to it, unlock it (Zipcar).
  2. Explore 3D recreations of large cities around the world, at 60 frames a second, so detailed that I can see into the windows of my own car parked on the street.
  3. Summon a person to my door, driving their own car, who will then take me to my destination for less than a taxi would charge. (Lyft)
  4. Secure my phone with a 3D scan of my own face, more accurate than using my fingerprint, validated in less than half a second.
  5. Speak to the phone in English, and have it translate my sentence into Mandarin and speak it back to me, after less than a second of delay.

What do you think, fellow modern people? Are there any items here I've forgotten about? Any new developments?
