Sequenced Bits

Limits of Visual Processing

Normally it’s difficult to reach a question on StackOverflow and answer it before it already has plenty of responses, but on some of the less visited StackExchange sites questions last longer. I came across this one on programmers.stackexchange:

How much information can a user reasonably process from a UI

This happens to be a topic I’ve done some work on and was able to post an answer before the page was saturated.

My answer reproduced with minor edits:

There is research into this topic, but it will give you a complex answer. You can increase how much a person takes in from a UI by using different sensory modalities rather than just one. For example, using sights and sounds together, you may be able to pump more information into a user than using sight or sound alone. There are also findings suggesting that if your user has to really process or think about the inputs, there are more significant bottlenecks that are harder to avoid even if you cross sensory modalities. Training helps: expert users can process more, but in typical cases you will run into limits.

But to get down to your question of how fast you can change the display in a particular table: you can look into the psychology literature on “Attentional Blink” and the “Psychological Refractory Period (PRP)”, but the general advice I can give you from that is: don’t push changes faster than one every 500 ms for a single watched location. Typical users can need that much time to process even a simple, single-location change. If you’re updating continuously, 500 ms is a speedy but perhaps roughly workable rate. You may be able to push down to 250 ms, but that will depend on what percentage of your users you’re willing to put off. Also, if your users have to scan multiple locations for possible changes, you may have to slow down even from a 500 ms change rate. That doesn’t necessarily mean 1000 ms for two locations; the relationship isn’t linear, and the answer there will be more complex and depend much more on what your UI looks like exactly.
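One simple way to honor that 500 ms floor in code is to coalesce rapid changes and only surface the most recent value once the interval has elapsed. The sketch below is a hypothetical Python helper, not tied to any particular UI toolkit; the `ChangeThrottle` name and the 0.5 s default are assumptions following the guidance above.

```python
import time


class ChangeThrottle:
    """Coalesce rapid value changes so a watched display location
    updates at most once per interval (500 ms by default).
    Hypothetical helper, not from any UI framework."""

    def __init__(self, min_interval=0.5):
        self.min_interval = min_interval
        self._last_shown = None   # time of the last displayed change
        self._pending = None      # newest value not yet displayed

    def offer(self, value, now=None):
        """Offer a new value. Return it if enough time has passed
        since the last displayed change; otherwise buffer it and
        return None (older buffered values are discarded)."""
        now = time.monotonic() if now is None else now
        if self._last_shown is None or now - self._last_shown >= self.min_interval:
            self._last_shown = now
            self._pending = None
            return value
        self._pending = value
        return None

    def flush(self, now=None):
        """Return the buffered value once the interval has elapsed,
        or None if there is nothing ready to display."""
        now = time.monotonic() if now is None else now
        if self._pending is not None and now - self._last_shown >= self.min_interval:
            value, self._pending = self._pending, None
            self._last_shown = now
            return value
        return None
```

A UI loop would call `offer` whenever the underlying data changes and `flush` on a timer; intermediate values are dropped rather than queued, which matches the idea that users only register the latest state of a watched location.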

Even though the information I provided is technically correct, the answer to go with in a practical setting is: “Complicated. Requires exploratory testing.”

Software Prices 1987-2010

I found some old issues of COMPUTE and BYTE magazine on eBay and decided to buy them. The articles are not quite as riveting as when I flipped through them as a kid; instead, the ads are now the more interesting part. I found prices for old software such as Lotus 1-2-3, WordPerfect 4.2, and a golf game called World Class Leaderboard, adjusted the old prices for inflation, and looked up the prices of the modern equivalents.
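The inflation adjustment is just a scaling by the ratio of price-index levels. As a minimal sketch: the CPI-U annual averages used here (roughly 113.6 for 1987 and 218.1 for 2010) are approximate, and the $495 list price is purely illustrative, not a price taken from the magazine ads.

```python
# Approximate CPI-U annual averages (assumptions for illustration).
CPI_1987 = 113.6
CPI_2010 = 218.1


def to_2010_dollars(price_1987):
    """Convert a nominal 1987 price into 2010 dollars by
    scaling with the ratio of CPI levels."""
    return price_1987 * CPI_2010 / CPI_1987


# An illustrative $495 list price works out to roughly $950 in 2010 dollars.
print(round(to_2010_dollars(495.00), 2))
```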

Here’s the comparison:

Software prices 1987 vs. 2010 (in 2010 dollars)

The drop is dramatic, and even more so when you consider that the comparison doesn’t take into account that:

  • Office is a whole suite of software rather than just a word processor or spreadsheet
  • The 2010 software will be much better along almost any dimension that matters to users
  • As a share of disposable income the modern software is even more affordable than the graph suggests
  • There are even free software packages that would deliver far more functionality than the 1987 equivalents

It’s good to see that it’s not only hardware that keeps getting better and cheaper. It’s interesting to speculate about why this might be happening in software. A lot of it is likely due to market growth, which allows lower prices per unit, but some of the difference may be driven by an increase in software development productivity.