30.9.10

Physics Buzz: Hawking & Mlodinow: No 'theory of everything'

In a Scientific American essay based on their new book The Grand Design, Stephen Hawking and Leonard Mlodinow are now claiming physicists may never find a theory of everything. Instead, they propose that a "family of interconnected theories" might emerge, each describing a certain reality under specific conditions.

Most of the history of physics has been dominated by a realist approach. Scientists simply accepted that their observations could give direct information about an objective reality. In classical physics, such a view was easily defensible, but the emergence of quantum mechanics has shaken even the staunchest realist.

In a quantum world, particles don't have definite locations or even definite velocities until they've been observed. This is a far cry from Newton's world, and Hawking and Mlodinow argue that, in light of quantum mechanics, it doesn't matter what is actually real and what isn't; all that matters is what we experience as reality.

As an example, they talk about Neo from The Matrix. Even though Neo's world was virtual, as long as he didn't know it, there was no reason for him to challenge the physical laws of that world. Similarly, they use the example of a goldfish in a curved bowl. The fish would experience the curvature of light as its reality, and while its picture wouldn't be accurate to someone outside the bowl, to the fish it would be.

Scientific American: The Elusive Theory of Everything (paywalled)

"In our view, there is no picture- or theory-independent concept of reality. Instead we adopt a view that we call model-dependent realism: the idea that a physical theory or world picture is a model (generally of a mathematical nature) and a set of rules that connect the elements of the model to observations. According to model-dependent realism, it is pointless to ask whether a model is real, only whether it agrees with observation. If two models agree with observation, neither model can be considered more real than the other. A person can use whichever model is more convenient in the situation under consideration."

This view is a staunch reversal for Hawking, who 30 years ago argued not only that physicists would find a theory of everything, but that it would happen by the year 2000. In his inaugural lecture as Lucasian Professor at Cambridge, titled "Is the end in sight for theoretical physics?", Hawking argued that the unification of quantum mechanics and general relativity into one theory was inevitable, and that the coming age of computers would render physicists obsolete, if not physics itself.

Of course, Hawking has become rather well known for jumping way out on a limb with his public remarks, and for decades he embraced supergravity as having the potential to cure theoretical physicists' ills, even hosting a major conference on it in 1982. However, Hawking has never harbored allegiance to theories that claim to describe a physical reality.

So, while two well-known physicists coming out against a theory of everything is compelling, it really shouldn't seem like anything new for Hawking.

"I take the positivist view point that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All that one can ask is that its predictions should be in agreement with observation."

Stephen Hawking, The Nature of Space and Time (1996)

-Flash

29.9.10

Practical Software Design - A few thoughts! | Javalobby

13.9.10

Mathematics under the Microscope - How good software makes us stupid

An article by Dave Lee on the BBC website: the title sums it up perfectly. It quotes an earlier article by Bill Thompson, Between a rock and an interface, which in turn quotes a research paper on user interfaces by Christof van Nimwegen, The paradox of the guided user: assistance can be counter-effective (2008). I quote the abstract:

This project investigates the conditions under which externalizing interface information by interface controls influences users' performance in solving problems requiring planning. Our main research question was: in tasks where planning is required, which interface style leads to more plan-based behavior, better strategy, and consequently better task performance? And besides immediate performance, which interface style causes better knowledge of the tasks and solutions afterwards in a transfer situation (with altered task/interface circumstances), or when a severe task interruption occurs? To answer our questions, two series of controlled experiments were conducted using two interface styles: one version in which certain information is externalized onto the interface (Externalization) and another version where this is not done (Internalization). In the Externalization version the operators in the interface conveyed information; in the Internalization version this was not the case. The first series of experiments used a computerized isomorphic version of the well-known "Missionaries & Cannibals" problem, called "Ball & Boxes". The second series of experiments used a more realistic office-like application called "Conference Planner". Immediate and delayed performance when using the Externalization interface was worse than when using the Internalization interface. Transfer of skill was also worse for users of the Externalization interface, both to another task and to another interface. These users were characterized by display-based behavior. Subjects that used the Internalization interface imprinted relevant task and rule knowledge better and were not affected by a severe interruption in the workflow, whereas Externalization subjects were. We conclude that users who internalize information themselves behave in a more plan-based manner, are more proactive, and are readier to make inferences.
This in turn results in more focus, more direct and economical solutions, better strategies, and better imprinting of knowledge. This knowledge is easier to recall at a future point in time, better transferable to situations where the interface, the task, or both are different, and less vulnerable to a severe interruption. Human-computer interaction designers can take advantage of considerations that go beyond plain usability, even when they go against common sense. In designing interfaces we have to take care with interface cues that give away too much information, and must design in such a manner that the way users (should) think is optimally supported, which in turn could help the software achieve its specific goal. Examples are situations where risky and complex tasks are performed, and where a user is suddenly confronted with a new situation. One can also think of situations in which interruptions are commonplace, or where operations come with a cost and direct solutions without deviations are the aim. Based on our findings, an interaction framework is proposed that can guide decisions regarding interface design.
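As a side note for readers unfamiliar with the planning task behind the first experiment: "Missionaries & Cannibals" asks how three missionaries and three cannibals can cross a river in a two-person boat without cannibals ever outnumbering missionaries on either bank. This has nothing to do with the paper's software, of course; it is just a minimal breadth-first-search sketch of the underlying puzzle, to show why it demands plan-based rather than display-based behavior:

```python
from collections import deque

def missionaries_cannibals(total=3, capacity=2):
    """Shortest solution via BFS over states (missionaries_left, cannibals_left, boat).

    boat = 0 means the boat is on the left bank, 1 on the right.
    Returns the list of states from start to goal, or None if unsolvable.
    """
    start = (total, total, 0)   # everyone on the left bank
    goal = (0, 0, 1)            # everyone (and the boat) on the right bank

    def safe(m, c):
        # A bank is safe if it has no missionaries, or at least as many
        # missionaries as cannibals; check the left and right banks.
        left_ok = (m == 0 or m >= c)
        right_ok = ((total - m) == 0 or (total - m) >= (total - c))
        return left_ok and right_ok

    # All boat loads of 1..capacity people (dm missionaries, dc cannibals).
    moves = [(dm, dc) for dm in range(capacity + 1)
                      for dc in range(capacity + 1)
                      if 1 <= dm + dc <= capacity]

    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:     # walk parents back to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        m, c, boat = state
        sign = -1 if boat == 0 else 1    # crossing moves people off the boat's bank
        for dm, dc in moves:
            nm, nc = m + sign * dm, c + sign * dc
            if 0 <= nm <= total and 0 <= nc <= total and safe(nm, nc):
                nxt = (nm, nc, 1 - boat)
                if nxt not in parent:    # first visit = shortest path in BFS
                    parent[nxt] = state
                    queue.append(nxt)
    return None
```

The classic 3-and-3 instance needs eleven crossings, and the intermediate states give no local cues about which move is right, which is what makes it a good probe of planning.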

I had a similar experience when I used two software packages in my classes. I will write about it shortly.
