Recent articles

Thursday, November 19, 2009

Releasing the Chromium OS open source project

The open source project designed to create the capstone of Google's cloud computing model has been unveiled. Google expects that within a year, Chromium OS will be ready to capture the hearts and souls of the Internet-addicted, and at the same time relegate Windows, Mac and Linux to the dustbin of the ages -- at around the same time that every mobile phone on the planet will be running Google's Linux-based Android OS. Yes, that's right... GoogleWorldDomination by 2012.

Today we are open-sourcing the project as Chromium OS. We are doing this early, a year before Google Chrome OS will be ready for users, because we are eager to engage with partners, the open source community and developers. As with the Google Chrome browser, development will be done in the open from this point on. This means the code is free, accessible to anyone and open for contributions. The Chromium OS project includes our current code base, user interface experiments and some initial designs for ongoing development. This is the initial sketch and we will color it in over the course of the next year.



Read more...

Sunday, November 15, 2009

Scientists demonstrate 'universal' programmable quantum processor

NIST postdoctoral researcher David Hanneke at the laser table used to demonstrate the first universal programmable processor for a potential quantum computer. A pair of beryllium ions (charged atoms) that hold information in the processor are trapped inside the cylinder at the lower right. A colorized image of the two ions is displayed on the monitor in the background. Credit: J. Burrus/NIST

Physicists at the National Institute of Standards and Technology have demonstrated the first "universal" programmable quantum information processor, able to run any program allowed by quantum mechanics -- the rules governing the submicroscopic world -- using two quantum bits (qubits) of information. The processor could be a module in a future quantum computer, which theoretically could solve some important problems that are intractable today.
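
For context on what "universal" means here -- this is a standard textbook point, not something specific to the NIST device -- arbitrary single-qubit rotations together with one entangling two-qubit gate such as CNOT are enough to approximate any two-qubit operation:

    R_{\hat{n}}(\theta) = \exp\!\left(-\,\frac{i\theta}{2}\,\hat{n}\cdot\vec{\sigma}\right),
    \qquad
    \mathrm{CNOT} =
    \begin{pmatrix}
        1 & 0 & 0 & 0 \\
        0 & 1 & 0 & 0 \\
        0 & 0 & 0 & 1 \\
        0 & 0 & 1 & 0
    \end{pmatrix}

Any two-qubit unitary can be decomposed into a finite sequence of gates from such a set, which is the sense in which a programmable two-qubit processor can run "any program allowed by quantum mechanics."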

Read more...
Read the paper...

How to build your own "Minority Report" surface...

It took Microsoft ten years and millions of dollars to build their touch screen surface.

A group of innovative young designers from the Umeå Institute of Design in Sweden show us how to do it ourselves... in a few days on a shoestring budget...



Video: "the 4th week video report" from mengmeng, on Vimeo.

Saturday, November 14, 2009

Google's new language "Go" ... please Stop!!


Go is a new experimental language from Google Labs. It is still in the early stages of maturity, but there is more than enough of it to get a feel for what the final product intends to be; and sadly this language appears destined for the language FAIL file.

Go is a compiled, statically typed, C-style language that appears to amalgamate features from dynamically typed languages such as Python and Perl, plus garbage collection in the style of Java. It supports multi-threading and concurrency control, and has a rather unusual lightweight type system.
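
To make that flavour concrete, here is a minimal sketch of the concurrency style Go ships with -- goroutines communicating over channels. The program is my own illustration, not taken from Google's materials:

    package main

    import "fmt"

    // square reads integers from in, squares them, and sends the
    // results on out. Run in its own goroutine, it communicates with
    // the rest of the program only through the two channels.
    func square(in <-chan int, out chan<- int) {
        for n := range in {
            out <- n * n
        }
        close(out)
    }

    func main() {
        in := make(chan int)
        out := make(chan int)

        go square(in, out) // spawn a concurrent worker

        go func() {
            for i := 1; i <= 5; i++ {
                in <- i
            }
            close(in)
        }()

        for v := range out {
            fmt.Println(v) // prints 1 4 9 16 25
        }
    }

No locks, no thread objects: the channels carry both the data and the synchronization.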

According to Google: Programming today involves too much bookkeeping, repetition, and clerical work. As Dick Gabriel says, “Old programs read like quiet conversations between a well-spoken research worker and a well-studied mechanical colleague, not as a debate with a compiler. Who'd have guessed sophistication bought such noise?” The sophistication is worthwhile—no one wants to go back to the old languages—but can it be more quietly achieved?




Notable (or notorious) features, you decide...

  • No exception handling. Google's rationale: "They are, by definition, exceptional yet experience with other languages that support them show they have profound effect on library and interface specification. It would be nice to find a design that allows them to be truly exceptional without encouraging common errors to turn into special control flow that requires every programmer to compensate." (For what Go does instead -- errors as ordinary return values -- see the sketch after this list.)

  • No static inheritance: Google's rationale: "Rather than requiring the programmer to declare ahead of time that two types are related, in Go a type automatically satisfies any interface that specifies a subset of its methods. Besides reducing the bookkeeping, this approach has real advantages. Types can satisfy many interfaces at once, without the complexities of traditional multiple inheritance. Interfaces can be very lightweight—having one or even zero methods in an interface can express useful concepts. Interfaces can be added after the fact if a new idea comes along or for testing—without annotating the original types. Because there are no explicit relationships between types and interfaces, there is no type hierarchy to manage or discuss."

  • No method overloading: Google's rationale: "Method dispatch is simplified if it doesn't need to do type matching as well. Experience with other languages told us that having a variety of methods with the same name but different signatures was occasionally useful but that it could also be confusing and fragile in practice. Matching only by name and requiring consistency in the types was a major simplifying decision in Go's type system."

  • No assertions. Google's rationale: "Our experience has been that programmers use them as a crutch to avoid thinking about proper error handling and reporting. Proper error handling means that servers continue operation after non-fatal errors instead of crashing."

  • No generic types. Google's rationale: "Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it."

  • "len" is a funciton, not a method: Google's rationale: "We debated this issue but decided implementing len and friends as functions was fine in practice and didn't complicate questions about the interface (in the Go type sense) of basic types."
The word "experience" is bandied about in Google's justifications... and the humble project origins seem to bely the true nature of this "experience."

Robert Griesemer, Rob Pike and Ken Thompson started sketching the goals for a new language on the white board on September 21, 2007. Within a few days the goals had settled into a plan to do something and a fair idea of what it would be. Design continued part-time in parallel with unrelated work. By January 2008, Ken had started work on a compiler with which to explore ideas; it generated C code as its output. By mid-year the language had become a full-time project and had settled enough to attempt a production compiler. In May 2008, Ian Taylor independently started on a GCC front end for Go using the draft specification. Russ Cox joined in late 2008 and helped move the language and libraries from prototype to reality.

I must admit to being gobsmacked by the blatant lack of experience evident in the design decisions made by this small team -- who seem intent on erasing many of the significant achievements of third and fourth generation languages such as Java, Ruby, C++ and C# -- content instead to produce a very "retro" language that amounts to little more than a compiled version of shell script.

Where has all the hard-earned wisdom gained over the last twenty years gone? To me, there is nothing revolutionary about Go apart from its utter disconnection from reality.

Read more...

Thursday, November 12, 2009

SPDY: Google wants to speed up the web by ditching HTTP

On the Chromium blog, Mike Belshe and Roberto Peon write about an early-stage research project called SPDY ("speedy"). Unhappy with the performance of the venerable hypertext transfer protocol (HTTP), researchers at Google think they can do better. 

The main problem with HTTP is that today, it's used in a way that it wasn't designed to be used. HTTP is very efficient at transferring an individual file. But it wasn't designed to transfer a large number of small files efficiently, and this is exactly what the protocol is called upon to do with today's websites. Pages with 60 or more images, CSS files, and external JavaScript are not unusual for high-profile Web destinations. Loading all those individual files mostly takes time because of all the overhead of separately requesting them and waiting for the TCP sessions HTTP runs over to probe the network capacity and ramp up their transmission speed. Browsers can either send requests to the same server over one session, in which case small files can get stuck behind big ones, or set up parallel HTTP/TCP sessions where each must ramp up from minimum speed individually. With all the extra features and cookies, an HTTP request is often almost a kilobyte in size, and takes precious dozens of milliseconds to transmit. 
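
To see the per-request overhead the article describes, here is a small Go sketch that serializes a typical browser GET request and prints its size on the wire. Every header value below is an invented example of what real browsers attach, not real traffic:

    package main

    import (
        "fmt"
        "net/http"
        "net/http/httputil"
    )

    func main() {
        // A hypothetical request for one small sub-resource of a page.
        req, err := http.NewRequest("GET", "http://example.com/styles/site.css", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("User-Agent", "Mozilla/5.0 (X11; Linux x86_64) ExampleBrowser/1.0")
        req.Header.Set("Accept", "text/css,*/*;q=0.1")
        req.Header.Set("Accept-Language", "en-US,en;q=0.5")
        req.Header.Set("Referer", "http://example.com/")
        req.Header.Set("Cookie", "session=0123456789abcdef; prefs=layout%3Dwide%3Btheme%3Ddark")

        // DumpRequestOut renders the request as the HTTP client would
        // actually send it, so len() gives its size in bytes.
        wire, err := httputil.DumpRequestOut(req, false)
        if err != nil {
            panic(err)
        }
        fmt.Printf("request is %d bytes:\n%s", len(wire), wire)
    }

Multiply a few hundred bytes of headers by the 60-plus sub-resources of a modern page and the cost adds up before any content moves; SPDY's header compression and request multiplexing target exactly that overhead.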



Read more...