Originally posted on Gigaom:

Chances are, the quad-core processor powering your desktop computer or high-end laptop is vastly underworked. But it’s not your fault: writing code that executes in parallel is difficult, so most consumer applications (save for some compute-intensive video games, for example) continue to run on just one core at a time. That makes it all the more impressive that a group of Stanford researchers recently ran a jet-engine-noise simulation across 1 million cores simultaneously.

As anyone even casually familiar with parallel processing knows, running an application across more nodes means jobs execute faster because the nodes share the computing workload: the more cores, the faster it runs. This is what makes Hadoop, for example, so great at processing large chunks of data. The MapReduce framework on which it’s based divvies up the work across nodes, and everything they find is stitched back together as the…
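To make that divide-and-stitch idea concrete, here’s a minimal word-count sketch of the map/reduce pattern in plain Python. It is an illustration of the pattern, not Hadoop itself: the standard library’s multiprocessing pool stands in for a cluster of nodes, and the function names and sample data are mine.

```python
from collections import defaultdict
from multiprocessing import Pool

def map_words(chunk):
    # Map step: for one chunk of input, emit a (word, 1) pair per word
    return [(word, 1) for word in chunk.split()]

def reduce_counts(pairs):
    # Reduce step: sum the counts for each distinct word
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return counts

if __name__ == "__main__":
    # Illustrative input, pre-split into chunks (Hadoop splits files similarly)
    chunks = ["the quick brown fox", "the lazy dog", "the fox jumps"]

    # The pool divvies the map work across worker processes, one chunk each
    with Pool(processes=4) as pool:
        mapped = pool.map(map_words, chunks)

    # "Stitch back together": flatten the per-worker results, then reduce
    flat = [pair for part in mapped for pair in part]
    print(dict(reduce_counts(flat)))
```

Run it and you get the combined counts (e.g. `{'the': 3, 'fox': 2, ...}`), exactly as if one process had scanned all the text, only with the scanning done in parallel.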

