<?xml version='1.0' encoding='UTF-8'?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:blogger='http://schemas.google.com/blogger/2008' xmlns:georss='http://www.georss.org/georss' xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr='http://purl.org/syndication/thread/1.0'><id>tag:blogger.com,1999:blog-4718119077220204108</id><updated>2024-09-06T03:08:32.962+08:00</updated><category term="programming"/><category term="gpu"/><category term="parallel"/><category term="CUDA"/><category term="Ruby"/><category term="parallelization"/><category term="matrix"/><category term="Cilk++"/><category term="MPI"/><category term="OpenMP"/><category term="TBB"/><category term="cpu"/><category term="hpc"/><category term="multi core"/><category term="OpenCL"/><category term="PThread"/><category term="cloud"/><category term="Brook+"/><category term="Charm++"/><category term="PVM"/><category term="algorithm"/><category term="apu"/><category term="auto"/><category term="compiler"/><category term="uC++"/><title type='text'>SpeedGo Computing</title><subtitle type='html'>Speed breaking computational problems with multi-core CPU and GPU</subtitle><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/posts/default'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default?redirect=false'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/'/><link rel='hub' href='http://pubsubhubbub.appspot.com/'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><generator version='7.00' 
uri='http://www.blogger.com'>Blogger</generator><openSearch:totalResults>24</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-2721007460498696070</id><published>2013-07-04T21:41:00.000+08:00</published><updated>2013-07-04T21:41:26.354+08:00</updated><title type='text'>[Resolved] Skype crashed on Fedora 19</title><summary type="text">
Running Skype on Fedora 19 core dumps immediately.


By luck, I found a workaround using the mesa-libGL library from Fedora 17.



Steps:


Download the mesa-libGL-8.0.4-1.fc17.i686.rpm from Fedora, or get it here.
Extract the rpm file:

$ rpm2cpio mesa-libGL-8.0.4-1.fc17.i686.rpm | cpio -idv

Run Skype with the library:

$ LD_LIBRARY_PATH=usr/lib /usr/bin/skype
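
The two steps above can be folded into a small launcher script. This is only a sketch: the script name and the LIBDIR default are my assumptions, so point LIBDIR at whatever directory you ran cpio in.

```shell
#!/bin/sh
# skype-fc17gl.sh -- hypothetical launcher (name and paths are assumptions).
# Starts Skype with LD_LIBRARY_PATH pointing at the Fedora 17 libGL that was
# extracted earlier with:
#   rpm2cpio mesa-libGL-8.0.4-1.fc17.i686.rpm | cpio -idv
LIBDIR="${LIBDIR:-$HOME/mesa-fc17}"   # directory where cpio was run

if [ ! -d "$LIBDIR/usr/lib" ]; then
    echo "error: $LIBDIR/usr/lib not found; extract the rpm there first" >&2
    exit 1
fi

# The extracted libGL is found before the system copy at dynamic-link time,
# but only for this process; the rest of the system keeps Fedora 19's Mesa.
exec env LD_LIBRARY_PATH="$LIBDIR/usr/lib" /usr/bin/skype "$@"
```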





That seems to work.</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/2721007460498696070/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2013/07/resolved-skype-crashed-on-fedora-19.html#comment-form' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2721007460498696070'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2721007460498696070'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2013/07/resolved-skype-crashed-on-fedora-19.html' title='[Resolved] Skype crashed on Fedora 19'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-45239165705699544</id><published>2012-06-18T14:50:00.001+08:00</published><updated>2012-06-18T17:31:41.383+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="CUDA"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><title type='text'>Personal Supercomputing System with Quad GPUs</title><summary type="text">The secrets of the new Kepler GPUs have been revealed. The Kepler-based graphics cards have been studied extensively for gaming performance. Most would suggest you don&#39;t need the upgrade. Furthermore, the supported PCI-E 3.0 is of little to no use.

Well, it&#39;s probably a different story for CUDA programs. Here&#39;s the setup I&#39;m going to use for extensive CUDA testing.


Asus P8Z99-V </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/45239165705699544/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2012/06/personal-supercomputing-system-with.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/45239165705699544'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/45239165705699544'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2012/06/personal-supercomputing-system-with.html' title='Personal Supercomputing System with Quad GPUs'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCjmwWbT3Vma0xNStKyk_YK_cV-EtvlMvQyeZEexpF_7EdRwVV5PnzA9t531MXoTpY98Pz-v15TNly4yIXuRz1AjCooL3rGM6b2bKyL2_xRNXotsAx6LFrHunM4dSR8nt3pueJ6RLFCAs/s72-c/577561_10150887317423517_82959899_n.jpg" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-295106998493068235</id><published>2011-06-26T08:29:00.000+08:00</published><updated>2011-06-26T08:29:55.592+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="CUDA"/><title type='text'>Being Nvidia CUDA Certified Programmer!</title><summary type="text">It takes some courage and effort to take the Nvidia CUDA Certification exam. You&#39;ll have to pay S$350 for that yet there is no guarantee of real use in business and career. The exam questions are perfect to squeeze out all your brain juice.

After much feedback and long awaiting, delayed plans, finally I received an email about being Nvidia CUDA certified programmer now. It&#39;s better arrived late </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/295106998493068235/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2011/06/being-nvidia-cuda-certified-programmer.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/295106998493068235'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/295106998493068235'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2011/06/being-nvidia-cuda-certified-programmer.html' title='Being Nvidia CUDA Certified Programmer!'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-6500095993166424993</id><published>2011-05-09T21:03:00.000+08:00</published><updated>2011-05-09T21:03:13.775+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="CUDA"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="Ruby"/><title type='text'>The Choice is Yours: CUDA in C++ or Ruby</title><summary type="text">
See the output here: Ruby Query Output


See the output here: C++ Query Output</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/6500095993166424993/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2011/05/choice-is-yours-cuda-in-c-or-ruby.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/6500095993166424993'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/6500095993166424993'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2011/05/choice-is-yours-cuda-in-c-or-ruby.html' title='The Choice is Yours: CUDA in C++ or Ruby'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-2908664133889538528</id><published>2011-05-03T17:44:00.000+08:00</published><updated>2011-05-03T17:44:28.418+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="CUDA"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><title type='text'>Web Seminar: Programming GPUs Beyond CUDA</title><summary type="text">GPU/CUDA programming is easy if we ignore the performance, or even the correctness of the program. It becomes tough when the performance is critical, one has to optimize very hard on the specific hardware. Fortunately, GPU hardware performance improves drastically every 2 years. Unfortunately, the performance is not portable across different generations of GPUs.

Prof Chen from Tshing Hua </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/2908664133889538528/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2011/05/web-seminar-programming-gpus-beyond.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2908664133889538528'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2908664133889538528'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2011/05/web-seminar-programming-gpus-beyond.html' title='Web Seminar: Programming GPUs Beyond CUDA'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-266462278933999672</id><published>2011-04-30T17:56:00.003+08:00</published><updated>2011-05-02T15:51:10.967+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="CUDA"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="hpc"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><category scheme="http://www.blogger.com/atom/ns#" term="Ruby"/><title type='text'>First Release of SGC Ruby CUDA - Beginning of a long way path</title><summary type="text">Today we decided to put up the first release of the SGC Ruby CUDA v0.1.0 as a mean to attract Rubyists to try out GPU programming as their new toy projects, and also to encourage HPC developers to evaluate if Ruby is good to use for their HPC applications.

When important software libraries are not available in Ruby, we certainly do not expect much Ruby usage in the area. As time is running short</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/266462278933999672/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2011/04/first-release-of-sgc-ruby-cuda.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/266462278933999672'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/266462278933999672'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2011/04/first-release-of-sgc-ruby-cuda.html' title='First Release of SGC Ruby CUDA - Beginning of a long way path'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-177248029062926548</id><published>2011-04-24T10:32:00.001+08:00</published><updated>2011-04-24T10:32:58.165+08:00</updated><title type='text'>GPU Computing with Ruby</title><summary type="text">Presented in RedDotRubyConf 2011 - PechaKucha Night Singapore.GPU Computing with RubyView more presentations from myxman.</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/177248029062926548/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2011/04/gpu-computing-with-ruby.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' 
href='http://www.blogger.com/feeds/4718119077220204108/posts/default/177248029062926548'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/177248029062926548'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2011/04/gpu-computing-with-ruby.html' title='GPU Computing with Ruby'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-2538637127840456512</id><published>2010-11-19T21:16:00.003+08:00</published><updated>2010-11-24T12:32:15.786+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cloud"/><category scheme="http://www.blogger.com/atom/ns#" term="CUDA"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="Ruby"/><title type='text'>Using SGC-Ruby-CUDA on the Newly Launched Amazon EC2 Cluster GPU</title><summary type="text">Wonder if GPU works for you? No budget for a system with decent GPU? Installations and configurations are too much trouble for you? You can now try out SGC-Ruby-CUDA on Amazon EC2 with the system image, located at US East Virginia zone, called SGCRubyCUDA.1 which is available as a community AMI.

Compile the rubycu shared library and run the tests.

[root@ip-10-17-130-174 sgc-ruby-cuda.git]# rake
(in</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/2538637127840456512/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/11/using-sgc-ruby-cuda-on-newly-launched.html#comment-form' title='6 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2538637127840456512'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2538637127840456512'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/11/using-sgc-ruby-cuda-on-newly-launched.html' title='Using SGC-Ruby-CUDA on the Newly Launched Amazon EC2 Cluster GPU'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>6</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-2873007604611679152</id><published>2010-11-16T10:35:00.000+08:00</published><updated>2010-11-16T10:35:13.076+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cloud"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="hpc"/><title type='text'>GPU Anywhere with Cloud Computing</title><summary type="text">Simulation taking months to run? Buying and maintaining new systems causing too much hassle? Perhaps Cluster GPU would be a good candidate to save time and trouble. Cloud solution is an excellent platform for proof of concept before committed to a large system in-house.

Pay $2.10 per hour (Amazon pricing as of 16 Nov 2010) for the following spec:

22 GB of memory
33.5 EC2 Compute Units (2 x Intel </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/2873007604611679152/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/11/gpu-anywhere-with-cloud-computing.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2873007604611679152'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2873007604611679152'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/11/gpu-anywhere-with-cloud-computing.html' title='GPU Anywhere with Cloud Computing'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-2793087737173429540</id><published>2010-09-26T08:10:00.001+08:00</published><updated>2010-10-17T12:40:19.811+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><title type='text'>Parallel programming knowledge is must-have skill for Wall Street</title><summary type="text">Parallel programming knowledge is must-have skill for Wall Street</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/2793087737173429540/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/09/parallel-programming-knowledge-is-must.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' 
href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2793087737173429540'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2793087737173429540'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/09/parallel-programming-knowledge-is-must.html' title='Parallel programming knowledge is must-have skill for Wall Street'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-6750439634358197261</id><published>2010-09-17T21:58:00.001+08:00</published><updated>2010-10-17T12:41:08.053+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="CUDA"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="OpenCL"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><title type='text'>Unigine crew: CUDA vs OpenCL vs SPU Part IV</title><summary type="text">Which language or library you choose to use for your software development has great and prolong impact to the software. Just come across a simple yet interesting benchmark. 
Perhaps, more details on why such numbers are obtained would be even more enlightening.Unigine crew: CUDA vs OpenCL vs SPU Part IV</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/6750439634358197261/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/09/unigine-crew-cuda-vs-opencl-vs-spu-part.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/6750439634358197261'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/6750439634358197261'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/09/unigine-crew-cuda-vs-opencl-vs-spu-part.html' title='Unigine crew: CUDA vs OpenCL vs SPU Part IV'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-3053667535687587978</id><published>2010-09-17T10:46:00.005+08:00</published><updated>2010-10-23T09:45:04.125+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="CUDA"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><category scheme="http://www.blogger.com/atom/ns#" term="Ruby"/><title type='text'>CUDA Programming with Ruby</title><summary type="text">Need GPU computing power in your Ruby program? Great! SpeedGo Computing is developing Ruby bindings for CUDA, called sgc-ruby-cuda. Take advantage of your Nvidia CUDA-enabled graphics cards with Ruby now.Currently, only part of the CUDA Driver API is included. 
More components such as the CUDA Runtime API will be included to make it as complete as possible.CUDA Programming with Rubyrequire &#39;</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/3053667535687587978/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/09/cuda-programming-with-ruby.html#comment-form' title='6 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/3053667535687587978'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/3053667535687587978'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/09/cuda-programming-with-ruby.html' title='CUDA Programming with Ruby'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>6</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-3830094285084907840</id><published>2010-09-07T19:23:00.003+08:00</published><updated>2010-10-17T12:34:42.179+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cpu"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="hpc"/><category scheme="http://www.blogger.com/atom/ns#" term="multi core"/><title type='text'>High Performance for All</title><summary type="text">Parallel programming is much more affordable now as multi-core CPU and programmable GPU become commodity products. 
Unlike a decade ago where a minimum dual socket system equipped with lower clocked CPU &amp; RAM would relatively cost a fortune to a typical desktop user, but dual-core system is basically everywhere nowadays. The use of dual-core systems is not really because it&#39;s affordable, but </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/3830094285084907840/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/09/high-performance-for-all.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/3830094285084907840'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/3830094285084907840'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/09/high-performance-for-all.html' title='High Performance for All'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-5226702773969879284</id><published>2010-08-25T12:38:00.020+08:00</published><updated>2010-10-17T12:35:06.454+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="apu"/><category scheme="http://www.blogger.com/atom/ns#" term="cpu"/><category scheme="http://www.blogger.com/atom/ns#" term="multi core"/><title type='text'>AMD’s Bulldozer vs Intel&amp;#39;s Hyper-Threading?</title><summary type="text">AMD&#39;s so called Strong Thread approach in the Bulldozer module is that really compelling?Extra cores are added when a processor can&#39;t operate at a faster clock speed, that&#39;s a good and 
easy way to expand a product line with effectively faster products, even though it may NOT be any faster depending on whether the applications are taking advantage of the multiple cores. But fully duplicating x86 </summary><link rel='enclosure' type='text/html' href='http://www.tomshardware.com/reviews/bulldozer-bobcat-hot-chips,2724.html' length='0'/><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/5226702773969879284/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/08/amds-bulldozer-vs-intels-hyper.html#comment-form' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/5226702773969879284'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/5226702773969879284'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/08/amds-bulldozer-vs-intels-hyper.html' title='AMD’s Bulldozer vs Intel&amp;#39;s Hyper-Threading?'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjm4bJ_5P2E5X_3eWQ6UALzKTNgwh4HgU1ujoBBEyvmNTLgqMz0WYbRw5e6Z-m5q2U5eByH0n9Vmg8dnGtzVER4ID9N6pKm1ulo3k1_vCocx4Oj_ZMDXTHHNtCCgBKDuGA3VQFG1t3Vv30/s72-c/bulldozer_module.jpg" height="72" width="72"/><thr:total>4</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-3630854547807992104</id><published>2010-08-17T12:41:00.005+08:00</published><updated>2010-10-17T12:30:34.805+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="matrix"/><category 
scheme="http://www.blogger.com/atom/ns#" term="MPI"/><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><category scheme="http://www.blogger.com/atom/ns#" term="parallelization"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><title type='text'>Parallelizing Matrix Multiplication using MPI</title><summary type="text">MPI is a popular mechanism in high performance computing. It works for both cluster and shared memory environment. Why don&#39;t we simply use MPI when it works for both environments? Why do we care about OpenMP? Cilk++? etc. Perhaps that depends on the complexity of the applications you are dealing with.Parallel Matrix Multiplication using MPI/* matrix-mpi.cpp */#include &amp;lt;mpi.h&amp;gt;const int size </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/3630854547807992104/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallelizing-matrix-multiplication_17.html#comment-form' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/3630854547807992104'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/3630854547807992104'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallelizing-matrix-multiplication_17.html' title='Parallelizing Matrix Multiplication using MPI'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' 
src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>4</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-3689730311957495531</id><published>2010-08-15T14:13:00.004+08:00</published><updated>2010-10-17T12:30:35.695+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="matrix"/><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><category scheme="http://www.blogger.com/atom/ns#" term="parallelization"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><category scheme="http://www.blogger.com/atom/ns#" term="TBB"/><title type='text'>Parallelizing Matrix Multiplication using TBB</title><summary type="text">Parallelizing matrix multiplication using TBB isn&#39;t too difficult. It&#39;s just a little more work than OpenMP or Cilk++.Parallel Matrix Multiplication using TBB/* matrix-tbb.cpp */#include &amp;lt;tbb/parallel_for.h&amp;gt;#include &amp;lt;tbb/blocked_range.h&amp;gt;using namespace tbb;const int size = 1000;float a[size][size];float b[size][size];float c[size][size];class Multiply{public:    void operator()(</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/3689730311957495531/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallelizing-matrix-multiplication_8641.html#comment-form' title='10 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/3689730311957495531'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/3689730311957495531'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallelizing-matrix-multiplication_8641.html' title='Parallelizing Matrix Multiplication using 
TBB'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>10</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-7031147384088228471</id><published>2010-08-15T11:01:00.011+08:00</published><updated>2010-10-17T12:30:36.605+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Cilk++"/><category scheme="http://www.blogger.com/atom/ns#" term="matrix"/><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><category scheme="http://www.blogger.com/atom/ns#" term="parallelization"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><title type='text'>Parallelizing Matrix Multiplication using Cilk++ in Two Lines</title><summary type="text">Following the parallelization of matrix multiplication using OpenMP in Parallelizing Matrix Multiplication using OpenMP in One Line, can we do the same using Cilk++?Parallel Matrix Multiplication using Cilk++/* matrix.cilk */const int size = 1000;float a[size][size];float b[size][size];float c[size][size];int cilk_main(){    // Initialize buffers.    
for (int i = 0; i &lt; size; ++i) {        for (</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/7031147384088228471/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallelizing-matrix-multiplication_15.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/7031147384088228471'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/7031147384088228471'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallelizing-matrix-multiplication_15.html' title='Parallelizing Matrix Multiplication using Cilk++ in Two Lines'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-4724360251246360907</id><published>2010-08-14T22:29:00.006+08:00</published><updated>2010-10-17T12:30:37.964+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="matrix"/><category scheme="http://www.blogger.com/atom/ns#" term="OpenMP"/><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><category scheme="http://www.blogger.com/atom/ns#" term="parallelization"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><title type='text'>Parallelizing Matrix Multiplication using OpenMP in One Line</title><summary type="text">Matrix multiplication is often used for academic study. It&#39;s well suited for parallelization due to its intensive O(N^3) computation and the independence of its operations. Parallel programming is hard. 
Would it surprise you if we could parallelize matrix multiplication with merely one line of OpenMP directive?Serial Matrix Multiplication/* matrix.cpp */const int size = 1000;float a[size][size];float b[size][size];</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/4724360251246360907/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallelizing-matrix-multiplication.html#comment-form' title='8 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/4724360251246360907'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/4724360251246360907'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallelizing-matrix-multiplication.html' title='Parallelizing Matrix Multiplication using OpenMP in One Line'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>8</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-4023392998806497194</id><published>2010-08-11T18:14:00.013+08:00</published><updated>2010-10-17T12:30:39.007+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Cilk++"/><category scheme="http://www.blogger.com/atom/ns#" term="MPI"/><category scheme="http://www.blogger.com/atom/ns#" term="OpenMP"/><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><category scheme="http://www.blogger.com/atom/ns#" term="PThread"/><category scheme="http://www.blogger.com/atom/ns#" term="Ruby"/><category scheme="http://www.blogger.com/atom/ns#" 
term="TBB"/><title type='text'>Parallel Programming - Hello World</title><summary type="text">Many computer science/engineering students learn to write a Hello World program in their first programming lecture. What&#39;s your first parallel program? What about a Hello World program in OpenMP, MPI, Cilk++, TBB, Ruby threads, or PThread?Hello World in C/* hello.c */#include &amp;lt;stdio.h&amp;gt;int main(){    printf(&quot;hello world\n&quot;);    return 0;}$ gcc hello.c -o hello$ ./hellohello worldHello World in </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/4023392998806497194/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallel-programming-hello-world.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/4023392998806497194'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/4023392998806497194'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/08/parallel-programming-hello-world.html' title='Parallel Programming - Hello World'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-1351378876605459310</id><published>2010-07-31T02:21:00.013+08:00</published><updated>2010-10-17T12:30:40.045+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Brook+"/><category scheme="http://www.blogger.com/atom/ns#" term="Charm++"/><category scheme="http://www.blogger.com/atom/ns#" term="Cilk++"/><category 
scheme="http://www.blogger.com/atom/ns#" term="CUDA"/><category scheme="http://www.blogger.com/atom/ns#" term="MPI"/><category scheme="http://www.blogger.com/atom/ns#" term="OpenCL"/><category scheme="http://www.blogger.com/atom/ns#" term="OpenMP"/><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><category scheme="http://www.blogger.com/atom/ns#" term="PThread"/><category scheme="http://www.blogger.com/atom/ns#" term="PVM"/><category scheme="http://www.blogger.com/atom/ns#" term="TBB"/><category scheme="http://www.blogger.com/atom/ns#" term="uC++"/><title type='text'>Parallel Programming - What Are The Options?</title><summary type="text">There are simply way too many parallel programming languages and libraries to keep track of. Many of them are no longer in active development, or are difficult to get working on decent operating systems. What are the practical options currently available for multi-core CPU or GPU?OpenMPHardware: Shared memory multi-core CPU system.Parallelization: Use directives e.g. 
#pragma omp parallel {} in C</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/1351378876605459310/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/07/parallel-programming-what-are-options.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/1351378876605459310'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/1351378876605459310'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/07/parallel-programming-what-are-options.html' title='Parallel Programming - What Are The Options?'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-2054027409662684340</id><published>2010-07-29T16:10:00.002+08:00</published><updated>2010-10-17T12:30:41.099+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cpu"/><category scheme="http://www.blogger.com/atom/ns#" term="gpu"/><category scheme="http://www.blogger.com/atom/ns#" term="multi core"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><title type='text'>Who Is Responsible For The Programming Of Multi Core CPU And GPU?</title><summary type="text">Multi core CPU and GPU are now commodity products. But where is the software that can take advantage of their parallel architectures? Who should be developing such software? The domain expert? The HPC (high performance computing) software engineer? 
or parallel programming tools such as auto parallelizing compilers?Domain experts typically do not wish to spend too much time on computing problems. </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/2054027409662684340/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/07/who-is-responsible-for-programming-of.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2054027409662684340'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/2054027409662684340'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/07/who-is-responsible-for-programming-of.html' title='Who Is Responsible For The Programming Of Multi Core CPU And GPU?'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-5851352855862165993</id><published>2010-07-28T17:33:00.007+08:00</published><updated>2010-10-17T12:36:32.190+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="auto"/><category scheme="http://www.blogger.com/atom/ns#" term="compiler"/><category scheme="http://www.blogger.com/atom/ns#" term="parallelization"/><title type='text'>Why Can&amp;#39;t Compilers Auto Parallelize Serial Code Effectively?</title><summary type="text">An auto parallelizing tool takes in a serial code base in C/C++/Fortran etc. and produces a parallel version of the code. 
For instance, specifying the -parallel option when compiling with the Intel compiler produces a parallelized binary with the OpenMP runtime. The MIPSpro compiler provides a similar auto parallelizing function with the -apo option, where you can view the code transformation, which consists of SGI OpenMP </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/5851352855862165993/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/07/why-cant-compilers-auto-parallelize.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/5851352855862165993'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/5851352855862165993'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/07/why-cant-compilers-auto-parallelize.html' title='Why Can&amp;#39;t Compilers Auto Parallelize Serial Code Effectively?'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-5504948159830210305</id><published>2010-07-22T12:14:00.007+08:00</published><updated>2010-10-17T12:30:43.375+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="algorithm"/><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><title type='text'>Where Are All The Practical Parallel Algorithms and Libraries?</title><summary type="text">Multi-core CPU and GPU are everywhere nowadays from laptops to desktops to high-end computing clusters. 
Is your particular application running any faster? Nope. But generally you need parallel algorithms for an application to make full use of the multiple cores.Perhaps you&#39;d expect that some searches on the web, research publications, and academic books would provide you with all the state-of-the-art </summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/5504948159830210305/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/07/where-are-all-practical-parallel.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/5504948159830210305'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/5504948159830210305'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/07/where-are-all-practical-parallel.html' title='Where Are All The Practical Parallel Algorithms and Libraries?'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-4718119077220204108.post-5482603593589817550</id><published>2010-07-21T03:14:00.020+08:00</published><updated>2010-10-17T12:30:44.819+08:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="parallel"/><category scheme="http://www.blogger.com/atom/ns#" term="programming"/><title type='text'>Why Is Parallel Programming Difficult?</title><summary type="text">Parallel programming is generally perceived as an activity only for people going after high-tech, bleeding-edge research. 
It is difficult and alien enough to drive most software engineers away, whether that is really the case or merely a misconception. The fact is, software engineers run away from parallel programming while modern general-purpose processors pack more and more cores</summary><link rel='replies' type='application/atom+xml' href='http://blog.speedgocomputing.com/feeds/5482603593589817550/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://blog.speedgocomputing.com/2010/07/why-is-parallel-programming-difficult.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/5482603593589817550'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/4718119077220204108/posts/default/5482603593589817550'/><link rel='alternate' type='text/html' href='http://blog.speedgocomputing.com/2010/07/why-is-parallel-programming-difficult.html' title='Why Is Parallel Programming Difficult?'/><author><name>xman</name><uri>http://www.blogger.com/profile/05695636905017529897</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry></feed>