CPUs vs. GPUs vs. FPGAs

Different Hardware for Different Purposes

When you're trying to tackle a problem involving some heavy data processing, the question often arises: "What's the best tool for the job?" Well, it's not always as simple as entering some numbers into an Excel spreadsheet and pressing return. There are tradeoffs between the different types of electronic hardware out there that you'll have to weigh.

The text and videos on this page are an attempt to approach some of these differences from a very basic level using some simple analogies that most people should be familiar with (if you are 21 and above...apologies to the underage visitors).


The following video is a simple illustration of how CPUs work, and of the limitations of doing operations serially. A CPU has to run the entire show pretty much by itself, just like the one and only bartender at a nice Hawaiian resort. It has to handle every externally or internally driven interrupt (answering the phone, taking out the trash, and so on), and it can hand off only certain tasks to helpers (e.g., DMA transfers).

One of the really nice features of CPUs, though, is that you can write instructions for them in so many high-level languages. One of the not-so-nice features is that getting lots of data from memory and through the processor takes a while, since reads and writes are relatively slow (unless the data is already in cache).
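To make the serial bottleneck concrete, here's a minimal Python sketch (not real hardware code, just the analogy in code form): a single worker, like our lone bartender, handles every order one at a time, so total time grows directly with the number of orders.

```python
def serve_orders_serially(orders):
    """One CPU core (one bartender) processes each order in turn."""
    results = []
    for order in orders:           # a single instruction stream:
        results.append(order * 2)  # one unit of work at a time
    return results

print(serve_orders_serially([1, 2, 3, 4]))  # [2, 4, 6, 8]
```

Ten times the orders means ten times the wait, no matter how fast the single worker is.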


The following video shows how GPUs speed things up by allowing parallelization and massive multi-threading.

Each core in the GPU can be entirely devoted to its primary task. It doesn't have to deal with interrupts or with juggling multiple processes. Getting data from memory and into the cores is very efficient, and so is putting the resulting bytes back into memory.

Writing code for a GPU is a bit trickier than it is for a CPU, since only a handful of languages (e.g., CUDA, OpenCL) are available.
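Real GPU code would be written in something like CUDA, but the programming model can be sketched conceptually in Python: instead of one worker looping over the data, the same operation is mapped across many workers at once, the way each GPU core handles its own slice of the data.

```python
from concurrent.futures import ThreadPoolExecutor

def gpu_style_map(fn, data, workers=8):
    """Conceptual sketch of the GPU model: apply fn to every
    element 'simultaneously' across a pool of workers, rather
    than one at a time on a single core."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, data))  # results come back in order

print(gpu_style_map(lambda x: x * 2, [1, 2, 3, 4]))  # [2, 4, 6, 8]
```

The output is identical to the serial version; the point is that with enough workers (a GPU has thousands of cores), the wall-clock time stops growing with the size of the data.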


The following video shows how FPGAs can speed things up even further by pipelining operations.
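The payoff of pipelining can be captured with a bit of back-of-the-envelope arithmetic. Assuming an idealized pipeline where each stage takes one clock cycle, the sketch below compares running items through the stages one at a time versus keeping every stage busy at once: after the pipeline fills, one finished item emerges every cycle.

```python
def cycles_unpipelined(n_items, n_stages):
    """Without pipelining: each item passes through all stages
    before the next item may start."""
    return n_items * n_stages

def cycles_pipelined(n_items, n_stages):
    """With pipelining: n_stages cycles to fill the pipeline,
    then one item completes per cycle."""
    return n_stages + (n_items - 1)

# 100 items through a 5-stage pipeline (idealized, 1 cycle/stage):
print(cycles_unpipelined(100, 5))  # 500 cycles
print(cycles_pipelined(100, 5))    # 104 cycles
```

For long streams of data the speedup approaches the number of pipeline stages, which is one reason FPGAs shine at streaming workloads.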