Computer Architecture Basics
The most fundamental components of a computer are the processor, memory, and I/O devices. Different arrangements and structures of these components allow for greater efficiency or specialization for specific tasks.
The main component of a computer is the processor. Also referred to as the CPU (central processing unit), the processor is responsible for carrying out the instructions of the computer program. One of the principal circuits of the processor is the ALU (arithmetic logic unit), which performs bitwise and mathematical operations on binary numbers. ALU operations include addition and subtraction as well as boolean operations such as AND and OR.
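As a rough illustration, the basic ALU operations above can be mimicked on 8-bit binary values (the values and the 8-bit width here are arbitrary choices for the sketch):

```python
# A minimal sketch of ALU-style operations on 8-bit binary values.
MASK = 0xFF  # keep results within 8 bits, like an 8-bit ALU register

a, b = 0b1100_1010, 0b0101_0110

add = (a + b) & MASK   # addition with wrap-around (overflow bit discarded)
sub = (a - b) & MASK   # subtraction via two's complement wrap-around
and_ = a & b           # bitwise AND
or_ = a | b            # bitwise OR

print(f"{add:08b} {sub:08b} {and_:08b} {or_:08b}")
```

Note that the mask models the fixed width of a real ALU: any carry out of the top bit is simply dropped.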
CISC vs. RISC
The two main types of processor architecture include CISC (complex instruction set computer) and RISC (reduced instruction set computer).
In the late 70s and early 80s, computer memory was slow and expensive. The primary goal of processor innovation was therefore to have a single instruction do more work and to execute that instruction faster. A CISC processor takes a single complex instruction and executes it all at once over the course of several clock cycles. It aims to keep code size small, reduce the amount of memory used, and keep the number of memory accesses to a minimum. With the CISC approach, the majority of the computing burden is placed on the hardware. As instructions grew more complex, the CISC approach started to provide diminishing returns: processors became large, expensive, and power-hungry.
The RISC processor brought about the concept of using a larger number of smaller, simpler steps as opposed to a single complex instruction. Each of these smaller instructions requires fewer clock cycles to perform, and overall a task can be completed in less time. As memory became cheaper and faster and compilers became more efficient, RISC grew more widely used. The fact that RISC placed more of the burden on the software compiler also allowed for lower power usage.
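The contrast can be sketched with a toy, entirely hypothetical machine: the same addition expressed as one CISC-style memory-to-memory instruction versus a RISC-style sequence of simple register steps (the instruction names and values are made up):

```python
# Toy illustration: one CISC-style memory-to-memory instruction vs.
# a RISC-style sequence of simple register instructions.
memory = {"X": 7, "Y": 5, "Z": 0}

# CISC style: a single instruction reads both operands from memory,
# adds them, and writes the result back -- complex, but compact code.
def add_mem(dst, src1, src2, mem):
    mem[dst] = mem[src1] + mem[src2]

add_mem("Z", "X", "Y", memory)

# RISC style: the same work as four simple steps, each touching
# memory at most once and completing in few clock cycles.
regs = {}
regs["r1"] = memory["X"]              # LOAD  r1, X
regs["r2"] = memory["Y"]              # LOAD  r2, Y
regs["r3"] = regs["r1"] + regs["r2"]  # ADD   r3, r1, r2
memory["Z"] = regs["r3"]              # STORE Z, r3

print(memory["Z"])
```

Both styles leave the same result in Z; the difference is whether the complexity lives in the hardware (one big instruction) or in the compiled instruction sequence.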
RISC processors also make heavy use of “pipelining” instructions. This increases performance by executing several instructions simultaneously, each at a different stage: while one instruction is being fetched from memory, another that was already fetched is being decoded, and a third, already decoded, is being executed.
Interrupts allow the processor to be stopped from what it is currently working on and redirected to a higher-priority task. When the processor receives an interrupt, it determines whether the interrupt has a higher priority than the current task; if so, it pauses the current task to work on the new one. Interrupts can come from either hardware or software. A hardware interrupt could be a signal from an I/O peripheral such as a mouse or keyboard. Software interrupts are typically lower priority; an example would be a request from an application for services from the operating system.
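Priority-based servicing can be sketched with a priority queue (the interrupt sources and priority numbers here are invented for illustration; a lower number means higher priority):

```python
# Sketch of priority-ordered interrupt servicing: pending interrupts
# are queued, and the processor always services the highest-priority
# (lowest-numbered) one first.
import heapq

pending = []  # min-heap of (priority, source)

def raise_interrupt(priority, source):
    heapq.heappush(pending, (priority, source))

raise_interrupt(5, "os-service-call")  # software interrupt, low priority
raise_interrupt(1, "keyboard")         # hardware interrupt, high priority
raise_interrupt(3, "disk")

# Service order: pop until the queue is empty.
order = [heapq.heappop(pending)[1] for _ in range(len(pending))]
print(order)
```

The hardware keyboard interrupt is serviced first and the low-priority software request last, regardless of arrival order.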
Instead of relying on a single processor, multiple processors can be combined to process data in parallel. Two architectures used for parallel computing are SIMD (single instruction, multiple data) and MIMD (multiple instruction, multiple data). SIMD applies a single instruction to many pieces of data at once, while MIMD lets multiple processors execute different instructions on different data simultaneously. SIMD processing elements are typically small, simple, and fast, whereas MIMD processors have the ability to perform extremely complex operations.
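The distinction can be sketched in a few lines (the data and the per-processor operations are made up; each list element stands in for one processing element's work):

```python
# Sketch: SIMD applies one operation to many data elements at once;
# MIMD runs different operations on different data independently.
data = [1, 2, 3, 4]

# SIMD style: the same "multiply by 2" instruction over every element.
simd_result = [x * 2 for x in data]

# MIMD style: each "processor" runs its own program on its own data.
programs = [lambda x: x + 10, lambda x: x * x, lambda x: -x, lambda x: x // 2]
mimd_result = [f(x) for f, x in zip(programs, data)]

print(simd_result)
print(mimd_result)
```

SIMD's uniformity is what lets its processing elements stay small and fast; MIMD pays for its flexibility with more complex, independent processors.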
Memory is where the data and software used by the processor are stored. The two main forms of memory are RAM (random access memory) and ROM (read-only memory).
RAM is used to store temporary data for the programs that are currently running. RAM is typically “volatile,” meaning that it loses its data when the computer loses power. The two main categories of RAM are “static RAM” (SRAM) and “dynamic RAM” (DRAM). SRAM is the faster of the two and uses less power; its downsides are that it holds less data and is more expensive. DRAM, on the other hand, is cheaper and holds more data but uses more power. Caches are also used to speed up performance: frequently used information is stored in a small, fast cache to allow faster retrieval.
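The cache idea can be sketched as a small lookup table in front of “slow” memory (the sizes, addresses, and least-recently-used eviction policy here are illustrative choices, not a claim about any particular CPU):

```python
# Sketch of a tiny cache in front of slow main memory: frequently used
# addresses are answered from the cache; on a miss the value is fetched
# and the least recently used entry is evicted.
from collections import OrderedDict

memory = {addr: addr * 100 for addr in range(16)}  # stand-in for slow RAM
cache = OrderedDict()
CACHE_SIZE = 4
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:
        hits += 1
        cache.move_to_end(addr)        # mark as recently used
    else:
        misses += 1
        cache[addr] = memory[addr]     # slow fetch from main memory
        if len(cache) > CACHE_SIZE:
            cache.popitem(last=False)  # evict least recently used entry
    return cache[addr]

for addr in [0, 1, 0, 2, 0, 3, 0, 4, 0]:  # address 0 is "hot"
    read(addr)
print(hits, misses)
```

The frequently used address misses only once; every later access to it is a fast cache hit.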
ROM is used for long-term storage. It is “nonvolatile,” meaning that the data is kept after the computer loses power. Standard ROM is not rewritable and is typically used for firmware (software used to initialize the computer’s hardware). Rewritable forms of ROM include EPROM (erasable programmable read-only memory), which must be removed from the computer to be erased and rewritten; EEPROM (electrically erasable programmable read-only memory), which can be erased and rewritten electrically while still installed; and the more popular flash memory, which can be rewritten while installed and without having to erase the entire contents of the device.
Harvard vs. von Neumann Architecture
Two main forms of memory allocation architecture are the Harvard and von Neumann models. The Harvard model uses what’s known as “ported I/O” where program instruction memory and data memory are separated and use separate buses (communication systems that transfer data between computer components). The Harvard model allows for simultaneous fetching of both instruction and data memory. Tradeoffs include being more expensive and using more power. The von Neumann model by comparison uses “memory mapped I/O” where the same physical memory address and bus system is used for both instruction and data memory. The combination allows for lower production costs and is used in most personal computers.
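The difference in address spaces can be sketched with two toy memory maps (the instruction names and values are invented; each dictionary stands in for a physical memory):

```python
# Toy sketch: in a Harvard layout, address 2 in instruction memory and
# address 2 in data memory are different physical locations reached over
# separate buses; in a von Neumann layout one shared memory (and bus)
# holds code and data side by side.

# Harvard: two separate address spaces, fetchable simultaneously.
instr_mem = {0: "LOAD", 1: "ADD", 2: "STORE"}
data_mem = {0: 10, 1: 20, 2: 0}
print(instr_mem[2], data_mem[2])  # same address, two different memories

# von Neumann: one shared address space for both code and data.
unified = {0: "LOAD", 1: "ADD", 2: "STORE", 3: 10, 4: 20, 5: 0}
print(unified[2], unified[3])  # code and data share one memory
```

The single shared memory is what makes von Neumann machines cheaper to build, at the cost of instruction and data accesses taking turns on the one bus.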
I/O devices, also known as peripherals, allow information to be moved into and out of the computer. These can include keyboards, mice, modems, disk drives, and video/audio/network interfaces. Different forms of I/O include “programmed I/O,” where transfers are controlled by instructions in the computer program; “interrupt-driven I/O,” where transfers are initiated by the external peripherals; and “direct memory access” (DMA), where an I/O device can interact with memory directly without going through the processor.
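Programmed I/O can be sketched as a polling loop (the device model below is entirely made up): the processor itself checks a status flag and copies each byte, which is exactly the work that DMA offloads from the CPU.

```python
# Sketch of programmed I/O: the CPU polls a device status flag and
# moves every byte itself, in contrast to DMA, where the device would
# write into memory directly.
class Device:
    def __init__(self, data):
        self.data = list(data)
    def ready(self):
        return bool(self.data)      # stand-in for a status register
    def read_byte(self):
        return self.data.pop(0)     # stand-in for a data register

dev = Device(b"hi")
buffer = []
while dev.ready():                  # CPU busy-waits on the status flag
    buffer.append(dev.read_byte())  # CPU copies each byte itself
print(bytes(buffer))
```

Every byte passes through the processor here; with DMA the same transfer would complete with the CPU free to do other work.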