RISC (Reduced Instruction Set Computer) and CISC (Complex Instruction Set Computer) are two different CPU architectures. Their names pretty much explain the difference. The argument over which is “better” has been around forever. When I learned VAX assembly language, I remember being impressed that there was a single instruction to evaluate a polynomial. Granted, one had to load a great many registers appropriately before invoking it, but still: One CPU instruction to evaluate a (for a CPU) complex algebraic expression! How cool is that! Meanwhile, Sun Microsystems was building RISC machines that fewer people were buying.
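For the curious, that instruction was the VAX POLY family, and the contrast is easy to make concrete. Here is a minimal sketch (hypothetical code, not lifted from any real compiler’s output) of polynomial evaluation by Horner’s method in C; a RISC chip runs this as a loop of simple multiplies, adds, and branches, while the VAX did the whole thing in one (admittedly register-hungry) instruction.

    /* Hypothetical sketch: evaluate c[0] + c[1]*x + ... + c[n]*x^n
       using Horner's method: the "many simple instructions" version. */
    double poly_eval(const double c[], int n, double x)
    {
        double result = c[n];
        for (int i = n - 1; i >= 0; i--)
            result = result * x + c[i];  /* one multiply and one add per coefficient */
        return result;
    }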
Well, it turns out that CISC doesn’t age well. The x86 architecture (Intel and AMD) has grown immensely since its inception. All those complex CPU instructions that very few applications actually use are burned into every chip produced. Meanwhile, ARM (Advanced RISC Machine) chips are creeping towards the desktop.
This comes to mind because the Raspberry Pi is an ARM system. Apple has moved their laptops and desktops to ARM. Almost every phone is ARM-based. “Raspbian” became “Raspberry Pi OS” when it stopped running on the x86 CPUs.
These days, CPU cycles are cheap (my CISC AMD chip runs 32 cores at over 4 GHz). The problem is now heat dissipation. Those complex instructions require a lot of transistors, and the faster the clock speed, the more heat is generated (my chip is liquid cooled). RISC systems must execute more instructions to accomplish the same task, but there is less circuitry sitting idle at any given time, and what is active draws less power and therefore generates less heat. Hence the creeping from phone to tablet to laptop to desktop. Only the last of those is not dependent upon battery power.
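The physics behind that is captured well enough by the standard first-order formula for CMOS switching power (leakage ignored):

    P ≈ α · C · V² · f

where α is the fraction of the circuitry actually toggling, C is the switched capacitance (more transistors, more capacitance), V is the supply voltage, and f is the clock frequency. Fewer active transistors and lower clocks mean less power, and less power means less heat to get rid of.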
The two architectures are radically different. While one compiler can generate machine code for either system (“gcc” comes to mind), there are differences that require human-level attention. At some point, the cost of supporting both architectures is going to tip in favor of RISC, simply due to power consumption and the existence of more battery-powered CPUs than wall-powered CPUs (not to mention the air conditioning issues in data centers).
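To make the “one compiler, two targets” point concrete: assuming a stock Debian/Ubuntu AArch64 cross toolchain and a hypothetical source file named poly.c, the same C code can be turned into assembly for both architectures:

    gcc -O2 -S poly.c -o poly_x86_64.s
    aarch64-linux-gnu-gcc -O2 -S poly.c -o poly_arm64.s

Diffing the two .s files shows just how different the instruction streams are; the human-level attention tends to be needed around the things the compiler cannot paper over, such as memory-ordering assumptions, intrinsics, and hand-written assembly.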
Why this is interesting: Software architecture is changing as “external” systems speed up. There is nothing conceptually new about systems such as Cassandra, Kafka, and Pulsar. What makes them “interesting” now is that network speeds have increased to such a degree that what was formerly a hardware solution (RAID) is now a software solution (clusters).
Does the RISC vs CISC debate have software repercussions? We seem to be moving in a RISC direction with “micro services”, “containers”, and “pods”. Again, there is nothing conceptually new about this; the hardware has just evolved to a point where it is practical.
BUT, what does your “micro service” do? Does it multiply or does it evaluate a polynomial?
Latency is still an issue. That is the advantage of CISC over RISC: It can accomplish (most) tasks in fewer instructions. The less a micro service does, the more of them are needed to accomplish a complex task – and it takes time to move data from service to service. Orders of magnitude more time than it takes to move data from instruction to instruction inside a CPU.
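Some ballpark arithmetic (rough, order-of-magnitude figures, not measurements of any particular system): a single instruction on a 4 GHz core takes on the order of a quarter of a nanosecond, while a round trip between two services in the same data center is commonly quoted at around half a millisecond, i.e. roughly 500,000 ns. That is a gap of about six orders of magnitude before any serialization, queuing, or retries are counted.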
CISC has reigned supreme in the computing world because it delivered better throughput, even if its clock speed was a bit lower than a RISC processor’s. This is starting to change as CISC runs into scaling and heat/power limits.
In software, complex services seem likely to dominate for the mid-term future for much the same reasons. Until network speed and reliability start to approach bus speed and reliability – a point that isn’t even a dream right now – complex services will continue to be the preferred solution for low-latency needs.