Moore’s Law & Landauer’s Principle
The progress of semiconductor technology is stalling. Moore’s Law and Landauer’s Principle reveal the underlying problems.
Moore’s Law states that the number of transistors in a microprocessor doubles approximately every two years. The law is now obsolete: the originally projected rates are no longer achieved. Moore’s Law owed its decades-long success to the fact that as transistors became smaller, they also became cheaper, faster and more energy efficient. The profits from this scenario enabled reinvestment in semiconductor manufacturing technology, which in turn enabled even smaller, more densely packed transistors. This cycle continued, decade after decade.
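The doubling described above can be sketched in a few lines. This is a minimal illustration of the idealized law, not historical data; the starting count and time span are made-up assumptions.

```python
def projected_transistors(start_count: int, years: int, doubling_period: int = 2) -> int:
    """Project a transistor count under an idealized doubling law.

    Assumes one doubling per `doubling_period` years, as Moore's Law
    originally posited. Values are illustrative, not real chip data.
    """
    return start_count * 2 ** (years // doubling_period)

# Illustrative example: a chip starting at 1 million transistors,
# projected over 20 years (10 doublings).
print(projected_transistors(1_000_000, 20))  # 1024000000
```

Ten doublings multiply the count by 1024, which is why even a modest per-period gain compounds into the exponential growth the industry relied on.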
Computers have seen steady improvements for about five decades, driven by the exponential increase in the number of transistors that could fit on an integrated circuit (IC) per unit area. Today, experts from industry, academia and government laboratories believe that semiconductor miniaturization will continue for perhaps another five or ten years. But the miniaturization of transistors no longer brings the improvements it once did. The physical properties of small transistors caused clock speeds to stagnate more than a decade ago, prompting the industry to build chips with multiple cores.
Enormous efforts are currently being made in the semiconductor industry to keep miniaturization going. But no investment can change the laws of physics. Although smaller transistors can still be made at present, we are gradually leaving the realm of classical physics and entering that of quantum mechanics, where materials behave differently. At such scales an electron is no longer reliably confined to a sufficiently thin conductor: it can tunnel from one conductor to another, causing undesired values in neighboring transistors.
Physical limitations aside, in the not-too-distant future, a new computer that has only smaller transistors will not be cheaper, faster, or more energy efficient than its predecessors. At that point, the progress of conventional semiconductor technology will be halted.
Landauer’s Principle states that today’s irreversibly (i.e., irrecoverably) operating computers dissipate the energy supplied to them predominantly as heat into the environment. This happens in particular when a bit of information is erased, and even if power dissipation were drastically reduced, it could not fall below a certain limit. The hypothesis, formulated by Rolf Landauer in 1961, has since been confirmed experimentally and links information theory with thermodynamics and statistical physics. Thus, throughout the history of computing, our computers have operated in a manner that intentionally destroys some information (it is destructively overwritten) when performing computations.
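The limit mentioned above has a concrete value: erasing one bit dissipates at least k_B · T · ln 2 of energy as heat, where k_B is the Boltzmann constant and T the temperature. A small sketch of that formula, with room temperature (300 K) chosen as an illustrative assumption:

```python
import math

# Boltzmann constant in joules per kelvin (exact SI value).
K_B = 1.380649e-23

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy in joules to erase one bit of information
    at the given temperature, per Landauer's principle: k_B * T * ln 2."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) the limit is on the order of 3e-21 J,
# far below what today's irreversible logic actually dissipates per bit.
print(f"{landauer_limit(300):.3e} J per erased bit")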
The current lower limit of power dissipation can be undercut only by fundamental technical innovations such as quantum computers or reversible (i.e., recoverable) computers. The latter follow directly from Landauer’s principle. Reversible computing means that a calculation can be performed without loss of information, so that the initial state can be restored from the final result. To avoid erasing information, such machines run backwards to their initial state after a computation completes. To implement this, every element, from the logic gates to the programming language, must be redesigned to be reversible.
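The gate-level idea can be illustrated with the classical Toffoli (CCNOT) gate, a standard reversible, universal logic gate: it flips its target bit only when both control bits are 1, and applying it a second time undoes the first application. This is a minimal sketch, not a description of any specific reversible hardware:

```python
def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    """Reversible Toffoli gate: (a, b, c) -> (a, b, c XOR (a AND b)).

    No information is discarded: the output uniquely determines the
    input, so the gate is its own inverse.
    """
    return a, b, c ^ (a & b)

state = (1, 1, 0)
forward = toffoli(*state)    # target flips because both controls are 1
backward = toffoli(*forward) # applying the gate again restores the input
print(forward, backward)     # (1, 1, 1) (1, 1, 0)
```

Because every output maps back to exactly one input, running a circuit of such gates in reverse recovers the initial state, which is precisely the "no information erased" property Landauer’s principle rewards.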
Reversible computing is the only way, within the laws of physics, that we might continue to improve the cost and energy efficiency of universal computing well into the future. In the past, little attention was paid to reversible computing: it is very difficult to implement, and there was little reason to pursue this major challenge as long as conventional technology kept improving (Moore’s Law). Now that the end of that steady progress is in sight, it is time to put reversible computing into practice.
Ternary-based concepts provide solutions to the existing problems
JINN Labs (development unfortunately discontinued) was working on a ternary microcontroller called “JINN”. It could have triggered the next evolutionary step in the semiconductor industry. Space-saving and at the same time energy-efficient ternary microcontrollers are tailor-made for the future IoT, because miniature devices have to provide relatively high computing power for Proof-of-Work. Any energy saving, no matter how small, is beneficial, especially for battery-powered devices.
The IF, represented by David Sønstebø, argues that ternary-based software running on ternary hardware is more efficient:
“To understand the choice of ternary, you have to understand where IOTA comes from. IOTA comes from the hardware industry, where Moore’s Law has long been exhausted, such that new hardware needs to be developed for novel use cases. As I always say, software drives hardware design, not the other way around. Currently there is no hardware (except mining hardware and wallets) in existing products that support any kind of DLT. That is why IOTA are pioneers and we want to make the future standard as optimal as possible for long term benefits.” – David Sønstebø
With the mass production of binary computer components, ternary computers became a footnote in computing history, even though balanced ternary is mathematically a more efficient number representation than binary. According to some experts, this will change again in the future. The IF is therefore already betting on ternary-based technology to prepare for that future.
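In balanced ternary, the representation mentioned above, each digit is -1, 0, or +1 rather than 0 or 1. A minimal sketch of converting an integer to balanced ternary; the digit symbols (`-`, `0`, `+`) and function name are illustrative choices:

```python
def to_balanced_ternary(n: int) -> str:
    """Convert an integer to a balanced-ternary digit string,
    where '+' means +1, '0' means 0, and '-' means -1."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 0:
            digits.append("0")
        elif r == 1:
            digits.append("+")
            n -= 1
        else:  # remainder 2 is treated as digit -1 with a carry
            digits.append("-")
            n += 1
        n //= 3
    return "".join(reversed(digits))

print(to_balanced_ternary(5))   # "+--"  (9 - 3 - 1 = 5)
print(to_balanced_ternary(-5))  # "-++"  (negation just flips the signs)
```

One often-cited elegance of this system is that negating a number only requires flipping each digit's sign, with no separate sign bit, which is part of why balanced ternary is considered a particularly clean number representation.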
Last Updated on 16 February 2021