On Feb 3 2020, it was announced that David Sønstebø is withdrawing from this project and ceding all his shares in JINN Labs to Sergey Ivancheglo "Come-from-Beyond". I am leaving the article in the guide because IOTA's ternary approach remains; however, it was decided to focus on binary first, to support current hardware, and on ternary later for future hardware.
CfB is no longer a member of the IF, but according to his own statements he would like to continue the project alone. At this point, however, no one can say if and how it will continue. According to David's statement on Feb 4 2020, the JINN project is dead.
Info about the history
As already described in the "Founding History", the hardware startup JINN-Labs was founded back in 2014 by David Sønstebø and Sergey Ivancheglo "Come-from-Beyond" (both are also founding members of the IF). JINN-Labs is an independent company. The nature of its business relationship with IOTA has not yet been disclosed, but according to statements by David Sønstebø, JINN Labs is a major funder of the IOTA Foundation.
Qubic is the software protocol (first ideas in 2012) that will be implemented with the help of JINN’s development. To realize this goal, among others, the IOTA Foundation was established in 2015.
Everything about JINN Labs is top secret, and current details about the JINN processor are unfortunately not known or only come to light very sparingly (see updates below). Almost all available information dates from the early days, before any non-disclosure agreement was signed.
What is JINN?
JINN is the working title for a new type of microcontroller (general-purpose processor). For highly mobile IoT devices (for example sensors), the particular challenge is to get by with as little power as possible. JINN is a space-saving yet energy-efficient general-purpose processor chip that works with the ternary system rather than the binary system, with the purpose of performing thousands of transactions per second.
How powerful will a JINN processor be?
JINN should not be compared with today's binary processors. These are based on vertical scaling, which means they become more powerful only by growing (more transistors). JINN, on the other hand, uses asynchronous circuits and ternary logic gates and allows horizontal scaling: the increase in computing power is achieved by a network of JINN processors. The performance of a single JINN processor is therefore less important, because as the number of processors in the JINN network increases, so does the processing power available to each one. As soon as a user releases their own JINN, its processing power can be used by the whole network of JINNs (distributed computing).
What is special about JINN?
Every computer that wants to send a transaction into the IOTA Tangle currently has to do some work as spam protection and solve a small computational problem. This is called “Proof of Work”. A current PC takes about 20-30 seconds to do this calculation. With distributed computing using JINN and outsourced computations using Qubic, PoW can be done on low-resource micro devices. The JINN microcontroller can additionally be built into everyday devices. So any cell phone, refrigerator, sensor, drill, or any other device can send transactions through the Tangle, as it can now perform the required proof of work without its own large computing power.
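The "solve a small computational puzzle" idea behind proof of work can be sketched with a toy example. Note: IOTA's real PoW searches for trailing zero trits of a Curl hash; the SHA-256 variant below is only an illustrative stand-in, not the actual IOTA algorithm.

```python
# Toy proof-of-work sketch: find a nonce so that the hash of the message plus
# the nonce falls below a target, i.e. starts with enough zero bits.
import hashlib

def proof_of_work(message: bytes, difficulty_bits: int = 12) -> int:
    """Brute-force a nonce; on average ~2^difficulty_bits attempts are needed."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = proof_of_work(b"my transaction")
print(nonce)   # the first nonce that satisfies the difficulty target
```

Raising `difficulty_bits` makes the search exponentially harder, which is exactly why weak IoT devices benefit from outsourcing this work.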
The JINN processor is optimized in hardware for the Qupla / Abra (Qubic) ternary software. Future IOTA-based IoT devices could communicate with each other via JINN and Qubic, exchanging data or IOTA tokens for real-time value transfer. The big goal is that the JINN ternary microcontroller (or concept) will one day be integrated into the majority of Internet of Things devices.
This may give rise to an entire new sector of the economy that is highly automated and optimized. Ultimately, this is the vision of IOTA – the machine economy.
The progress of semiconductor technology is stalling. Moore's Law and Landauer's Principle reveal the existing problems.
Moore's Law states that the number of transistors contained in a microprocessor doubles approximately every two years. The law is now obsolete; the originally assumed values are no longer achieved. Moore's Law owed its decades-long success to the fact that transistors kept becoming smaller, cheaper, faster and more energy efficient. The profit from this scenario enabled reinvestment in semiconductor manufacturing technology, which in turn enabled even smaller, more densely packed transistors. This cycle continued, decade after decade.
Computers have experienced steady improvements for about five decades and that only because of the exponential increase in the number of transistors that could be installed on an integrated circuit (IC) per unit area. Today, experts from industry, academia and government laboratories believe that semiconductor miniaturization will continue for perhaps another five or ten years. The miniaturization of transistors no longer brings the improvements it once did. The physical properties of small transistors led to a stagnation in clock speeds more than a decade ago, prompting the industry to build chips with multiple cores.
Enormous efforts are currently being made in the semiconductor industry to keep miniaturization going. But no investment can change the laws of physics. Although smaller transistors can still be made at present, we are gradually leaving the realm of classical physics and entering the realm of quantum mechanics, where materials behave differently. In other words, there comes a time when an electron cannot pass such a thin wire. Its electromagnetic wave “jumps” from one conductor to another, causing undesirable values in the surrounding transistors.
Physical limitations aside, in the not-too-distant future, a new computer that has only smaller transistors will not be cheaper, faster, or more energy efficient than its predecessors. At that point, the progress of conventional semiconductor technology will be halted.
Landauer's Principle states that today's irreversibly (i.e. irrecoverably) operating computers lose most of the energy fed into them as heat to the environment. This happens in particular when a bit of information is erased, and even a drastic reduction of power dissipation could not push it below a certain limit. The hypothesis, described by Rolf Landauer in 1961, has since been confirmed by experiments and links information theory with thermodynamics and statistical physics. Throughout the history of computing, our computers have thus operated in a manner that intentionally loses some information (it is destructively overwritten) when performing computations.
The current lower limit of power dissipation can be undercut only with fundamental technical innovations such as quantum computers or reversible (i.e. recoverable) computers. The latter are directly derived from Landauer's principle. Reversible computing means that a calculation can be performed without loss of information, so that the initial state can be restored from the final result. To avoid deleting information, such computers run backwards into the initial state after a computation ends. To implement this, every element, from the logic gate to the programming language, must be redesigned to be reversible.
Reversible computing is the only possible way within the laws of physics that we might be able to improve the cost and energy efficiency of universal computing well into the future. In the past, not much attention has been paid to reversible computing. This is because it is very difficult to implement and there was little reason to pursue this major challenge as long as conventional technology could keep improving (Moore’s Law). Now that the end of constant progress is in sight, it is time to put reversible computing into practice.
Ternary based concepts provide solutions to the existing problems
JINN Labs (unfortunately stopped, see here) is already working on a ternary microcontroller called “JINN”. This could trigger the next evolutionary step in the semiconductor industry. Space-saving and at the same time energy-efficient ternary microcontrollers are tailor-made for the future IoT, because all miniature devices have to provide relatively high computing power for Proof-of-Work. Any energy savings, no matter how small, is beneficial, especially for battery-powered devices.
The IF, in the person of David Sønstebø, argues that ternary-based software running on ternary hardware is more efficient:
“To understand the choice of ternary, you have to understand where IOTA comes from. IOTA comes from the hardware industry, where Moore’s Law has long been exhausted, such that new hardware needs to be developed for novel use cases. As I always say, software drives hardware design, not the other way around. Currently there is no hardware (except mining hardware and wallets) in existing products that support any kind of DLT. That is why IOTA are pioneers and we want to make the future standard as optimal as possible for long term benefits.”– David Sønstebø
With the mass-produced binary components for computers, ternary computers became a footnote in computing history, even though the balanced ternary system is a more efficient number representation in mathematics than binary. This will change again in the future, according to some experts. So the IF is already betting on ternary-based technology to prepare for the future.
Ternary system – What is it and is there an advantage?
The ternary logic used in IOTA runs through the entire project: from the ternary microcontroller JINN and the hash function Troika, to the seed consisting only of the capital letters A-Z and the number 9, to the fact that exactly 2,779,530,283,277,761 IOTA tokens exist.
Before we clarify why IOTA relies on a ternary logic, we first need to take a closer look at the two systems that are relevant to this article.
The binary system: A bit (binary digit) can assume exactly two states: 0 and 1.
Eight bits make a byte (2^8 = 256) and can thus represent 256 combinations.
The ternary system: A trit (ternary digit) can assume exactly three states (for example -1, 0 and +1 in the balanced representation).
Three trits make a tryte (3^3 = 27) and can thus represent 27 combinations.
The negative of a balanced ternary number is obtained by swapping every -1 with 1 and vice versa. Negative numbers can thus be represented just as easily as positive ones; unlike in the decimal system, no sign needs to be noted. This makes some calculations in the ternary system more efficient than in the binary system.
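That negation really is a pure digit swap can be shown in a few lines. This is a minimal sketch (not IOTA library code), assuming the standard balanced ternary digits -1, 0 and +1:

```python
# Balanced ternary: each digit is -1, 0 or +1; digits stored least significant first.

def to_balanced_ternary(n: int) -> list:
    """Convert an integer to balanced ternary digits."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        n //= 3
        if r == 2:          # a digit 2 becomes -1 with a carry into the next position
            r = -1
            n += 1
        digits.append(r)
    return digits

def from_balanced_ternary(digits) -> int:
    return sum(d * 3 ** i for i, d in enumerate(digits))

def negate(digits):
    """Negation is just swapping +1 and -1 in every position."""
    return [-d for d in digits]

six = to_balanced_ternary(6)
print(six)                                   # [0, -1, 1], i.e. 9 - 3 = 6
print(from_balanced_ternary(negate(six)))    # -6
```

No sign handling is needed anywhere: the same three digits encode positive and negative values alike.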
Since raw trytes are hard for humans to read, it is important to make them more readable. This is done by converting them into a kind of alphabet. For this purpose, the IOTA development team created the tryte alphabet, which consists of the number 9 and the capital letters A-Z. That makes a total of 27 characters, exactly the number of combinations of a tryte, so each tryte value can be represented by a single character.
IOTA uses exactly this alphabet for the seed, addresses, hashes, etc.
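The alphabet itself can be sketched in a few lines. Note: the exact value-to-character ordering used by IOTA's libraries is an implementation detail; the mapping below is only illustrative.

```python
# The tryte alphabet: the character '9' plus A-Z gives exactly 27 symbols,
# one per possible tryte value.
import string

TRYTE_ALPHABET = "9" + string.ascii_uppercase   # 27 characters

def tryte_char(value: int) -> str:
    """Map a tryte value in the range 0..26 to one alphabet character."""
    return TRYTE_ALPHABET[value]

print(len(TRYTE_ALPHABET))   # 27
print(tryte_char(0))         # '9'
print(tryte_char(1))         # 'A'
print(tryte_char(26))        # 'Z'
```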
The seed itself consists of 81 digits, so 81 trytes. Example:
Each tryte has 27 combinations. Thus, IOTA's seed has 27^81 ≈ 8.7189642485960958 * 10^115 combinations. This is significantly more than the number of atoms in the entire visible universe. The probability of guessing a seed is practically 0.
For comparison: The possible key pairs in Bitcoin are actually 2^256 (elliptic curve secp256k1). Each public key is additionally hashed with RIPEMD-160, which compresses the effective address space to 2^160. This gives Bitcoin about 1.4615016373309029 * 10^48 = 1,461,501,637,330,902,918,203,684,832,716,283,019,655,932,542,976 possible addresses.
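These magnitudes are easy to verify. A quick sketch (Python integers are arbitrary precision, so the exact values fit without overflow):

```python
import math

seed_space = 27 ** 81   # 81 trytes, 27 possible characters each
btc_space = 2 ** 160    # Bitcoin address space after RIPEMD-160

print(f"{float(seed_space):.4e}")   # 8.7190e+115
print(f"{float(btc_space):.4e}")    # 1.4615e+48

# The seed space is roughly 10^68 times larger than the Bitcoin address space:
print(round(math.log10(seed_space) - math.log10(btc_space)))   # 68
```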
Why exactly are there 2,779,530,283,277,761 IOTA tokens?
This also has something to do with the trytes. Trytes "balance" themselves around the value 0, ranging from -13 to +13; hence the term balanced ternary system. As an example, one tryte has 27 combinations and a maximum value of 13. This can also be calculated mathematically:
((3^3) – 1) / 2 = 13, 27 combinations with three trits (1 tryte).
33 trits give the maximum supply: ((3^33) – 1) / 2 = 2,779,530,283,277,761 IOTA
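The formula can be recomputed in one line; a quick check of both numbers above:

```python
# Maximum positive value representable by n balanced-ternary trits: (3^n - 1) / 2
def max_value(trits: int) -> int:
    return (3 ** trits - 1) // 2

print(max_value(3))    # 13: one tryte ranges from -13 to +13
print(max_value(33))   # 2779530283277761: the IOTA max supply
print(max_value(81))   # the upper bound if the full 81-trit value field were used
```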
PS: It was important to create a large number of tokens because they will be used for micropayments between machines, where high prices per token would be a hindrance. If very high demand in the distant future makes it necessary to increase the maximum number of tokens, this is possible.
Why exactly do 33 trits result in the max. supply?
The value field in a transaction is 81 trits long, of which 33 trits are currently used. Exactly 33 trits are used because it was determined (presumably by CfB) that the current max. supply is the maximum positive number that can be represented by 33 trits in the balanced ternary, which is exactly ((3^33) – 1) / 2 or 2,779,530,283,277,761.
Since 81 trits have been reserved for the value field, the max supply can be increased to ((3^81) – 1) / 2 if necessary. In this case, each current owner would still own an equal share of the total value, but these amounts can now be divided into smaller units.
Why do current computers use the binary system?
In the early days of computer technology, mechanical calculating machines were replaced by electrical calculating machines. These first computers worked with relays that could only assume the states “on” or “off”. Because relays with a high percentage of mechanical components are very susceptible to failure, they were replaced by transistors, which also only assume the states “on” or “off” (voltage on, voltage off). Since transistors do not have any mechanical components, they can be made very small in size.
Today’s computers consist of many interconnections and components that are used to transmit and store data and to communicate with other components. Most of the storage, transmission and communication today is done with digital electronics. These still use the binary system with the two clearly separated states on=1 and off=0.
Binary systems are still used because of the high speed at which their circuit states can be changed. Transistors are very fast and effective switches. Through the interplay of the number of transistors and their states, different characters can be represented quickly and arithmetic operations performed. The disadvantage: to increase the speed of a chip based on electrical transistors, more transistors have to be built into it. To do this, either the chip must be made larger or the individual transistors smaller. The current tendency is towards smaller transistors, but this brings higher heat generation and greater susceptibility to faults.
Advantages of a ternary system
A system with three states would have the advantage that basic elements other than transistors could be used to produce more efficient ICs (integrated circuits). Ternary offers a very dense information representation among the integer bases: larger numbers can be accommodated in less memory. For example, the decimal number 6 is 110 in the binary system (3 digits) but only 20 in the ternary system (one digit less). A single trit carries log2(3) ≈ 1.58 times as much information as a bit. One thus saves memory, and calculations could run faster at a lower clock rate of the chip.
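The digit-count advantage is easy to check directly; a small illustrative sketch:

```python
import math

def digits_needed(n: int, base: int) -> int:
    """Count how many base-`base` digits an unsigned integer needs."""
    count = 0
    while n > 0:
        n //= base
        count += 1
    return max(count, 1)

for n in (6, 255, 1_000_000):
    print(n, "binary:", digits_needed(n, 2), "ternary:", digits_needed(n, 3))
# 6 needs 3 bits but only 2 trits; 1,000,000 needs 20 bits but only 13 trits.

# Information content of one trit, measured in bits:
print(math.log2(3))   # ≈ 1.585
```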
The effort required for a ternary system to build a complex logic circuit within the CPU can be reduced to about 36% compared to an equivalent binary system. This leads to a corresponding energy saving in addition to a space-saving smaller design of the microcontroller. Ternary boards have not been used in the computer industry to date because the hardware implementation is much more complex, and there is also a lack of widespread mass market support. However, once this hardware implementation is achieved, these microcontrollers are significantly more energy efficient and much more powerful than their binary counterparts.
With the advent of mass-produced binary components for computers, the ternary calculators that existed at the time unfortunately became a minor footnote in computer history, even though the balanced ternary system is mathematically a more efficient number representation than the binary one. In the seventies, the development of ternary computers was largely stopped because binary systems could be developed faster and more cheaply. As so often in the past, it was not the technically superior concept that prevailed, but the one that was easiest and cheapest to implement for the mass market at the time. The question arises where ternary computer technology would be today if it, too, had been continuously developed.
Considering the elegance and efficiency of ternary logic, the computing pioneer Donald E. Knuth predicts a resurgence of ternary computers in the future. One possible evolution would combine an optical computer with the ternary system: a ternary computer using optical fibers could use dark as 0 and two orthogonal polarizations of light as 1 and -1.
For any computing device (cell phone, refrigerator, etc.), there will always be tasks that are too CPU intensive for the device to compute, or tasks that require locally unavailable data. This is especially true for the devices that comprise the Internet of Things (IoT). These devices are typically constrained by a lack of memory, computing power, energy availability, or similar.
What if such low-power devices could simply offload intensive computations to an external, more powerful machine? It would provide numerous cost and functionality benefits to the entire project.
The ultimate goal in secure outsourced computation is to develop protocols that minimize the computational burden on clients and ensure the confidentiality and integrity of their data.
Qubic enables just such outsourced computation and enables secure, permissionless participation for consumers and producers. The protocol allows anyone to create or run a computational task on one or more external devices, which in return send the results back to the requester. Similarly, any user can find tasks in the Tangle and participate in their processing.
As with oracle machines, this processing takes place in a decentralized and secure manner. The Qubic protocol ensures that the results can be trusted with a high degree of security.
Thinking now about combining Qubic with Cloud and / or Fog data processing technologies, economic clustering, and Swarm clients, opens up entirely new application areas. This technology merger could take parallel computing to a new level. Large problems (computations) can be solved much faster by dividing the computations among many devices (processors).
Individual qubics are essentially pre-packaged quorum-based (majority principle) computational tasks.
Qubic uses the IOTA Tangle to package qubics from their owners and distribute them to the oracles that will process them.
Qubics are published as messages in normal IOTA transactions and contain special instructions, called metadata, on how and when to process them. Oracles can read qubic metadata to decide whether or not to perform the required processing for the associated rewards. Qubic-enabled IOTA nodes (Q-nodes for short) can handle much of this decision making automatically based on parameters set by their operators.
Qubic tasks are specified using Qupla / Abra, a ternary-based functional intermediate programming language. The Qubic programming model allows data to flow from qubic to qubic throughout the system. Technically, qubics are event-driven: they listen to the Tangle directly for input data and are executed whenever that input data changes. Because qubics report their results back to the Tangle, the execution of a single qubic can cause many other qubics to run in a kind of "chain reaction": qubics act as triggers for other qubics, which in turn trigger further qubics, and so on.
In other words, qubics can live at idle in the Tangle. When certain input data becomes available or changes, they “wake up” and begin processing, which can cause a cascade of other qubics to wake up as new results become available. This makes for a very dynamic programming environment, as you can add new qubics at any time and bind them to any input data.
In addition, anyone can publish a transaction with the qubic code they want to run and see the results in the Tangle for the appropriate rewards.
The following steps describe the life cycle of a qubic:
1. Prepare a qubic for processing
2. Decide on the rewards offered
3. Decide in which assembly the qubic should run
4. Link the qubic to the assembly (or to a specific epoch of that assembly)
5. Wait for the assembly to start processing the qubic
6. Collect the confirmed processing results
7. Evaluate the consensus on the quorum
8. Collect the revealed processing results
9. Pay the promised reward to the quorum participants once the results are available
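The steps above can be sketched as a single owner-side routine. All class and method names here are hypothetical; Qubic defines no such API, and the DummyAssembly exists only to make the flow executable end to end.

```python
class DummyAssembly:
    """Illustrative stand-in for an oracle assembly."""
    def prepare(self, code, reward):
        return {"code": code, "reward": reward}   # package code + metadata
    def attach(self, qubic):
        pass                                      # link the qubic to the assembly
    def await_commits(self, qubic):
        return ["r1", "r1", "r1"]                 # confirmed processing results
    def quorum_reached(self, commits):
        best = max(set(commits), key=commits.count)
        return commits.count(best) / len(commits) >= 2 / 3
    def reveal(self, commits):
        return commits                            # revealed processing results
    def pay(self, reward, results):
        pass                                      # reward the quorum participants

def run_qubic(code, reward, assembly):
    qubic = assembly.prepare(code, reward)     # prepare the qubic, decide rewards
    assembly.attach(qubic)                     # choose an assembly and link the qubic
    commits = assembly.await_commits(qubic)    # wait, then collect confirmed results
    if assembly.quorum_reached(commits):       # evaluate the quorum consensus
        results = assembly.reveal(commits)     # collect the revealed results
        assembly.pay(reward, results)          # pay the promised reward
        return results
    return None                                # no quorum: nothing is released

print(run_qubic("qupla source", 10, DummyAssembly()))   # ['r1', 'r1', 'r1']
```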
To prepare a qubic, the owner packages the Qupla / Abra code and qubic metadata within an IOTA transaction. An IOTA transaction that contains Abra code and qubic metadata is called a qubic transaction. In this regard, Qubic uses the value-free, data-only transaction mechanism of the IOTA protocol.
The owner then selects which assembly should process the qubic by searching the Tangle for specific transactions that provide information about an oracle assembly. These are referred to as assembly transactions. Attaching a qubic transaction to an assembly transaction informs oracles that the qubic is available for processing.
Oracles are thus the Q-Node operators that process qubics. They listen to the Tangle for qubic transactions and construct a private sub-tangle to track them. Qubic transactions must be well-formed and signed, or the nodes will refuse to process them.
Once a qubic transaction is validated, the node prepares the qubic to run on its respective hardware and schedules it for processing. Once the qubic has been processed, a quorum has been reached, and the results have been placed in the Tangle, two things happen:
The qubic goes back to inactive and waits for the next change in inputs.
The cascade effect is triggered so that dependent qubics start up and begin processing with new inputs.
Blockchains, and also IOTA's Tangle, cannot tap data outside their network. An Oracle is a data channel, usually provided by a third party, that nevertheless enables this access to the real world in a trustworthy manner. Since a single Oracle cannot always be trusted (possible malicious intent), multiple Oracles are combined into an assembly, where they contribute their results independently of one another.
An automated quorum procedure (a consensus procedure: at least 2/3 of the oracles in the assembly must agree on the result) then decides whether the result can be released as correct, and rewards are paid out accordingly to the oracles that reported it. With IOTA as the underlying technology, this process can be carried out in real time and enables wide-ranging applications, for example in the financial market, the crypto market, insurance, and the betting and gaming industry.
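The 2/3 rule itself is simple to state in code; a minimal, illustrative sketch (not the actual Qubic consensus implementation):

```python
# A result is released only when at least two thirds of the oracles in the
# assembly independently report it.

def quorum_result(reports, threshold=2 / 3):
    best = max(set(reports), key=reports.count)   # most frequent report
    if reports.count(best) >= threshold * len(reports):
        return best
    return None   # no quorum: nothing is released and nobody is paid

# Five oracle cars report a road temperature; one sensor is faulty.
print(quorum_result([30, 30, 30, 30, 17]))   # 30   (4 of 5 agree)
print(quorum_result([30, 30, 17, 17, 5]))    # None (no 2/3 majority)
```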
In addition to simple decisions such as "open the window when the temperature in the room reaches 30 degrees", more complex use cases are also possible. For example, a car can also be an Oracle: it collects sensor measurement data on pollution concentrations, temperature, humidity, traffic jams, accidents, etc. If several Oracle cars report the same thing, this data is confirmed as "True" and written to the Tangle. This verified data now has value and opens up completely new business areas: environmental agencies, emergency services, police, transportation companies, news agencies, etc. can access it and improve their operations or service offerings.
Note: each Oracle operator may get paid with IOTA tokens for their honest data (from sensors, etc.). It will depend on whether someone is willing to pay for the service offered.
Q-Nodes are Qubic-enabled nodes (powerful servers, IRI nodes). These nodes offer their own unused computing power or storage capacity as a "Proof-of-Work" service in the Tangle. Qubic provides a great incentive to run such Q-Nodes: operators let other participants in the network use their spare computing power or storage capacity as a useful service, and get paid for it with IOTA tokens. Q-Nodes can breathe new life into old, abandoned PCs, and they can provide a way for traditional cryptocurrency "miners" to monetize devices by performing useful computations rather than meaningless (and wasteful) cryptographic puzzles.
In the future, Qubic will leverage this globally untapped computing power to solve all kinds of computational problems. Qubic is optimized for IoT – low power consumption and low memory requirements – but that doesn’t preclude large-scale computation, especially computation that can be parallelized and distributed across a large number of processors.
The scope of the original Abra programming language grew over time to the point where this limited language was no longer sufficient for the newly emerging requirements of Qubic. This was not anticipated, so the IF decided to rework the existing parser / interpreter / compiler into a higher-level programming language called Qupla (short for QUbic Programming LAnguage). According to Eric Hop, the lead developer of Qubic, some concepts in the ternary Abra cannot easily be mapped to an existing binary language, nor can many concepts of existing binary languages be mapped easily (or at all) to Abra. This mapping is nevertheless necessary, because current hardware still works in binary, so both worlds have to be supported for a longer time.
Thus, Qupla was introduced as the parent programming language for Qubic in December 2018. Abra is now the tritcode specification and Qupla generates the Abra tritcode according to this specification in a dataflow architecture focused on the IoT. Qupla is the first higher-level programming language to implement the new Abra specification.
Abra is a ternary-based functional “low-level” intermediate programming language and is extremely minimal. Its runtime model is implemented by a common tritcode specification that allows Abra code to run anywhere.
Abra has been designed with state-of-the-art hardware in mind. Qubic, like the IoT itself, is intended to run on a variety of hardware. In order to run the same code on different hardware platforms, Qubics are packaged in this intermediate language, which essentially means that the language lends itself to easy translation for specific hardware. This makes Qubics largely independent of hardware.
A functional programming language allows for easier analysis to prove the correctness of the code. Writing correct programs is not a trivial task. We have seen in the past how difficult it is to make even simple smart contracts error-free. Having a language that lends itself to automated analysis is a dramatic improvement over a traditional imperative language. As an added benefit, functional programs lend themselves to heavy use of parallelization, meaning that different parts of a larger program can be run simultaneously to take advantage of multiple CPUs or even multiple devices.
Abra consists of functions, which may themselves be composed of functions, down to primitives that return either a ternary value or null, the latter indicating the termination of a logical branch. There is no explicit control flow; instead, data implicitly branches and merges as these functions execute. The final result is taken from the surviving function branch that returned a value. Abra thus belongs to a programming paradigm called dataflow programming and is suited to what is known as wave parallel processing: data can flow through the various levels of the processor without the need for synchronization or buffering delays. It has been shown that this allows clock frequencies 2 to 7 times higher than conventional parallel processing.
A final note on optimizations: Abra comes with a library of predefined basic functions for the most common platforms. Although most functions are defined at a very low level, the nature of Abra allows functions to be overridden with hardware-specific implementations that are more efficient when needed.
Qubic is a decentralized protocol and stands for quorum-based computation (QBC). The code provides universal, cloud or fog-based, permissionless, multiple processing functions on the Tangle. The QBC code puts a second layer on top of the Tangle protocol and uses the IOTA token to pay for services. The code is always executed by multiple IOTA nodes. This protocol is intended to enable a novel form of Smart Contracts and oracles based on the tangle where micropayments are captured in real time during execution. In addition, the protocol is intended to offload quorum-based computing power.
In summary, Qubic is a global supercomputer based on the IOTA platform with wide-ranging applications. Qubic provides a way to securely communicate with the outside world in a trusted environment. It offers a reward system (IOTA tokens) for incentives to honestly participate in the Tangle. In the IoT, there will be millions of participants. Each device will have its own wallet and use the cryptocurrency IOTA to pay each other. Qubic will be used by means of novel smart contracts to automate simple and complex processes.
In detail, the powerful Qubic protocol defines the structure, execution and evolutionary lifecycle of Qubics. It uses the IOTA protocol for secure, decentralized communication between different participants. Since IOTA has its own built-in payment system, IOTA tokens are used to create an incentive system for Qubic operators. Everyone can decide for themselves at what threshold a reward becomes interesting enough to participate.