# softdouble.c

floating point numbers are a useful way to store an approximation of real numbers inside a finite computer memory, often inside single registers. they offer a large dynamic range, with precision proportional to the magnitude of the number.

floating point numbers come in many sizes, defined by the widths in bits of their components. IEEE 754 defines the two most widely used floating point formats: "single precision", 32 bits of data, commonly referred to as a "float", and "double precision", 64 bits of data, referred to as a "double".

in microcontrollers and other low-resource hardware, it's not always the case that a floating point unit is available on board.
for this reason, a lot of compilers support *software floats* (sometimes shortened to *softfloats*), such as gcc's option `-msoft-float`.
here, instead of using hardware instructions to perform operations on floating point numbers, the operations are instead implemented, or "emulated", in software,
representing floating point numbers using equally sized unsigned integers.

in this post, we'll explore my attempt at solving a slightly more contrived problem: what if we have hardware which supports single-precision numbers (floats),
but not double-precision numbers (doubles)? can we make *software doubles*, which emulate double-precision numbers using only hardware capable of operating on single-precision ones?

i implemented this code for code guessing #14. the submission on the site had all its functions renamed and obfuscated for extra obscurity, but here we'll be looking at the original version. if you want to take a look at the code without my comments on it, the original file is available at this link.

### background — what are floats

because we'll be working with binary strings a lot in this article, we'll use the notation $abcd$ to mean the binary number $abcd$. for example, $0101$ would represent the number 5, and $1.101$ represents 1.625.

we'll also use the notation $[abcd]$ to notate binary sequences. for example, if $x=[1011]$, then we have $x_{0}=1, x_{1}=0, x_{2}=1, x_{3}=1$.

we represent a number $x$ using a "mantissa" $m$ of $N$ bits, an "exponent" $e$ of $M$ bits, and a "sign bit" $s$ which is either $+1$ (represented as a zero bit) or $-1$ (represented as a one bit):

$x = s \cdot 1.m_{0}m_{1}m_{2}\cdots m_{N-1} \cdot 2^{e_{0}e_{1}\cdots e_{M-1} - 2^{M-1}}$

reading from right to left, we first use the exponent $e$ to determine the order of magnitude of the number. the subtraction of $2^{M-1}$ removes a number exactly halfway in the range of $e$, giving us a range symmetrical around 0. we then represent the significand using the mantissa of the number. we choose to fix the first bit of the significand to 1, because if it were 0 we could simply decrease the exponent by one and shift the mantissa to the left, giving us multiple ways of representing the same number. we note that the significand here will always be in the half-open range $[1, 2)$. lastly, we multiply by the sign to make the number positive or negative.

as an example, say we wanted to represent the number 6.21 in this format. we'll fix the number of mantissa bits to $N=8$ and the number of exponent bits to $M=4$.

we'll first rewrite the number as $x \cdot 2^{y}$, where the significand $x$ is between 1 and 2. in this case, we see

$6.21 = 1.5525 \cdot 2^{2}$. we write 1.5525 in binary as approximately $1.10001101$, and the exponent $2^{2} = 2^{1010 - 2^{3}}$. from this, we see that $m=[10001101]$ and $e=[1010]$. lastly, we have that the number is positive, so the sign bit $s=[0]$.

```c
#include <stdio.h>

int main() { printf("hello world"); }
```