## Partial Functions

From “Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I”, John McCarthy, CACM, 3(4), April 1960:

> A partial function is a function that is defined only on part of its domain. Partial functions necessarily arise when functions are defined by computations because for some values of the arguments the computation defining the value of the function may not terminate.

## Dependent Variable

Whilst I was sitting in a coffee shop today, what appeared to be a rowdy three-year-old kid wearing a “Dependent Variable” t-shirt tumbled in.

You don’t get that every day. I’m still waiting for the bib with a description of the *Lovász local lemma*. Then I can be a kid again and party in style.

## Algebrista y Sangrador

“Algebrista y Sangrador” (bonesetter and bloodletter). See here. (Thanks Vincent.)

## Extension Fields

These are notes from reading “Effective erasure codes for reliable computer communication protocols” by Luigi Rizzo.

An *extension field* is a field *GF(q)* with *q = p^r* elements, where *p* is prime and *r > 1*.

**Addition** in *GF(p^r)* can be implemented differently than the usual addition modulo *p*: elements are interpreted as polynomials of degree *r−1* with coefficients in *GF(p)*. The sum of two elements of the field is the sum of their coefficients modulo *p*.

**Product** is a polynomial product modulo an irreducible polynomial of degree *r* (one without divisors in *GF(p^r)*) and with coefficients again modulo *p*.

In the case of *p = 2*, i.e. our favorite *GF(2^r)*, elements of the field require exactly *r* bits for representation; addition and subtraction also become a simple XOR.
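As a sketch of the arithmetic described above, here is GF(2^r) addition and multiplication in Python for *r = 8*. The choice of irreducible polynomial (*x^8 + x^4 + x^3 + x + 1*, i.e. `0x11B`, the one used by AES) is my own for illustration; Rizzo’s code may well use a different one.

```python
def gf_add(a: int, b: int) -> int:
    """Addition (and subtraction) in GF(2^r) is just bitwise XOR."""
    return a ^ b

def gf_mul(a: int, b: int, poly: int = 0x11B, r: int = 8) -> int:
    """Carry-less polynomial product, reduced modulo an irreducible polynomial."""
    result = 0
    for _ in range(r):
        if b & 1:
            result ^= a          # add the current multiple of a
        b >>= 1
        a <<= 1                  # multiply a by x
        if a & (1 << r):         # degree reached r: reduce modulo poly
            a ^= poly
    return result

print(hex(gf_add(0x57, 0x83)))   # 0xd4
print(hex(gf_mul(0x57, 0x83)))   # 0xc1
```

The multiplication loop is the shift-and-add ("Russian peasant") method, reducing after every shift so intermediate values stay within *r+1* bits.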

**Multiplication and division in GF(2^r)**

In a prime extension field, all elements can be represented as powers of a single number, α, called the *generator*. E.g., in *GF(5)*, α = *2*, and the elements of the field are *|2^0|_5*, *|2^1|_5*, *|2^2|_5*, *|2^3|_5*, *|2^4|_5*, i.e., *1, 2, 4, 3, …*, and *α^{q−1} = α^0 = 1*.

(Is *1* therefore a member of every prime extension field ?)
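A quick numerical check of the *GF(5)* example, using Python’s built-in modular exponentiation:

```python
# Powers of the generator α = 2 in GF(5) cycle through every non-zero
# element, and α^(q-1) = α^0 = 1.
q, alpha = 5, 2

powers = [pow(alpha, k, q) for k in range(q)]   # |2^k|_5 for k = 0..4
print(powers)  # [1, 2, 4, 3, 1]

# Multiplying field elements corresponds to adding exponents mod q-1:
x, y = pow(alpha, 3, q), pow(alpha, 2, q)       # x = 3, y = 4
assert (x * y) % q == pow(alpha, (3 + 2) % (q - 1), q)
```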

Each non-zero element of the field, *x*, can thus be written as *α^{k_x}*, and thus:

**Multiplication**: *xy = α^{|k_x + k_y|_{q−1}}*

**Quotient**: *1/x = α^{q−1−k_x}*

## Typesetting mathematics

It is a general convention in typesetting mathematical expressions that variable names are typeset in *italics* but function names are typeset in Roman.
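For example, in LaTeX this is why function names get their own Roman-typesetting macros (a small illustration of my own, not from the original note):

```latex
% $sin(x)$ would set "sin" in italics, as if it were the product of
% variables s, i and n. The operator macros keep function names Roman:
$\sin(x) + \log(y) = f(x)$   % sin, log in Roman; x, y, f in italics

% For a function name without a predefined macro:
$\operatorname{erf}(x)$
```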

## Ergodicity

What does it mean for a system to be *ergodic* ? On the surface, we could say that a system is ergodic if the *time average* of one path taken by the system equals the *ensemble average*, the cut across all paths at some instant in time. For illustrative purposes, think of each path as a squiggly curve, starting from the origin. When looking at multiple paths, we have multiple squiggly lines coming out of the origin. The *y*-value or *ordinate* in this case is the system state, and the *x*-value or *abscissa* is time.

In order for this to happen, the state transitions of the system have to be such that each state of the system is likely to be visited infinitely often. This cannot happen if there is some state which, once entered, cannot be departed from.
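The time-average-equals-ensemble-average idea can be sketched numerically. Below is a minimal simulation (the chain and its parameters are my own invention): a two-state system that flips state with probability 0.3 is ergodic, so one long path and a cut across many paths should agree.

```python
import random

FLIP = 0.3  # probability of switching state on each step

def step(state: int, rng: random.Random) -> int:
    return 1 - state if rng.random() < FLIP else state

rng = random.Random(42)

# Time average: one path, many steps.
state, total, steps = 0, 0, 100_000
for _ in range(steps):
    state = step(state, rng)
    total += state
time_avg = total / steps

# Ensemble average: many paths, observed at one (late) instant.
paths, total = 10_000, 0
for _ in range(paths):
    s = 0
    for _ in range(50):          # long enough to forget the starting state
        s = step(s, rng)
    total += s
ensemble_avg = total / paths

print(time_avg, ensemble_avg)    # both near the stationary value 0.5
```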

How does ergodicity relate to fixed-points ?

## Systematic Code

In a systematic code, data bits are left unchanged, and the parity or redundancy bits are concatenated onto them.
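A toy illustration (my own, not from any particular code): a systematic encoder that appends a single even-parity bit, leaving the data bits verbatim at the front of the codeword.

```python
def encode(data_bits: list[int]) -> list[int]:
    """Systematic encoding: data unchanged, redundancy concatenated."""
    parity = sum(data_bits) % 2           # even-parity redundancy bit
    return data_bits + [parity]

codeword = encode([1, 0, 1, 1])
print(codeword)            # [1, 0, 1, 1, 1]
assert codeword[:4] == [1, 0, 1, 1]   # the data bits pass through untouched
```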

## Notes from “What Every Computer Scientist Should Know About Floating Point”

Floating point is one way of representing *real numbers* in a fixed number of bits. Other approaches are fixed point, floating slash and signed logarithm representations.

In floating point, a number is represented as `d.ddd...d × β^e`. The `p` digits `d.ddd...d` are called the *significand* (or *mantissa*, which is antiquated). `β` is the *base* and `e` is the *exponent*. The number `p` is the *precision*.

Not all real numbers are exactly representable by a floating point number. For example, with β=2, there is no exact representation for *0.1*.
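This is easy to check in Python, whose `float` is an IEEE double (so `β=2`): we can ask which rational number is actually stored for `0.1`.

```python
# The stored value is the nearest binary double, not 1/10.
num, den = (0.1).as_integer_ratio()
print(num, den)          # 3602879701896397 36028797018963968 (= 2^55)
print(num * 10 == den)   # False: not exactly 1/10
print(f"{0.1:.20f}")     # 0.10000000000000000555...
```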

**Normalization and the Hidden Bit**

A number may have multiple floating point representations. For example, if `β=10` and `p=2`, we can represent `0.1` as `0.1 × β^0` or as `1.0 × β^-1`. A floating point representation is said to be normalized if `d_0` is non-zero; with `β=2`, this means `d_0` is `1`. Now, if working with only normalized representations of floating point numbers, we could elide the explicit statement of `d_0`, and use `p-1` precision digits to represent a `p`-precision number. The implicit `d_0` in this case is called the *implicit bit* or *hidden bit*. The first use of this idea is attributed to Konrad Zuse.

As concrete examples, here are the formats defined by the IEEE 754 floating point standard:

**[Single Precision]**

*s* | *e*_{0} … *e*_{7} | *f*_{0} … *f*_{22} (1 sign bit, 8 exponent bits, 23 fraction bits; 32 bits in total)

**[Single Precision Extended]**

**[Double Precision]** 1 sign bit, 11 exponent bits, 52 fraction bits; 64 bits in total.

**[Double Precision Extended]**

The motivation for the particular ordering of the fields in the floating point format words stems from a desire by the IEEE 754 committee to enable fast operations, such as search, on floating point numbers using integer arithmetic. Thus, when interpreted as integer values, larger floating point numbers should also compare as larger, etc.

For the same reasons, the exponent is in *biased notation*, with a bias of *127* for single precision and a bias of *1023* for double precision. This is so that negative exponents have a smaller value, when interpreted as integers, than positive exponents. An exponent of *-1* is stored as *-1 + 127 = 126* = `01111110`, whereas *1* is stored as *1 + 127 = 128* = `10000000`.

The value of a particular floating point representation is therefore **(-1)^s × (1 + significand) × 2^(exponent − bias)**, where the *1* added to the significand accounts for the hidden bit.
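The formula can be checked by unpacking the raw bits of a single-precision float. This sketch (names are mine) ignores the special cases of zero, denormals, infinities and NaN, which do not follow the normalized formula:

```python
import struct

def decode_single(x: float) -> float:
    """Recompute a normalized single-precision value from its raw bit fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    s = bits >> 31               # sign bit
    e = (bits >> 23) & 0xFF      # 8-bit biased exponent
    f = bits & 0x7FFFFF          # 23-bit fraction (no hidden bit)
    return (-1) ** s * (1 + f / 2**23) * 2.0 ** (e - 127)

print(decode_single(6.5))    # 6.5
print(decode_single(-0.75))  # -0.75
```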

**Measuring Error**

The results of a floating point computation (for a representation with `p`, `β`, `e_min` and `e_max`) may differ from the real number value. For example, if the floating point representation of the result is `3.12 × 10^-2`, and the real-valued result should be `3.14159 × 10^-2`, then the error is `0.02`, or `2` **u**nits in the **l**ast **p**lace, i.e. *2 ulps*.

Another way of stating the error, the *relative error*, is to state it as the magnitude of the error divided by the magnitude of the correct real number representation. So in the above example, the relative error is `0.02159/3.14159 ≈ 0.00687232`.
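The arithmetic above can be reproduced directly (here with `p = 3`, `β = 10`):

```python
# Computed result vs the infinitely precise answer from the example.
computed = 3.12e-2
exact = 3.14159e-2

ulp = 1e-4                                  # one unit in the last place of 3.12e-2
error_ulps = abs(exact - computed) / ulp
relative_error = abs(exact - computed) / abs(exact)

print(round(error_ulps, 3))                 # 2.159, i.e. about 2 ulps
print(round(relative_error, 6))             # 0.006872
```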

Based on these definitions of relative error and error in ulps, one can show that for an error of 0.5 ulps, the corresponding relative error might vary by as much as a factor of β. This is termed *wobble*. So *ulps* are *wobbly* relative to *relative error*, huh ?