K24



Two months of calculating


A PDF version here (or right-click the screen for a larger version):

Partial Functions

From “Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I”, John McCarthy, CACM 3(4), April 1960:

A partial function is a function that is defined only on part of its domain. Partial functions necessarily arise when functions are defined by computations because for some values of the arguments the computation defining the value of the function may not terminate.
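As an illustrative sketch (mine, not McCarthy's): a function defined by a search computation, written in Python, which is undefined exactly where the search never terminates.

```python
def inverse(f, y):
    """Search for the smallest natural number x with f(x) == y.

    This defines a partial function of y: when no such x exists,
    the loop never terminates, so the function is undefined there.
    """
    x = 0
    while f(x) != y:
        x += 1
    return x

# inverse(lambda x: x * x, 16) terminates and returns 4.
# inverse(lambda x: x * x, 15) never returns: 15 lies outside
# the domain on which the computation defines a value.
```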

Dependent Variable

Whilst sitting in a coffee shop today, I watched what appeared to be a rowdy
three-year-old kid wearing a “Dependent Variable” t-shirt tumble in.
You don’t get that every day. I’m still waiting for the bib with a description of the Lovász local lemma. Then I can be a kid again and party in style.

Algebrista y Sangrador

“Algebrista y Sangrador” (bonesetter and bloodletter). See here. (Thanks Vincent.)

Extension Fields

These are notes from reading “Effective erasure codes for reliable computer communication protocols” by Luigi Rizzo.

An extension field is a field GF(q) with q = p^r elements, where p is prime and r > 1.

Addition in GF(p^r) is implemented differently than the usual addition modulo p^r: elements are interpreted as polynomials of degree at most r-1 with coefficients in GF(p). The sum of two elements of the field is the coefficient-wise sum of their polynomials, with coefficients taken modulo p.

The product is the polynomial product, reduced modulo a fixed irreducible polynomial of degree r (one with no non-trivial divisors with coefficients in GF(p)), with coefficients again taken modulo p.

In the case of p = 2, i.e., our favorite GF(2^r), elements of the field require exactly r bits for representation; addition and subtraction also become a simple XOR.
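As a concrete sketch (my own, not code from Rizzo's paper), here is GF(2^8) arithmetic in Python, assuming the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B); any irreducible polynomial of degree 8 would do:

```python
IRRED = 0x11B  # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def gf_add(a, b):
    """Addition in GF(2^8): coefficient-wise sum mod 2, i.e. XOR."""
    return a ^ b

def gf_mul(a, b):
    """Multiplication in GF(2^8): carry-less polynomial product,
    reduced modulo the irreducible polynomial."""
    result = 0
    while b:
        if b & 1:                # lowest coefficient of b is set
            result ^= a          # add (XOR) the current shift of a
        a <<= 1                  # multiply a by x
        if a & 0x100:            # degree reached 8: reduce
            a ^= IRRED
        b >>= 1
    return result

assert gf_add(0x57, 0x83) == 0xD4
assert gf_mul(0x57, 0x83) == 0xC1   # the classic AES example product
```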

Multiplication and division in GF(2^r)
In a prime extension field, all non-zero elements can be represented as powers of a single element, α, called the generator. E.g., in GF(5), α = 2, and the non-zero elements of the field are |2^0|_5, |2^1|_5, |2^2|_5, |2^3|_5, |2^4|_5, i.e., 1, 2, 4, 3, 1, …, and α^(q-1) = α^0 = 1.

(Is 1 therefore a member of every prime extension field?)

Each non-zero element x of the field can thus be written as α^(k_x), and thus:
Multiplication: xy = α^(|k_x + k_y|_(q-1))
Quotient: 1/x = α^(q-1-k_x)
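This is how Rizzo-style implementations make multiplication and division cheap: precompute log and antilog tables for α once, then multiply with two lookups and an addition. A sketch in the same GF(2^8) setting as above (note that 0x03 is used as the generator here; 0x02 does not generate the full multiplicative group for the 0x11B polynomial):

```python
# Build log/antilog tables for GF(2^8), generator alpha = 0x03.
EXP = [0] * 255   # EXP[k] = alpha^k
LOG = [0] * 256   # LOG[x] = k such that alpha^k = x (x != 0)
x = 1
for k in range(255):
    EXP[k] = x
    LOG[x] = k
    x = (x << 1) ^ x        # multiply by alpha = x_poly + 1
    if x & 0x100:           # reduce modulo 0x11B if degree hit 8
        x ^= 0x11B

def gf_mul_table(a, b):
    """xy = alpha^(|k_x + k_y|_(q-1)): two lookups and an add."""
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 255]

def gf_inv(a):
    """1/x = alpha^(q-1-k_x)."""
    return EXP[(255 - LOG[a]) % 255]

assert gf_mul_table(0x57, 0x83) == 0xC1
assert gf_mul_table(0x57, gf_inv(0x57)) == 1
```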

Typesetting mathematics

It is a general convention in typesetting mathematical expressions that variable names are typeset in italics but function names are typeset in roman.
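For instance, in LaTeX (my example):

```latex
% Variables are italic by default; known functions like \sin and
% \log come out in roman. \operatorname (from amsmath) sets an
% upright name for functions LaTeX does not know about.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[ y = \sin(x) + \log(x) + \operatorname{erf}(x) \]
\end{document}
```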

Ergodicity

What does it mean for a system to be ergodic? On the surface, we could say that a system is ergodic if the time average of one path taken by the system equals the ensemble average, the cut across all paths at some instant in time. For illustrative purposes, think of each path as a squiggly curve starting from the origin. When looking at multiple paths, we have multiple squiggly lines coming out of the origin. The ordinate (y-value) in this case is the system state, and the abscissa (x-value) is time.

In order for this to happen, the state transitions of the system have to be such that each state of the system is likely to be visited infinitely often. This cannot happen if there is some state which, once entered, cannot be departed from.
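As a toy illustration (my own sketch, assuming a symmetric two-state Markov chain, which is ergodic): the time average along one long path and the ensemble average across many paths at a fixed instant should agree.

```python
import random

def step(state, p=0.3):
    """Two-state chain {0, 1}: flip with probability p, else stay."""
    return 1 - state if random.random() < p else state

# Time average of the state along one long path.
state, total = 0, 0
for _ in range(100_000):
    state = step(state)
    total += state
time_average = total / 100_000

# Ensemble average: many independent paths, sampled at one instant.
count = 0
for _ in range(10_000):
    s = 0
    for _ in range(50):   # run long enough to forget the start
        s = step(s)
    count += s
ensemble_average = count / 10_000

print(time_average, ensemble_average)   # both approach 0.5
```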

How does ergodicity relate to fixed points?

Systematic Code

In a systematic code, the data bits are left unchanged, and the parity or redundancy bits are appended to them.
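As a minimal sketch (my example, not from any particular standard): a single even-parity bit appended to the data bits is about the simplest systematic code.

```python
def encode(bits):
    """Systematic encoding: the data bits appear unchanged,
    followed by one even-parity redundancy bit."""
    parity = sum(bits) % 2
    return bits + [parity]

assert encode([1, 0, 1, 1]) == [1, 0, 1, 1, 1]  # data prefix intact
```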

Notes from “What Every Computer Scientist Should Know About Floating-Point Arithmetic”

Floating point is one way of representing real numbers in a fixed number of bits. Other approaches are fixed point, floating slash and signed logarithm representations.

In floating point, a number is represented as d.dd…d × β^e. The p digits d.dd…d are called the significand (the older term mantissa is antiquated), β is the base, and e is the exponent. The number p is the precision.

Not all real numbers are exactly representable by a floating point number. For example, with β=2, there is no exact representation for 0.1.
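You can see this directly in Python, whose floats are binary (β = 2): printing 0.1 to twenty digits exposes the rounding.

```python
print(f"{0.1:.20f}")      # 0.10000000000000000555...
print(0.1 + 0.2 == 0.3)   # False: both sides carry rounding error
```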

Normalization and the Hidden Bit
A number may have multiple floating point representations. For example, if β = 10 and p = 2, we can represent 0.1 as 0.1 × β^0 or as 1.0 × β^(-1). A floating point representation is said to be normalized if d_0 is non-zero. With β = 2, the leading digit of a normalized number is always 1, so if we work only with normalized representations, we can elide the explicit statement of d_0 and use p-1 digits to represent a p-digit significand. The implicit d_0 in this case is called the implicit bit or hidden bit. The first use of this idea is attributed to Konrad Zuse.

As concrete examples, here are the basic formats defined by the IEEE 754 floating point standard (s = sign bit, e = exponent bits, f = fraction bits):

  Format               s    e     f     total
  Single precision     1    8     23    32
  Double precision     1    11    52    64

The single extended and double extended formats mandate only minimum widths (an exponent of at least 11 and 15 bits, respectively), leaving the exact layout to the implementation.

The motivation for the particular ordering of the fields in the floating point format words stems from a desire by the IEEE 754 committee to enable fast operations, such as search and sorting, on floating point numbers using integer arithmetic. Thus, when interpreted as integer values, larger (positive) floating point numbers should compare as larger integers.

For the same reason, the exponent is stored in biased notation, with a bias of 127 for single precision and a bias of 1023 for double precision. This is so that negative exponents have a smaller value than positive exponents when interpreted as integers. An exponent of -1 is stored as -1 + 127 = 126 = 01111110, whereas an exponent of 1 is stored as 1 + 127 = 128 = 10000000.

The value of a particular floating point representation is therefore (-1)^s × (1 + significand) × 2^(exponent − bias), where the 1 added to the significand accounts for the hidden bit.
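A short sketch (mine, not from the paper) that pulls apart a Python float with the struct module and reconstructs its value from the sign, exponent, and fraction fields (normalized numbers only):

```python
import struct

def decode(x):
    """Decode an IEEE 754 double: 1 sign, 11 exponent, 52 fraction bits."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    fraction = (bits & ((1 << 52) - 1)) / (1 << 52)
    # Normalized numbers: the hidden bit supplies the leading 1.
    return (-1) ** sign * (1 + fraction) * 2 ** (exponent - 1023)

assert decode(0.1) == 0.1
assert decode(-3.5) == -3.5
```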

Measuring Error
The results of a floating point computation (for a representation with some p, β, e_min and e_max) may differ from the real-number value. For example, if β = 10 and p = 3 and the floating point result is 3.12 × 10^0 while the real-valued result should be 3.14159, then the error is roughly 2 units in the last place, or 2 ulps, one unit in the last place here being 0.01.

Another way of stating the error, the relative error, is the magnitude of the error divided by the magnitude of the correct real-number result. So in the above example, the relative error is 0.02159/3.14159 ≈ 0.00687.

Based on these definitions of error in ulps and relative error, you can show that the relative error corresponding to an error of 0.5 ulps can vary by as much as a factor of β. This is termed wobble. So ulps are wobbly relative to relative error, huh?
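A small numerical check of the wobble (my own sketch, with β = 10 and p = 3): a fixed error of 0.5 ulp corresponds to noticeably different relative errors at the two ends of a decade.

```python
ulp = 0.01            # one unit in the last place for p = 3, e = 0
half_ulp = ulp / 2

# Relative error of a 0.5 ulp error at both ends of the decade [1, 10).
low = half_ulp / 1.00    # 0.005
high = half_ulp / 9.99   # ~0.0005
print(low / high)        # ~9.99, i.e. close to beta = 10
```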
