# 5 Numerics & Ranges

Written by Ray Fix


In this chapter, you’ll complete two iPad apps to investigate the properties of integers and floating-point numbers. The first of these apps is BitViewer, which lets you look at bit-level representations and operations. The second app is Mandelbrot, which allows you to test Apple’s new Swift numerics package. The app lets you visualize the precision of different floating-point types. Finally, you’ll use a playground to explore how Swift implements ranges and strides. Throughout the chapter, you’ll flex your generic programming muscles and write code that works with a family of types.

This chapter might feel a little academic because it deals with the low-level machine representation of numbers. A little knowledge in this area will give you extra confidence and the ability to deal with low-level issues if they ever come up. For example, if you deal with file formats directly, or find yourself worrying about numerical range and accuracy, these topics will come in useful. Swift numerics is also an excellent case study for using protocols and generics that you looked at in previous chapters.

## Representing numbers

Computers are number-crunching machines made of switching transistors. Consider the base-10 number 123.75. You can represent it as 1, 2, 3, 7 and 5 if you multiply each digit by an appropriate weight:

The diagram shows how the number is composed. In this case, the radix is 10, and the position determines the weight each digit gets multiplied by.

Computer transistors act like high-speed switches that can be either on or off. What would it look like if you had only two states (0 and 1) to represent a number instead of 10? 123.75 would look like this:

The radix here is two. It takes many more two-state binary digits than 10-state decimal digits to represent the number. Storing decimal digits directly would be less efficient in terms of space and computation: it takes four bits to store a 10-state decimal digit, so you waste `4 - log2(10)`, or about 0.678, bits for each digit you store.
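To make the positional weights concrete, here’s a quick sketch (not from the project) that rebuilds 123.75 from its binary digits `1111011.11`:

```swift
// 123.75 in binary is 1111011.11
// Integer weights: 64 32 16 8 4 2 1, fractional weights: 0.5 0.25
let integerPart = 0b1111011       // 64 + 32 + 16 + 8 + 2 + 1 = 123
let fractionalPart = 0.5 + 0.25   // the two bits after the binary point
let value = Double(integerPart) + fractionalPart
print(value)  // 123.75
```

Both fractional weights are exact powers of two, so the result is exactly 123.75 with no rounding.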

The first bit (in the 64 position) has a special name. It’s called the most significant bit or MSB. That’s because it has the most significant effect on the overall value. The last bit (in the 0.25 position) is called the least significant bit or LSB. It has the smallest effect on the overall value.

You can see that number systems rely on exponents. If you need a refresher on those, Khan Academy offers a quick review: https://bit.ly/3k0Tsin.

## Integers

The first personal computers could deal with only 1 byte — 8 bits — at a time (numbers from 0 to 255). You needed to juggle these small values around to produce anything larger. Over the years, the size of the information computers could handle repeatedly doubled — to 16 bits, 32 bits and now 64 bits on the latest Intel and Apple processors.

### Protocol-oriented integers

Swift’s integer types are `struct`-based values that wrap an LLVM numeric built-in type. Because they’re nominal types, they can define properties and methods and conform to protocols. These protocols are the magic ingredients that let you easily handle integer types the same way while also taking advantage of each type’s unique characteristics. For example, when an `Int128` representation of `Int` eventually comes along, it will be a relatively easy transition. The protocol hierarchy for integers looks like this:
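As a small taste of what these shared protocols buy you, here’s a sketch (not part of BitViewer) of a single generic function that works with any fixed-width integer type:

```swift
// One function, many types: FixedWidthInteger supplies bitWidth, and its
// BinaryInteger conformance supplies the radix-2 String initializer.
func bitDescription<T: FixedWidthInteger>(_ value: T) -> String {
  "\(T.bitWidth)-bit, binary \(String(value, radix: 2))"
}

print(bitDescription(Int8(5)))    // 8-bit, binary 101
print(bitDescription(UInt32(5)))  // 32-bit, binary 101
```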

### Getting started with BitViewer

To get hands-on experience with integers, open the BitViewer project in the projects/starter folder for this chapter. Run the app on a device or simulator, rotate into landscape and tap the show-sidebar item in the upper-left. You’ll see a screen like this:

### Understanding two’s complement

Using BitViewer, you can poke at the bits to see how the values change. For `Int8`, the least significant bit (LSB) is position zero, and the most significant magnitude bit (the highest bit below the sign bit) is position six. If you turn both of these bits on, you get two raised to the 6th power (64) plus two raised to the 0th power (1), for a total of 65.

#### Negation in two’s complement

The unary `-` operator and the `negate` method change the sign of an integer, but what happens to the bits? To negate a number using two’s complement, toggle all the bits and add one. For example, `0b00000010` (2) negated is `0b11111101` + 1 = `0b11111110` (-2). Now try it yourself with a few numbers in BitViewer. Remember that when you add the one, you must propagate the carry to get the right answer.
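You can check the same arithmetic in a playground. The overflow operator `&+` wraps around instead of trapping, which matters for the one value with no positive counterpart:

```swift
let value: Int8 = 0b0100_0001   // bits 6 and 0 set: 64 + 1
print(value)                    // 65

// Toggle all the bits, then add one, to negate:
let negated = ~value &+ 1
print(negated)                  // -65

// Int8.min (-128) has no positive counterpart, so negation wraps to itself:
print(~Int8.min &+ 1 == Int8.min)  // true
```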

### Exercises

- What are the minimum and maximum representable values of a make-believe `Int4` and `Int10` type?
- What bit pattern represents -2 using `Int4`? (Add it to 2 to see if you get zero.)
- List all the protocols shown in this chapter (the above diagrams) that an `Int32` supports.

### Adding integer operations to BitViewer

Time to add some features to the BitViewer app. Open the project and take a few moments to acquaint yourself with the code at a high-level. Here are some key points to notice:

``````
enum IntegerOperation<IntType: FixedWidthInteger> {
  // 1
  typealias Operation = (IntType) -> IntType

  // 2
  struct Section {
    let title: String
    let items: [Item]
  }

  // 3
  struct Item {
    let name: String
    let operation: Operation
  }
}
``````
``````
extension IntegerOperation {
  static var menu: [Section] {
    [
      // Add sections below
    ]
  }
}
``````
``````
// TODO: - Uncomment after implementing IntegerOperation.
// : etc
``````

#### Setting value operations

Back in Model/NumericOperation.swift, add the following to the static `menu` property:

``````
Section(title: "Set Value", items:
  [
    Item(name: "value = 0") { _ in 0 },
    Item(name: "value = 1") { _ in 1 },
    Item(name: "all ones") { _ in ~IntType.zero },
    Item(name: "value = -1") { _ in -1 },
    Item(name: "max") { _ in IntType.max },
    Item(name: "min") { _ in IntType.min },
    Item(name: "random") { _ in
      IntType.random(in: IntType.min...IntType.max)
    }
  ]), // To be continued
``````

#### Endian operations

The term endian comes from “Gulliver’s Travels” by Jonathan Swift, in which two competing factions clash over whether you should crack the little end or the big end of an egg. In computing, endianness describes byte order: a big-endian representation stores the most significant byte first, while a little-endian representation stores the least significant byte first.

``````
Section(title: "Endian", items:
  [
    Item(name: "bigEndian") { value in value.bigEndian },
    Item(name: "littleEndian") { value in value.littleEndian },
    Item(name: "byteSwapped") { value in value.byteSwapped }
  ]),
``````
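A playground check (not part of the app) shows how these properties relate. `byteSwapped` always reverses the byte order, while `bigEndian` and `littleEndian` swap only when the host’s byte order differs:

```swift
let value: UInt32 = 0x12345678

// byteSwapped unconditionally reverses the four bytes.
print(String(value.byteSwapped, radix: 16))  // 78563412

// Intel and Apple silicon are little-endian hosts, so there littleEndian
// is a no-op and bigEndian performs a full swap.
print(value.littleEndian == value)
```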

#### Bit manipulation operations

Still inside `IntegerOperation`’s `menu`, add some bit manipulation operations:

``````
Section(title: "Bit Manipulation", items:
  [
    Item(name: "toggle") { value in ~value },
    Item(name: "value << 1") { value in value << 1 },
    Item(name: "value >> 1") { value in value >> 1 },
    Item(name: "reverse") { print("do later"); return $0 }
  ]),
``````

#### Arithmetic operations

Add these arithmetic operations:

``````
Section(title: "Arithmetic", items:
  [
    Item(name: "value + 1") { value in value &+ 1 },
    Item(name: "value - 1") { value in value &- 1 },
    Item(name: "value * 10") { value in value &* 10 },
    Item(name: "value / 10") { value in value / 10 },
    Item(name: "negate") { value in ~value &+ 1 }
  ])
``````

#### Implementing a custom reverse operation

To flex your bit-hacking muscles, make an extension on `FixedWidthInteger` that reverses all the bits. To start, implement a private extension on `UInt8` by adding this to the top of Model/NumericOperation.swift:

``````
private extension UInt8 {
  mutating func reverseBits() {
    self = (0b11110000 & self) >> 4 | (0b00001111 & self) << 4
    self = (0b11001100 & self) >> 2 | (0b00110011 & self) << 2
    self = (0b10101010 & self) >> 1 | (0b01010101 & self) << 1
  }
}
``````

``````
extension FixedWidthInteger {
  var bitReversed: Self {
    var reversed = byteSwapped
    withUnsafeMutableBytes(of: &reversed) { buffer in
      buffer.indices.forEach { buffer[$0].reverseBits() }
    }
    return reversed
  }
}
``````
``````
Item(name: "reverse") { value in value.bitReversed }
``````

#### Improving bitReversed

The above code requires eight iterations to reverse the native 64-bit type. Can you do better and use the full width of the processor? Yes, you can.

``````
extension FixedWidthInteger {
  var bitReversed: Self {
    precondition(MemoryLayout<Self>.size <=
                 MemoryLayout<UInt64>.size)

    var reversed = UInt64(truncatingIfNeeded: self.byteSwapped)
    reversed = (reversed & 0xf0f0f0f0f0f0f0f0) >> 4 |
               (reversed & 0x0f0f0f0f0f0f0f0f) << 4
    reversed = (reversed & 0xcccccccccccccccc) >> 2 |
               (reversed & 0x3333333333333333) << 2
    reversed = (reversed & 0xaaaaaaaaaaaaaaaa) >> 1 |
               (reversed & 0x5555555555555555) << 1
    return Self(truncatingIfNeeded: reversed)
  }
}
``````

## Floating-point

Floating-point numbers can represent fractional values. The standard floating-point types include a 64-bit `Double`, a 32-bit `Float` and a relatively new 16-bit `Float16`. There’s an Intel-only `Float80` type dating back to when PCs had separate math co-processor chips. Because ARM doesn’t support it, you’ll only encounter this type on an Intel-based platform, such as an Intel Mac or the iPad simulator running on an Intel Mac.

### The floating-point protocols

Just as integers have a hierarchy of protocols to unify their functionality, floating-point numbers conform to protocols that look like this:

### Understanding IEEE-754

A 64-bit two’s complement integer can range from a colossal -9,223,372,036,854,775,808 (`Int64.min`) to 9,223,372,036,854,775,807 (`Int64.max`). But a 64-bit `Double` can reach an unfathomable ±1.8e+308 (as reported by `Double.greatestFiniteMagnitude` via the `FloatingPoint` protocol). Moreover, this same `Double` can represent numbers as small as 4.9e-324 (as reported by `Double.leastNonzeroMagnitude`). How is this even possible?

``````
(-1 ^ sign) * significand * (radix ^ exponent)
``````
``````
bias = 2 ^ (exponentBitCount - 1) - 1
``````
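You can verify both formulas against `Double`’s `BinaryFloatingPoint` properties in a playground. `Double` has an `exponentBitCount` of 11, so its bias works out to 1023:

```swift
let value = 123.75
// 123.75 is 1.93359375 * 2^6: a positive sign, exponent 6 and
// significand 1.93359375.
print(value.sign, value.exponent, value.significand)

// Rebuild the value from its parts (the exponent is 6, so 1 << 6 == 64).
let rebuilt = value.significand * Double(1 << value.exponent)
print(rebuilt)  // 123.75

let bias = (1 << (Double.exponentBitCount - 1)) - 1
print(bias)  // 1023
```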

### Adding floating-point operations to BitViewer

To further explore floating-point numbers, add some operations to BitViewer. Again, open the source file Model/NumericOperation.swift and add this to the bottom:

``````
enum FloatingPointOperation<FloatType: BinaryFloatingPoint> {
  typealias Operation = (FloatType) -> FloatType

  struct Section {
    let title: String
    let items: [Item]
  }

  struct Item {
    let name: String
    let operation: Operation
  }

  static var menu: [Section] {
    [
      // Add sections below
    ]
  }
}
``````
``````
// TODO: - Uncomment after implementing FloatingPointOperation.
// : etc
``````

#### Setting value operations

Back in Model/NumericOperation.swift, add this section to the floating-point `menu` property.

``````
Section(title: "Set Value", items:
  [
    Item(name: "value = 0") { _ in 0 },
    Item(name: "value = 0.1") { _ in FloatType(0.1) },
    Item(name: "value = 0.2") { _ in FloatType(0.2) },
    Item(name: "value = 0.5") { _ in FloatType(0.5) },
    Item(name: "value = 1") { _ in 1 },
    Item(name: "value = -1") { _ in -1 },
    Item(name: "value = pi") { _ in FloatType.pi },
    Item(name: "value = 100") { _ in 100 }
  ]),
``````

#### Subnormals

A floating-point value can be normal, subnormal or, in the case of zero, neither. A normal number uses the implicit leading-bit convention you saw with `1.0`. A subnormal (also called denormal) number assumes a leading bit of zero instead, which supports really small magnitudes. You create a subnormal number by keeping all exponent bits zero and setting at least one significand bit. Try it and see!
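A playground sketch confirms this. Halving the smallest normal value drops into the subnormal range, and zero is neither normal nor subnormal:

```swift
let tiny = Double.leastNormalMagnitude / 2
print(tiny.isSubnormal)                          // true

// The very smallest nonzero Double is also subnormal.
print(Double.leastNonzeroMagnitude.isSubnormal)  // true

print(0.0.isNormal, 0.0.isSubnormal)             // false false
```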

#### Set special values operations

Add another section to the floating-point `menu` property:

``````
Section(title: "Set Special Values", items:
  [
    Item(name: "infinity") { _ in
      FloatType.infinity
    },
    Item(name: "NaN") { _ in
      FloatType.nan
    },
    Item(name: "Signaling NaN") { _ in
      FloatType.signalingNaN
    },
    Item(name: "greatestFiniteMagnitude") { _ in
      FloatType.greatestFiniteMagnitude
    },
    Item(name: "leastNormalMagnitude") { _ in
      FloatType.leastNormalMagnitude
    },
    Item(name: "leastNonzeroMagnitude") { _ in
      FloatType.leastNonzeroMagnitude
    },
    Item(name: "ulpOfOne") { _ in
      FloatType.ulpOfOne
    }
  ]),
``````

#### Stepping and functions operations

The final two sections explore the ulp (unit of least precision) of floating-point numbers. Add them to the menu.

``````
Section(title: "Stepping", items:
  [
    Item(name: ".nextUp") { $0.nextUp },
    Item(name: ".nextDown") { $0.nextDown },
    Item(name: ".ulp") { $0.ulp },
    Item(name: "add 0.1") { $0 + 0.1 },
    Item(name: "subtract 0.1") { $0 - 0.1 }
  ]),
Section(title: "Functions", items:
  [
    Item(name: ".squareRoot()") { $0.squareRoot() },
    Item(name: "1/value") { 1/$0 }
  ])
``````
``````
if value == value + 1 {
  fatalError("Can this happen?")
}
``````
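Yes, it can happen. Once a value’s `ulp` exceeds 2, the nearest representable neighbor is more than 1 away, so adding 1 rounds straight back to the original value. `Float`, with its 24-bit significand, reaches that point at 2^24:

```swift
let value: Float = 16_777_216  // 2^24, where Float's ulp becomes 2
print(value.ulp)               // 2.0
print(value + 1 == value)      // true
```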

## Full generic programming with floating-point

With the BitViewer app, you saw how you could use `BinaryFloatingPoint` to operate on floating-point types generically. This protocol is useful but lacks methods for logarithms, exponentials and trig functions. If you want those, you can use overloaded functions that call into the operating system’s C math library. However, you can’t call these overloads generically.

### Understanding the improved numeric protocols

The Swift Numerics package, which will eventually become part of Swift proper, adds important protocols to the standard library, including: `AlgebraicField`, `ElementaryFunctions`, `RealFunctions` and `Real`. They fit together with the currently shipping protocols like this:

``````
func compute<RealType: Real>(input: RealType) -> RealType {
  // ...
}
``````

### Getting started with Mandelbrot

Open the Mandelbrot starter project and build and run the app. You’ll see that the Swift Numerics package is loaded and built as a dependency.

### What is the Mandelbrot set?

In mathematics, a set is a collection of mathematical objects. The Mandelbrot set is a collection of complex numbers. Sound complex? It isn’t. Complex numbers are just two-dimensional points where the x-coordinate is a plain old real number, and the y-coordinate is an imaginary number whose unit is i. The remarkable thing about i is that when you square it, it equals -1: squaring an imaginary number lands you back on the real (x) axis.
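The app uses `Complex` from the Numerics package, but a hypothetical bare-bones type shows the arithmetic. Expanding (a + bi)(c + di) gives (ac - bd) + (ad + bc)i, and the i * i = -1 rule is what flips the sign on the bd term:

```swift
// A hypothetical minimal complex type, just to illustrate the arithmetic.
struct MiniComplex {
  var real, imaginary: Double

  static func * (lhs: Self, rhs: Self) -> Self {
    Self(real: lhs.real * rhs.real - lhs.imaginary * rhs.imaginary,
         imaginary: lhs.real * rhs.imaginary + lhs.imaginary * rhs.real)
  }
}

// Squaring i lands on the real axis at -1.
let i = MiniComplex(real: 0, imaginary: 1)
let iSquared = i * i
print(iSquared.real, iSquared.imaginary)  // -1.0 0.0
```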

### Converting to and from CGPoint

SwiftUI and UIKit depend on Core Graphics for rendering. The red dot that you can drag around in the interface represents a `CGPoint` with an `x` and `y` value consisting of `CGFloat`s.

### Add a test point path

Let the generic programming using `Real` begin! The method takes a test point (the dot you can drag around) and computes the successive squares up to `maxIterations`. To implement it, open the file MandelbrotMath.swift and find `points(start:maxIterations:)`.

``````
static func points<RealType: Real>(start: Complex<RealType>,
                                   maxIterations: Int)
  -> [Complex<RealType>] {
  // 1
  var results: [Complex<RealType>] = []
  results.reserveCapacity(maxIterations)

  // 2
  var z = Complex<RealType>.zero
  for _ in 0..<maxIterations {
    z = z * z + start
    defer {
      results.append(z) // 3
    }
    // 4
    if z.lengthSquared > 4 {
      break
    }
  }
  return results
}
``````

#### Explore the landmarks

The interface provides a set of named landmarks to try. Tap the landmark name, and the starting dot moves to a preset position.

### Implement Mandelbrot image generation

Time to turn your floating-point generic programming up to 11. You’ll do just what you did above, but instead of a list of points, you want to know how many iterations it took to jump outside the radius-two circle. You could use the same method and call `.count` on its result, but that would be too inefficient because you want to do this for millions of points as fast as you can.

``````
@inlinable static
func iterations<RealType: Real>(start: Complex<RealType>,
                                max: Int) -> Int {
  var z = Complex<RealType>.zero
  var iteration = 0
  while z.lengthSquared <= 4 && iteration < max {
    z = z * z + start
    iteration += 1
  }
  return iteration
}
``````
``````
``````
static func makeImage<RealType: Real & CGFloatConvertable>(
  for realType: RealType.Type,
  imageSize: CGSize,
  displayToModel: CGAffineTransform,
  maxIterations: Int,
  palette: PixelPalette
) -> CGImage? {
  // TODO: implement (2)
  nil
}
``````
``````
static func makeImage<RealType: Real & CGFloatConvertable>(
  for realType: RealType.Type,
  imageSize: CGSize,
  displayToModel: CGAffineTransform,
  maxIterations: Int,
  palette: PixelPalette
) -> CGImage? {
  let width = Int(imageSize.width)
  let height = Int(imageSize.height)

  let scale = displayToModel.a
  let upperLeft = CGPoint.zero.applying(displayToModel)

  // Continued below
  return nil
}
``````
``````
let bitmap = Bitmap<ColorPixel>(width: width, height: height) {
  width, height, buffer in
  for y in 0 ..< height {
    for x in 0 ..< width {
      let position = Complex(
        RealType(upperLeft.x + CGFloat(x) * scale),
        RealType(upperLeft.y - CGFloat(y) * scale))
      let iterations =
        MandelbrotMath.iterations(start: position,
                                  max: maxIterations)
      buffer[x + y * width] =
        palette.values[iterations % palette.values.count]
    }
  }
}
return bitmap.cgImage
``````

### Precision and performance

The Float Size control lets you pick which generic version gets called. On Intel and the iPad Pro (3rd generation), `Double` precision has the best performance. `Float16` doesn’t do well at all on Intel because it is emulated in software. Surprisingly, it doesn’t do that great on an actual device, either — all the conversions between `CGFloat` and `Float16` result in lower performance.

### Improving performance with SIMD

Can you make the rendering loop run faster and remain in pure Swift? Yes, you can.

``````
static func makeImageSIMD8_Float64(
  imageSize: CGSize,
  displayToModel: CGAffineTransform,
  maxIterations: Int,
  palette: PixelPalette
) -> CGImage? {
  // TODO: implement (3)
  nil
}
``````
``````
static func makeImageSIMD8_Float64(
  imageSize: CGSize,
  displayToModel: CGAffineTransform,
  maxIterations: Int,
  palette: PixelPalette
) -> CGImage? {
  typealias SIMDX = SIMD8
  typealias ScalarFloat = Float64
  typealias ScalarInt = Int64
  // Continued below
}
``````
``````
let width = Int(imageSize.width)
let height = Int(imageSize.height)

let scale = ScalarFloat(displayToModel.a)
let upperLeft = CGPoint.zero.applying(displayToModel)
let left = ScalarFloat(upperLeft.x)
let upper = ScalarFloat(upperLeft.y)
// Continued below
``````
``````
let fours = SIMDX(repeating: ScalarFloat(4))
let twos = SIMDX(repeating: ScalarFloat(2))
let ones = SIMDX<ScalarInt>.one
let zeros = SIMDX<ScalarInt>.zero
// Continued below
``````
``````
let bitmap = Bitmap<ColorPixel>(width: width, height: height) {
  width, height, buffer in
  // 1
  let scalarCount = SIMDX<Int64>.scalarCount
  // 2
  var realZ: SIMDX<ScalarFloat>
  var imaginaryZ: SIMDX<ScalarFloat>
  var counts: SIMDX<ScalarInt>
  // 3
  let initialMask = fours .> fours // all false
  // 4
  let ramp = SIMDX((0..<scalarCount).map {
    left + ScalarFloat($0) * scale })
  // 5
  for y in 0 ..< height {
    // Continue adding code here
  }
}
return bitmap.cgImage
``````
``````
let imaginary = SIMDX(repeating: upper - ScalarFloat(y) * scale)

for x in 0 ..< width / scalarCount {
  let real = SIMDX(repeating: ScalarFloat(x * scalarCount) * scale) + ramp
  realZ = .zero
  imaginaryZ = .zero
  counts = .zero

  // Continue adding code here
}
// Process remainder
``````
``````
var mask = initialMask  // accumulates the lanes that have escaped
// 1
for _ in 0..<maxIterations {
  // 2
  let realZ2 = realZ * realZ
  let imaginaryZ2 = imaginaryZ * imaginaryZ
  let realImaginaryTimesTwo = twos * realZ * imaginaryZ
  realZ = realZ2 - imaginaryZ2 + real
  imaginaryZ = realImaginaryTimesTwo + imaginary

  // 3
  let newMask = (realZ2 + imaginaryZ2) .>= fours

  // 4
  mask .|= newMask

  // 5
  let incrementer = ones.replacing(with: zeros, where: mask)
  if incrementer == SIMDX<ScalarInt>.zero {
    break
  }

  // 6
  counts &+= incrementer
}

// 7
let paletteSize = palette.values.count
for index in 0 ..< scalarCount {
  buffer[x * scalarCount + index + y * width] =
    palette.values[Int(counts[index]) % paletteSize]
}
``````
``````
let remainder = width % scalarCount
let lastIndex = width / scalarCount * scalarCount
for index in (0 ..< remainder) {
  let start = Complex(
    left + ScalarFloat(lastIndex + index) * scale,
    upper - ScalarFloat(y) * scale)
  var z = Complex<ScalarFloat>.zero
  var iteration = 0
  while z.lengthSquared <= 4 && iteration < maxIterations {
    z = z * z + start
    iteration += 1
  }
  buffer[lastIndex + index + y * width] =
    palette.values[iteration % palette.values.count]
}
``````

### Where are the limits?

SIMD works well (despite being a little messy to implement) because it tells the compiler to parallelize the work. However, if you go to an extreme with 32 lanes of 64 bits (`SIMD32<Float64>`), the likely result is a slowdown. The compiler can’t vectorize efficiently when the hardware lanes don’t exist. The type aliases used earlier make it easy to explore this space; on the hardware I had (Intel simulator, iPad Pro 3rd generation), `SIMD8<Float64>` (as above) works well.

## Ranges

Now, turn your attention to another important aspect of Swift numeric types that you’ve been using all along — ranges. Earlier, you saw that integers and floating-point types conform to the `Comparable` protocol. This conformance is crucial for supporting operations on ranges of numbers.

``````
enum Number: Comparable {
  case zero, one, two, three, four
}
``````
``````
let longForm =
  Range<Number>(uncheckedBounds: (lower: .one, upper: .three))
``````
``````
let shortForm = Number.one ..< .three
shortForm == longForm   // true
``````
``````
shortForm.contains(.zero)   // false
shortForm.contains(.one)    // true
shortForm.contains(.two)    // true
shortForm.contains(.three)  // false
``````
``````
let longFormClosed =
  ClosedRange<Number>(uncheckedBounds: (lower: .one, upper: .three))

let shortFormClosed = Number.one ... .three

longFormClosed == shortFormClosed  // true

shortFormClosed.contains(.zero)   // false
shortFormClosed.contains(.one)    // true
shortFormClosed.contains(.two)    // true
shortFormClosed.contains(.three)  // true
``````
``````
let r1 = ...Number.three       // PartialRangeThrough<Number>
let r2 = ..<Number.three       // PartialRangeUpTo<Number>
let r3 = Number.zero...        // PartialRangeFrom<Number>
``````

### Looping over a range

You might wonder if you can use these ranges in a `for` loop, such as:

``````
for i in 1 ..< 3 {
  print(i)
}
``````
``````
enum Number: Int, Comparable {
  static func < (lhs: Number, rhs: Number) -> Bool {
    lhs.rawValue < rhs.rawValue
  }

  case zero, one, two, three, four
}
``````
``````
``````
extension Number: Strideable {
  public func distance(to other: Number) -> Int {
    other.rawValue - rawValue
  }
  public func advanced(by n: Int) -> Number {
    Number(rawValue: (rawValue + n) % 5)!  // wrap around the five cases
  }
  public typealias Stride = Int
}
``````
``````
``````
typealias CountableRange<Bound> = Range<Bound>
  where Bound: Strideable, Bound.Stride: SignedInteger
``````
``````
for i in Number.one ..< .three {
  print(i)
}
``````

### Striding backward and at non-unit intervals

Ranges always require the lower and upper bounds to be ordered. What if you want to count backward?

``````
for i in (Number.one ..< .three).reversed() {
  print(i)
}
``````
``````
for i in stride(from: Number.two, to: .zero, by: -1) {
  print(i)
}

for i in stride(from: Number.two, through: .one, by: -1) {
  print(i)
}
``````

### Range expressions

If you’re writing a function that takes a range as an input, you might wonder which of the five flavors to use. A good option is to accept the `RangeExpression` protocol, which all the range types conform to. Diagrammed, it looks like this:

``````
func find<R: RangeExpression>(value: R.Bound, in range: R)
  -> Bool {
  range.contains(value)
}
``````
``````
find(value: Number.one, in: Number.zero ... .two) // true
find(value: Number.one, in: ...Number.two)        // true
find(value: Number.one, in: ..<Number.three)      // true
``````

## Key points

You’ve seen how Swift builds numeric types and ranges from the ground up using protocols and generics. Here are some key points to take away:

## Where to go from here?

Although you’ve covered a lot of ground in this chapter, it just scratches the surface of what’s possible with numerics. You can explore some of the corners of IEEE-754 by reading the Wikipedia article at:


© 2022 Razeware LLC
