Metal Tutorial with Swift 3 Part 3: Adding Texture
In part 3 of our Metal tutorial series, you will learn how to add textures to 3D objects using Apple’s built-in 3D graphics framework. By Andrew Kharchyshyn.
Contents
- Getting Started
- Reusing Uniform Buffers (optional)
- The Problem
- The Solution
- A Wild Race Condition Appears!
- Like A Ninja
- Performance Results
- Texturing
- Texture Coordinates
- Using Textures in Metal
- MetalTexture
- Handling Texture on the GPU
- Colorizing a Texture (Optional)
- Adding User Input
- Debugging Metal
- Fixing Drawable Texture Resizing
- Where To Go From Here?
Like A Ninja
Like an undisciplined ninja, the CPU lacks patience, and that's the problem. It's good that the CPU can encode commands so quickly, but it wouldn't hurt for it to wait a bit to avoid this race condition.
Fortunately, it’s easy to “train” the CPU to wait when the buffer it wants is still in use.
For this task you’ll use semaphores, a low-level synchronization primitive. Basically, a semaphore lets you track how many instances of a limited resource are available, and blocks when none are left.
Here’s how you’ll use a semaphore in this example:
- Initialize with the number of buffers. You’ll be using the semaphore to keep track of how many buffers are currently in use on the GPU, so you’ll initialize the semaphore with the number of buffers that are available (3 to start in this case).
- Wait before accessing a buffer. Every time you need to access a buffer, you’ll ask the semaphore to “wait”. If a buffer is available, you’ll continue running as usual (but decrement the count on the semaphore). If all buffers are in use, this will block the thread until one becomes available. This should be a very short wait in practice as the GPU is fast.
- Signal when done with a buffer. When the GPU is done with a buffer, you will “signal” the semaphore to track that it’s available again.
Note: To learn more about semaphores, check out this great explanation.
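Before touching the project, here's a minimal, self-contained sketch of the same pattern using plain Dispatch. The names here (fake GPU queue, counters) are illustrative only and not part of the tutorial project; the point is that the semaphore caps the number of buffers "in flight" at three.

```swift
import Dispatch

// Hypothetical stand-in for BufferProvider: three reusable buffers
// guarded by a counting semaphore.
let inflightBuffersCount = 3
let semaphore = DispatchSemaphore(value: inflightBuffersCount)

let counterQueue = DispatchQueue(label: "counter") // serializes counter access
var inUse = 0
var maxInUse = 0

let gpu = DispatchQueue(label: "fake.gpu")         // stands in for GPU completion

for _ in 0..<20 {
  semaphore.wait()                                  // claim a buffer; blocks if all 3 are busy
  counterQueue.sync {
    inUse += 1
    maxInUse = max(maxInUse, inUse)
  }
  gpu.async {                                       // the "GPU" finishes asynchronously...
    counterQueue.sync { inUse -= 1 }
    semaphore.signal()                              // ...and the buffer becomes available again
  }
}
gpu.sync {}                                         // drain the fake GPU queue
print(maxInUse)                                     // never exceeds 3
```

No matter how fast the loop encodes work, `maxInUse` never rises above the semaphore's initial value, which is exactly the guarantee you want for the uniform buffers.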
This will make more sense in code than in prose. Go to BufferProvider.swift and add the following property:
var avaliableResourcesSemaphore: DispatchSemaphore
Now add this to the top of init:
avaliableResourcesSemaphore = DispatchSemaphore(value: inflightBuffersCount)
Here you create your semaphore with an initial count equal to the number of available buffers.
Now open Node.swift and add this at the top of render:
_ = bufferProvider.avaliableResourcesSemaphore.wait(timeout: DispatchTime.distantFuture)
This will make the CPU wait when bufferProvider.avaliableResourcesSemaphore has no free resources.
Now you need to signal the semaphore when the resource becomes available.
While you’re still in render, find:
let commandBuffer = commandQueue.makeCommandBuffer()
And add this below:
commandBuffer.addCompletedHandler { (_) in
  self.bufferProvider.avaliableResourcesSemaphore.signal()
}
When the GPU finishes rendering, it executes the completion handler, which signals the semaphore and bumps its count back up.
Also in BufferProvider.swift, add this method:
deinit {
  for _ in 0..<self.inflightBuffersCount {
    self.avaliableResourcesSemaphore.signal()
  }
}
deinit simply does a little cleanup before the object is deleted. Without this, your app would crash if BufferProvider were deleted while the semaphore was still waiting.
Build and run. Everything should work as before — ninja style!
Performance Results
You must be eager to see if there’s been any performance improvement. As you did before, open the Debug Navigator tab and select the FPS row.
These are my stats: the CPU frame time decreased from 1.7ms to 1.2ms. It looks like a small win, but the more objects you draw, the more those savings add up. Note that your actual results will depend on the device you’re using.
Texturing
So, what are textures? Simply put, textures are 2D images that are typically mapped to 3D models.
Think about a real-life object, such as an orange. How would the orange’s texture look in Metal? Probably something like this:
If you wanted to render an orange, you’d first create a sphere-like 3D model, then you would use a texture similar to the one above, and Metal would map it.
Texture Coordinates
Unlike OpenGL, where texture coordinates originate in the bottom-left corner, Metal’s textures originate in the top-left corner. Standards — aren’t they great?
Here’s a sneak peek of the texture you’ll use in this tutorial.
With 3D graphics, it’s typical to see the texture coordinate axes marked with the letter s for horizontal and t for vertical, just like in the image above.
To differentiate between iOS device pixels and texture pixels, you’ll refer to texture pixels as texels.
Your texture is 512×512 texels. In this tutorial, you’ll use normalized coordinates, which means that coordinates within the texture always fall in the range 0 to 1. Therefore:
- The top-left corner has the coordinates (0.0, 0.0)
- The top-right corner has the coordinates (1.0, 0.0)
- The bottom-left corner has the coordinates (0.0, 1.0)
- The bottom-right corner has the coordinates (1.0, 1.0)
When you map this texture to your cube, normalized coordinates will be important to understand.
Using normalized coordinates isn’t mandatory, but it has some advantages. For example, say you want to swap the texture for one with a resolution of 256×256 texels. If you use normalized coordinates, it’ll “just work”, as long as the new texture maps correctly.
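To see why this "just works", here's a quick sketch in plain Swift. The helper function is hypothetical (not part of the tutorial project); it just converts a normalized (s, t) pair into concrete texel coordinates for a given texture size.

```swift
// Convert a normalized (s, t) coordinate into concrete texel
// coordinates for a texture of the given size.
func texel(s: Float, t: Float, width: Int, height: Int) -> (x: Int, y: Int) {
  return (Int(s * Float(width)), Int(t * Float(height)))
}

// The same normalized coordinate lands in the same relative spot
// regardless of resolution:
print(texel(s: 0.5, t: 0.25, width: 512, height: 512)) // (x: 256, y: 128)
print(texel(s: 0.5, t: 0.25, width: 256, height: 256)) // (x: 128, y: 64)
```

A vertex tagged with (0.5, 0.25) samples the same relative part of the image whether the texture is 512×512 or 256×256, which is why swapping textures needs no coordinate changes.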
Using Textures in Metal
In Metal, a texture is represented by any object that conforms to the MTLTexture protocol. There are many texture types in Metal, but for now all you need is a standard 2D texture (MTLTextureType.type2D).
Another important protocol is MTLSamplerState. An object that conforms to this protocol tells the GPU how to sample the texture. When you pass a texture, you’ll pass its sampler as well; when you have multiple textures that need to be treated similarly, you can reuse the same sampler.
Here is a small visual to help illustrate how you’ll work with textures:
For your convenience, the project file contains a special, handcrafted class named MetalTexture that holds all the code to create an MTLTexture from an image file in the bundle.
Note: I won’t delve into it here, but if you want to learn how to create an MTLTexture from scratch, refer to this post on MetalByExample.com.
MetalTexture
Now that you understand how this will work, it’s time to bring this texture to life. Download and copy MetalTexture.swift to your project and open it.
There are two important methods in this file. The first is:
init(resourceName: String, ext: String, mipmaped: Bool)
Here you pass the name of the file and its extension, and you also indicate whether you want mipmaps.
But wait, what’s a mipmap?
When mipmaped is true, the texture loads an array of images instead of a single image, and each image in the array is half the size of the previous one. The GPU automatically selects the best mip level to read texels from.
The other method to note is this:
func loadTexture(device: MTLDevice, commandQ: MTLCommandQueue, flip: Bool)
This method is called when MetalTexture actually creates the MTLTexture. To create this object, you need a device object (just as you do with buffers). You also pass in an MTLCommandQueue, which is used when the mipmap levels are generated. Textures are usually loaded upside down, so the method also has a flip parameter to deal with that.
Okay — it’s time to put it all together.
Open Node.swift, and add two new variables:
var texture: MTLTexture
lazy var samplerState: MTLSamplerState? = Node.defaultSampler(device: self.device)
For now, Node holds just one texture and one sampler.
Now add the following method to the end of the file:
class func defaultSampler(device: MTLDevice) -> MTLSamplerState {
  let sampler = MTLSamplerDescriptor()
  sampler.minFilter             = MTLSamplerMinMagFilter.nearest
  sampler.magFilter             = MTLSamplerMinMagFilter.nearest
  sampler.mipFilter             = MTLSamplerMipFilter.nearest
  sampler.maxAnisotropy         = 1
  sampler.sAddressMode          = MTLSamplerAddressMode.clampToEdge
  sampler.tAddressMode          = MTLSamplerAddressMode.clampToEdge
  sampler.rAddressMode          = MTLSamplerAddressMode.clampToEdge
  sampler.normalizedCoordinates = true
  sampler.lodMinClamp           = 0
  sampler.lodMaxClamp           = FLT_MAX
  return device.makeSamplerState(descriptor: sampler)
}
This method generates a simple texture sampler that basically holds a bunch of flags. Here you’ve enabled “nearest-neighbor” filtering, which is faster than “linear”, as well as “clamp to edge” addressing, which tells Metal how to handle out-of-range coordinates. You won’t have out-of-range values in this tutorial, but it’s always smart to code defensively.
Find the following code in render:
renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, at: 0)
And add this below it:
renderEncoder.setFragmentTexture(texture, at: 0)
if let samplerState = samplerState {
  renderEncoder.setFragmentSamplerState(samplerState, at: 0)
}
This simply passes the texture and sampler to the shaders. It’s similar to what you did with vertex and uniform buffers, except that now you pass them to a fragment shader because you want to map texels to fragments.
Now you need to modify init. Change its declaration so it matches this:
init(name: String, vertices: Array<Vertex>, device: MTLDevice, texture: MTLTexture) {
Now find this:
vertexCount = vertices.count
And add this just below it:
self.texture = texture
Each vertex needs to map to some coordinates on the texture. So open Vertex.swift and replace its contents with the following:
struct Vertex {
  var x, y, z: Float     // position data
  var r, g, b, a: Float  // color data
  var s, t: Float        // texture coordinates

  func floatBuffer() -> [Float] {
    return [x, y, z, r, g, b, a, s, t]
  }
}
This adds two floats that hold texture coordinates.
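Each vertex now serializes to nine floats rather than seven, which matters when you size the vertex buffer. A quick check in plain Swift, mirroring the struct above:

```swift
// Mirror of the tutorial's Vertex struct, for a standalone size check.
struct Vertex {
  var x, y, z: Float     // position data
  var r, g, b, a: Float  // color data
  var s, t: Float        // texture coordinates

  func floatBuffer() -> [Float] {
    return [x, y, z, r, g, b, a, s, t]
  }
}

let v = Vertex(x: 0, y: 0, z: 0, r: 1, g: 1, b: 1, a: 1, s: 0.5, t: 0.5)
let floatsPerVertex = v.floatBuffer().count
print(floatsPerVertex)                            // 9
print(floatsPerVertex * MemoryLayout<Float>.size) // 36 bytes per vertex
```

Any code that computes buffer lengths or strides from the per-vertex float count picks up the two extra texture coordinates automatically, since it goes through floatBuffer().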
Now open Cube.swift, and change init so it looks like this:
init(device: MTLDevice, commandQ: MTLCommandQueue) {
  // 1
  // Front
  let A = Vertex(x: -1.0, y: 1.0, z: 1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.25, t: 0.25)
  let B = Vertex(x: -1.0, y: -1.0, z: 1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.25, t: 0.50)
  let C = Vertex(x: 1.0, y: -1.0, z: 1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.50, t: 0.50)
  let D = Vertex(x: 1.0, y: 1.0, z: 1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.50, t: 0.25)

  // Left
  let E = Vertex(x: -1.0, y: 1.0, z: -1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.00, t: 0.25)
  let F = Vertex(x: -1.0, y: -1.0, z: -1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.00, t: 0.50)
  let G = Vertex(x: -1.0, y: -1.0, z: 1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.25, t: 0.50)
  let H = Vertex(x: -1.0, y: 1.0, z: 1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.25, t: 0.25)

  // Right
  let I = Vertex(x: 1.0, y: 1.0, z: 1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.50, t: 0.25)
  let J = Vertex(x: 1.0, y: -1.0, z: 1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.50, t: 0.50)
  let K = Vertex(x: 1.0, y: -1.0, z: -1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.75, t: 0.50)
  let L = Vertex(x: 1.0, y: 1.0, z: -1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.75, t: 0.25)

  // Top
  let M = Vertex(x: -1.0, y: 1.0, z: -1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.25, t: 0.00)
  let N = Vertex(x: -1.0, y: 1.0, z: 1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.25, t: 0.25)
  let O = Vertex(x: 1.0, y: 1.0, z: 1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.50, t: 0.25)
  let P = Vertex(x: 1.0, y: 1.0, z: -1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.50, t: 0.00)

  // Bottom
  let Q = Vertex(x: -1.0, y: -1.0, z: 1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.25, t: 0.50)
  let R = Vertex(x: -1.0, y: -1.0, z: -1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.25, t: 0.75)
  let S = Vertex(x: 1.0, y: -1.0, z: -1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.50, t: 0.75)
  let T = Vertex(x: 1.0, y: -1.0, z: 1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.50, t: 0.50)

  // Back
  let U = Vertex(x: 1.0, y: 1.0, z: -1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.75, t: 0.25)
  let V = Vertex(x: 1.0, y: -1.0, z: -1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.75, t: 0.50)
  let W = Vertex(x: -1.0, y: -1.0, z: -1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 1.00, t: 0.50)
  let X = Vertex(x: -1.0, y: 1.0, z: -1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 1.00, t: 0.25)

  // 2
  let verticesArray: Array<Vertex> = [
    A,B,C ,A,C,D,  // Front
    E,F,G ,E,G,H,  // Left
    I,J,K ,I,K,L,  // Right
    M,N,O ,M,O,P,  // Top
    Q,R,S ,Q,S,T,  // Bottom
    U,V,W ,U,W,X   // Back
  ]

  // 3
  let texture = MetalTexture(resourceName: "cube", ext: "png", mipmaped: true)
  texture.loadTexture(device: device, commandQ: commandQ, flip: true)

  super.init(name: "Cube", vertices: verticesArray, device: device, texture: texture.texture)
}
Taking each numbered comment in turn:
- As you create each vertex, you also specify its texture coordinate. To understand this better, study the following image, and make sure you understand the s and t values of each vertex. Note that you need to create vertices for each side of the cube individually rather than reusing them, because the texture coordinates would not match up correctly otherwise. It’s okay if adding extra vertices is a little confusing at this stage — your brain will grasp it soon enough.
- Here you form triangles, just as you did in part two of this tutorial series.
- You create and load the texture using the MetalTexture helper class.
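As a quick sanity check on the geometry above (plain Swift, illustrative only): six faces with four unduplicated corners each gives 24 distinct vertices, which the triangle list expands to 36 entries.

```swift
let facesCount = 6
let cornersPerFace = 4   // A–D per face; corners aren't shared across faces
let verticesPerFace = 6  // two triangles per face, three vertices each

print(facesCount * cornersPerFace)   // 24 distinct vertices (A through X)
print(facesCount * verticesPerFace)  // 36 entries in verticesArray
```

Compare this with a cube that shares corner vertices: that version needs only 8 vertices, but each shared vertex could carry only one (s, t) pair, so the texture could not be mapped correctly onto every face.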
Since you aren’t drawing triangles anymore, delete Triangle.swift.