
Aperture

Aperture is a new pipeline for Iris, set to be released with Iris 2.0.

  • It uses pipeline code written by shader developers in TypeScript, allowing for fully customizable packs.
  • It has a mod API that allows for mods to apply custom vertex formats, vertex transformations, and (basic) fragment warping without interfering with shader developers.
  • It has no limits on the number of textures, buffers, or programs you can have, how complex they can be, or how they are run.
  • It has a new shadow mapping system based on cascaded shadow maps, allowing shadow maps that reach far beyond what OptiFine packs can, without distortion.
  • Packs written for Aperture use a modern version of OpenGL that can be translated to other APIs in the future.

This serves as temporary documentation whilst Aperture is being developed.

pack.ts

pack.ts is where the shaderpack is configured. All textures and programs are registered from here. When instantiating objects like textures and programs, options can be configured using builder syntax. For example, to set the format of a texture, you would call .format(<FORMAT>) on it. Once you have finished configuring an object, the .build() function must be called. For example:

let sceneTex = new Texture("sceneTex")
    .format(Format.RGB16F)
    .clear(true)
    .build();

To see a list of builder functions that can be called on an object (and what can be passed to them), you can go to the type definition in iris.d.ts.

Registering Textures

Textures are registered in the setupShader function. They are instantiated by creating a new Texture object, or alternatively:

  • RawTexture (which reads from a binary file)
  • ArrayTexture (required for the shadow map)
  • PNGTexture (guess what this does)

Registering Programs

Shader programs are registered in the setupShader function with the registerShader function, which takes in one of the following:

  • ObjectShader (‘gbuffers’ style)
  • Composite (fullscreen pass)
  • Compute

ObjectShaders take a programUsage; these are predefined for you, for example TERRAIN_SOLID.

Composite and Compute shaders require a Stage argument passed to registerShader before the shader object itself. This defines when the shader runs - for example Stage.POST_RENDER is equivalent to a composite pass in the OptiFine format.

To add a vertex and fragment shader to your shader object, you can call .vertex(<PATH_TO_VERTEX>) and .fragment(<PATH_TO_FRAGMENT>).

To choose which buffers the fragment shader writes to, use .target(<index>, <texture>) - index is the index in your layout(location = <index>) definition. This essentially replaces RENDERTARGETS.
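
Putting this together, registering a fullscreen pass might look roughly like the sketch below. The exact constructor arguments, the placeholder file paths, and the assumption that registerShader takes the built object are not confirmed by this page - check iris.d.ts for the real signatures.

function setupShader() {
    // Texture for the composite pass to write to (same builder pattern as before)
    let sceneTex = new Texture("sceneTex")
        .format(Format.RGB16F)
        .clear(true)
        .build();

    // Fullscreen pass that runs after the world has rendered.
    // The Composite constructor argument and the shader paths are assumptions.
    let compositePass = new Composite("composite0")
        .vertex("programs/composite.vsh")
        .fragment("programs/composite.fsh")
        .target(0, sceneTex) // matches layout(location = 0) in the fragment shader
        .build();

    registerShader(Stage.POST_RENDER, compositePass);
}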

Object Shaders

Object shaders require special handling to allow for mod support.

Vertex

Vertex shaders should define two functions:

void iris_emitVertex(inout VertexData data)

Set data.clipPos as you would gl_Position. Usually this can just be iris_projectionMatrix * iris_modelViewMatrix * data.modelPos.

void iris_sendParameters(VertexData data)

This is where you can write to your out declarations.

Both of these functions make use of the VertexData struct.

struct VertexData {
    vec4 modelPos;     // model space position you are given
    vec4 clipPos;      // clip space position you must set
    vec2 uv;           // texture coordinate in the atlas
    vec2 light;        // lightmap texture coordinate - x is blocklight, y is skylight
    vec4 color;        // vertex color, equivalent to gl_Color
    vec3 normal;       // model space normal
    vec4 tangent;      // model space tangent
    vec4 overlayColor; // stuff like the red damage flash, equivalent to entityColor
    vec3 midBlock;     // equivalent to at_midBlock
    uint blockId;      // hooray
    uint textureId;    // ???
    float ao;          // vanilla ambient occlusion
};
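
A minimal object vertex shader using these two functions might look like the following sketch. The version directive, and the assumption that VertexData and the iris_* matrices are injected by the pipeline (so they are not declared here), should be double-checked against a real pack.

#version 450 core

out vec2 uv;
out vec4 vertexColor;

// VertexData, iris_projectionMatrix and iris_modelViewMatrix are assumed to be
// provided by the pipeline, so they are not declared in this sketch.
void iris_emitVertex(inout VertexData data) {
    // Transform the model space position to clip space, as you would set gl_Position
    data.clipPos = iris_projectionMatrix * iris_modelViewMatrix * data.modelPos;
}

void iris_sendParameters(VertexData data) {
    // Forward whatever the fragment shader needs through your out declarations
    uv = data.uv;
    vertexColor = data.color;
}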

Fragment

Fragment shaders are a lot simpler: you just define void iris_emitFragment() instead of void main().
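
A correspondingly minimal fragment shader could look like the sketch below; the output is wired up via .target() as described earlier, and sampling the block atlas is left out.

#version 450 core

in vec2 uv;
in vec4 vertexColor;

layout(location = 0) out vec4 outColor; // index matches the .target(0, ...) call in pack.ts

void iris_emitFragment() {
    // A real pack would sample the atlas here; this sketch just writes the vertex color
    outColor = vertexColor;
}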

Composite Shaders

Vertex

Since Aperture uses the core profile, ftransform() is not patched. Instead, the position of the vertex and the UV must be manually calculated as follows.

#version 450 core

out vec2 uv;

void main() {
    vec2 position = vec2(gl_VertexID % 2, gl_VertexID / 2) * 4.0 - 1.0;
    uv = (position + 1.0) * 0.5;
    gl_Position = vec4(position, 0.0, 1.0);
}

Buffers

The depth textures are depthTex and solidDepthTex, replacing depthtex0 and depthtex1. Note that depthTex is not available before translucents are rendered, so make sure to use solidDepthTex in this case.
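
Assuming the depth textures are exposed as ordinary named sampler uniforms (much like their OptiFine-format counterparts), reading them in a composite could look like this sketch:

uniform sampler2D depthTex;      // depth including translucents - not available before translucents render
uniform sampler2D solidDepthTex; // depth of solid geometry only

in vec2 uv;

// Reads the raw, non-linearized depth at the current fragment
float sampleSceneDepth(bool includeTranslucents) {
    return includeTranslucents ? texture(depthTex, uv).r : texture(solidDepthTex, uv).r;
}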

Uniforms

Matrices

Instead of gbufferModelView and gbufferProjection there are playerModelView and playerProjection. To access the previous frame's version, prefix last (maintaining camelCase); to access the inverse, suffix Inverse. For example, lastPlayerModelView and playerProjectionInverse.

The shadow matrices have not been renamed, although currently shadowModelViewInverse does not exist for some reason.

Uniforms which have not changed

  • isEyeInWater
  • eyeBrightness
  • shadowLightPosition
  • sunPosition
  • moonPosition
  • moonPhase
  • frameTime
  • worldTime
  • frameCounter
  • skyColor
  • fogColor
  • fogStart
  • fogEnd
  • rainStrength

Uniforms Which Have Changed

  • cameraPosition is now cameraPos. previousCameraPosition is likewise lastCameraPos.
  • far is now renderDistance
  • near is now nearPlane
  • sunAngle is now dayProgression
  • frameTimeCounter is now timeCounter
  • hideGUI is now guiHidden
  • viewWidth and viewHeight have been replaced with screenSize
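
For reference, a plausible set of declarations for the renamed uniforms is sketched below; the GLSL types are assumptions based on the OptiFine equivalents, so verify them before relying on this.

// Types are assumptions based on the OptiFine counterparts
uniform vec3 cameraPos;        // was cameraPosition
uniform vec3 lastCameraPos;    // was previousCameraPosition
uniform float renderDistance;  // was far
uniform float nearPlane;       // was near
uniform float dayProgression;  // was sunAngle
uniform float timeCounter;     // was frameTimeCounter
uniform bool guiHidden;        // was hideGUI
uniform vec2 screenSize;       // was viewWidth / viewHeight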

New Uniforms

  • farPlane, the actual far plane
  • shadowProjectionSize

Any uniforms not mentioned are probably not available yet.

Shadows

Aperture makes use of cascaded shadow maps, which means everything is a lot more complicated.

Setting Up the Shadow Pass

You do not need to declare the shadow map, as it is automatically defined. If you want any shadowcolor style buffers, declare them as an ArrayTexture.

Sampling the Shadow Map

Aperture defines two shadow maps.

  • shadowMap (equivalent to shadowtex0)
  • solidShadowMap (equivalent to shadowtex1)

Both of these should be defined as sampler2DArrays. To sample them, you pass the cascade as the z component of your coordinate, i.e.:

float shadow = step(shadowScreenPos.z, texture(shadowMap, vec3(shadowScreenPos.xy, cascade)).r);

Cascade 0 is the smallest, containing only a small area around the player, and cascade 3 is the largest.

Bear in mind that shadowProjection is also an array, because each cascade has a different projection. As such, to sample the shadow map, you must loop through each projection until your position is inside the frustum.

vec4 shadowClipPos;
int cascade;
for (cascade = 0; cascade < 4; cascade++) {
    shadowClipPos = shadowProjection[cascade] * shadowViewPos;
    if (clamp(shadowClipPos.xy, vec2(-1.0), vec2(1.0)) == shadowClipPos.xy) break;
}
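
Once a cascade has been found, the clip space position can be remapped to screen space and compared against the stored depth, roughly as below (the remap assumes the usual orthographic shadow projection, so no perspective divide is needed):

vec3 shadowScreenPos = shadowClipPos.xyz * 0.5 + 0.5; // NDC [-1, 1] -> texture space [0, 1]
float shadow = step(shadowScreenPos.z, texture(shadowMap, vec3(shadowScreenPos.xy, cascade)).r);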

Hardware Filtering

If you want hardware filtering, you can instead use a sampler2DArrayShadow. Due to reasons known only to the developers over at Khronos, the cascade is still the z component of the coordinate you pass in, so it becomes

float shadow = texture(shadowMap, vec4(shadowScreenPos.xy, cascade, shadowScreenPos.z));