TextureAssign for Unity

Ever wondered why every Substance Painter to Unity workflow tutorial shows the Unity scene with all textures already applied? Because this process is incredibly tedious and time-consuming. Each material has four or five textures, each of which you need to find and then drag and drop onto the material.

The TextureAssign editor script solves this problem by automating the assignment of textures to materials in Unity. It works by name-matching textures with materials.
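To illustrate the name-matching idea, here is a small Python sketch (the actual script is a Unity editor script in C#; the naming convention and the shader slot names below are my own assumptions, not necessarily what TextureAssign uses):

```python
# Sketch of the name-matching idea behind TextureAssign (illustrative only).
# Assumed convention: textures are named "<MaterialName>_<MapType>", e.g.
# "Throttle_BaseColor.png". The suffix-to-slot table is hypothetical.

SLOT_FOR_SUFFIX = {
    "BaseColor": "_MainTex",
    "Normal": "_BumpMap",
    "MetallicSmoothness": "_MetallicGlossMap",
    "Emissive": "_EmissionMap",
}

def match_textures(materials, textures):
    """Return {material: {shader_slot: texture_file}} by name matching."""
    assignments = {m: {} for m in materials}
    for tex in textures:
        stem = tex.rsplit(".", 1)[0]            # drop the file extension
        name, _, suffix = stem.rpartition("_")  # split "<Material>_<MapType>"
        if name in assignments and suffix in SLOT_FOR_SUFFIX:
            assignments[name][SLOT_FOR_SUFFIX[suffix]] = tex
    return assignments

result = match_textures(
    ["Throttle", "Panel"],
    ["Throttle_BaseColor.png", "Throttle_Normal.png", "Panel_Emissive.png"],
)
```

In Unity, the equivalent loop would call `material.SetTexture(slot, texture)` for each match instead of building a dictionary.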


Get it here:


A320 cockpit CAD model renders

Here are some new renders I made from the A320 cockpit CAD model. It now includes all panel text, which is embossed into the geometry. View the screenshots at full size by clicking on the thumbnail and then selecting “view full size” on the bottom right.

The CAD model is available for purchase. Please use the contact form at www.airlinetools.com for inquiries.

Using sprites for button states

Changing the state of a button in Unity (ON, OFF, FAULT, etc.) can be done in a few different ways. The easiest is to make a few different materials and change the material at runtime. However, this is not good for performance.

There is a better way: use a texture atlas (sprite sheet) and shift the UVs. This way the operation runs entirely on the GPU, which is many times faster. It is not as easy to set up though, so here is a detailed description of how to do this.

First you need to render separate emissive textures for each button state. In most cases there are 4 states: no lights, ON light, FAULT light, ON and FAULT light. This is great because each texture can be stored in a corner of the main texture making it both efficient and easy to set up.

The textures can be rendered in Substance Painter by creating a layer for each button state, each with different emissive materials placed using the ID map. Then disable all emissive layers except one and export the textures. Rename the emissive texture and export again with another button state enabled. Do this for each button state until you have 4 separate textures. Click on the thumbnail for a better view. I wrote a plugin for Substance Painter which makes exporting the textures easier. You can find it here:


Once the plugin is installed, just press on the “Export Emissive” button and it will automatically save the emissive channel and rename the texture as necessary.

Below you can see the 4 exported emissive channel textures. Note that they are oriented on their side. This is due to the automatic UV unwrapping from Unwrella. It might not look nice, but it is completely irrelevant in our workflow.


The next step is to place each of the 4 textures in the corner of a new, bigger texture. This can be done in Gimp using a plugin called “fuse layers”. You can find the plugin here:

Place the plugin in the following directory:
C:\Program Files\GIMP 2\share\gimp\2.0\scripts\

Once the plugin is installed, fuse the 4 textures into a single one:

-File->New-> set the same resolution as the input images. The resolution of the final image will be increased automatically.
-Set Image->Mode to RGB.
-File->Open as Layers-> select all 4 images.
-Delete the background layer.
-Filters->Combine->Fuse layers. Set x = 2.
-To save, use File->Export.
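The fuse operation simply places the four equally sized images into the quadrants of one image twice the size. As a sketch of the layout (pure Python, with nested lists standing in for pixel grids; a real pipeline would use Gimp or an image library):

```python
# Sketch of what "fuse layers" with x = 2 does: place four equally sized
# images into the quadrants of one image twice the size.

def fuse_quadrants(bl, br, tl, tr):
    """Fuse four H x W pixel grids into one 2H x 2W grid (2 x 2 atlas)."""
    h, w = len(bl), len(bl[0])
    atlas = [[0] * (2 * w) for _ in range(2 * h)]
    placements = [  # (image, row offset, column offset)
        (tl, 0, 0), (tr, 0, w),  # top row of the atlas
        (bl, h, 0), (br, h, w),  # bottom row of the atlas
    ]
    for img, row_off, col_off in placements:
        for r in range(h):
            for c in range(w):
                atlas[row_off + r][col_off + c] = img[r][c]
    return atlas

# Four 2x2 "textures", one per button state, filled with a state id 0..3.
states = [[[s] * 2 for _ in range(2)] for s in range(4)]
atlas = fuse_quadrants(*states)
```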

Now we have a single texture containing a button state in each corner:


This texture can’t be used as-is because the UV mapping needs to be changed in code. This is what happens when you apply the texture in Unity without any UV modifications:


To fix this, the Unity standard shader needs to be modified. Here is how to do that:

-Download the Unity built-in shaders.

-Copy “Standard.shader”, rename it to “StandardShift.shader”, and put it in your project in a folder called Shaders.

-Copy the following files and put them into the same Shaders folder in your project. Note that only the file “UnityStandardInput.cginc” from this list will be modified, but all the other files are needed as well, otherwise it won’t work.


-Open StandardShift.shader.
-Modify the line —–Shader “Standard”—– at the beginning of the shader to:

Shader "StandardShift"

-Place this code below the line —–_DetailNormalMap(“Normal Map”,—–

_EmissionTileOffset("EmissionTileOffset", Vector) = (1,1,0,0)

Note: because WordPress changes straight quotes to curly quotes (“), you might not be able to find a line of code using copy-paste search. Just search for a single word instead.

-Open UnityStandardInput.cginc.
-Place this code just below the line —–sampler2D  _EmissionMap;—–

half4   _EmissionTileOffset;

-Search for this function:
—–half3 Emission(float2 uv)—–
-Place this code just above the line —–return tex2D(…—–

uv.x *= _EmissionTileOffset.x;
uv.y *= _EmissionTileOffset.y;
uv.x += _EmissionTileOffset.z;
uv.y += _EmissionTileOffset.w;

-Create a material and set the shader to StandardShift.
-Add the material to an object.
-Place the texture with the 4 button states in the emissive slot.
-Create a script and add some code to change the button state using SetVector(). Here is an example:

//The vector format is: Tile X, Tile Y, Offset X, Offset Y
Renderer rend = GetComponent<Renderer>();
//Bottom left.
rend.material.SetVector("_EmissionTileOffset", new Vector4(0.5f, 0.5f, 0f, 0f));
//Bottom right.
rend.material.SetVector("_EmissionTileOffset", new Vector4(0.5f, 0.5f, 0.5f, 0f));
//Top left.
rend.material.SetVector("_EmissionTileOffset", new Vector4(0.5f, 0.5f, 0f, 0.5f));
//Top right.
rend.material.SetVector("_EmissionTileOffset", new Vector4(0.5f, 0.5f, 0.5f, 0.5f));
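The four SetVector calls above differ only in the offset half of the vector. The pattern can be expressed as a small helper; here is an illustrative Python sketch (the state numbering 0–3 and the function name are my own, not from the script):

```python
# The four vectors above follow one pattern: tile = 0.5 on both axes, and
# the offset selects a quadrant. Hypothetical state numbering:
# 0 = bottom left, 1 = bottom right, 2 = top left, 3 = top right.

def emission_tile_offset(state):
    """Return (tile_x, tile_y, offset_x, offset_y) for a 2 x 2 atlas."""
    offset_x = 0.5 * (state % 2)   # right column for odd states
    offset_y = 0.5 * (state // 2)  # top row for states 2 and 3
    return (0.5, 0.5, offset_x, offset_y)
```

In the Unity script, the returned tuple would become the `Vector4` passed to `SetVector("_EmissionTileOffset", ...)`.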

Now we can cycle through the different button states using a script in Unity. Here is the result:

If it doesn’t work, make sure all required files are copied to the Shader folder. Then go to Unity->Assets->Reimport All. After that, select the StandardShift shader -> Inspector -> compile and show code.

Update: added a fix so it now works in Unity 5.5+; tested with Unity 2017.3.

Inventor for Game Design #2

I released the InvToSP script along with a few tutorials which describe how to get an Autodesk Inventor model into Substance Painter for texturing.

Here is a playlist of the tutorials:

Script features:
-Significant workflow speedup.
-Automatically import high poly and low poly from supplied directory.
-Fix flipped faces using automated re-import.
-Dynamically adjust mesh resolution.
-Fuse objects (collapse, attach) without destroying the explicit normals.
-Supports Unwrella for automatic unwrapping.
-Convert materials to FBX compatible materials.
-Name modifying for Substance Painter “match by name” baking.
-Add nearby objects to high res model for AO baking.
-Assemble standalone parts into final model.

You can find the script and manual here:

Rendering light sources

The funny thing is that there are thousands of references available on how a light affects an object, but the references on how the light itself looks can be counted on one hand. I once found a scientific paper, but that’s about it. Perhaps that is why very few people get it right. Often you see an emissive sphere with a flare sprite slapped on top of it, but that is a far cry from a physically based approach, which I will describe here.

Most lights have a lens, which makes them either highly directional like a flashlight, or horizontally directional, the result of a cylindrical Fresnel lens. This directional behavior is simulated with a phase function which shows nicely on a polar graph. Here you can see two common light radiation patterns:


The blue graph has the function 1 + cos(theta*2) where theta is the angle between the light normal and the vector from the light to the camera. The output of the function is the irradiance. Adding this to the shader gives the lights a nice angular effect.
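For reference, the phase function is cheap to evaluate; a small Python sketch (the shader version would be the same one-liner in HLSL):

```python
import math

# The blue radiation pattern from the polar graph: irradiance as a function
# of theta, the angle between the light normal and the light-to-camera vector.
def phase(theta):
    """Phase function 1 + cos(2 * theta): peaks on-axis, zero side-on."""
    return 1.0 + math.cos(2.0 * theta)

head_on = phase(0.0)          # brightest, directly on the light's axis
side_on = phase(math.pi / 2)  # fades out completely at 90 degrees off-axis
```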


Next is the attenuation. Contrary to popular belief, focused lights (in the extreme case, lasers) still attenuate with the inverse square law, as described here:

But contrary to even popular scientific belief, lights themselves don’t behave in quite the same way, or at least not perceptually. The inverse square law states that the intensity is inversely proportional to the square of the distance. Because of this:


You see this reference all over, for example here:


Yet the light itself is brighter than bar number 4, which is at about the same distance from the camera as the light. The light itself doesn’t seem to attenuate with the inverse square law. So why is this? It turns out that in order to model high gain light sources (such as directional lights), you need to place the source location far behind the actual source location. Then you can apply the inverse square law like this:


Note that highly directional lights have a very flat attenuation curve, which can be approximated with a linear function if needed in order to save GPU cycles.
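A minimal sketch of this offset-source attenuation, assuming a single `gain_offset` parameter (my own name, not from the shader) that pushes the virtual source behind the visible one:

```python
# Inverse-square attenuation with the source placed a distance `gain_offset`
# behind its visible location, as described above. High-gain (directional)
# lights get a large offset, which flattens the perceived falloff curve.

def attenuation(distance, gain_offset):
    """Relative intensity, normalized to 1.0 at distance 0."""
    d = distance + gain_offset
    return (gain_offset / d) ** 2 if gain_offset > 0 else 1.0 / (d * d)

# An omnidirectional light (small offset) dims quickly over 10 units;
# a highly directional light (large offset) barely dims at all.
omni = attenuation(10.0, 1.0)            # ~ (1/11)^2, under 1% left
directional = attenuation(10.0, 1000.0)  # ~ (1000/1010)^2, nearly unchanged
```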

Some more reading about the subject here (chapter Validity of the Inverse Square Law):

One other problem is that the light will disappear if it gets too far from the camera. This is the result of the light becoming smaller than one pixel. That is fine for normal objects, but not for lights, because even extremely distant or small lights are easily visible in real life, a star for example. It would be nice if we had a programmable rasterizer, but so far no luck. Instead, I scale the lights up when they are smaller than one pixel, so they keep the same screen size. Together with the attenuation, this gives a very realistic effect. And all of this is done in the shader, so it is very fast: about 0.4 ms for 10,000 lights on a 780 Ti.
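The sub-pixel scaling idea can be sketched as follows (illustrative Python with a simplified pinhole model; the names and the small-angle approximation are my own, not the actual shader code):

```python
# Keep a distant light from vanishing: if its projected size would fall
# below one pixel, scale the geometry up so its screen size stays constant.
# Small-angle pinhole model: angular size ~ world_size / distance, with
# `pixel_angle` the angle one pixel subtends. All names are hypothetical.

def light_scale(world_size, distance, pixel_angle):
    """Scale factor so the light covers at least one pixel on screen."""
    projected = world_size / distance  # angular size of the light
    if projected >= pixel_angle:
        return 1.0                               # big enough, leave it alone
    return pixel_angle * distance / world_size   # grow it to exactly one pixel

near = light_scale(0.2, 10.0, 0.001)   # already larger than a pixel: unchanged
far = light_scale(0.2, 5000.0, 0.001)  # far smaller than a pixel: scaled up 25x
```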

Since I made this system for a flight simulator, I included some specific lights you find in aviation, like walking strobe lights (also done entirely in the shader):


And PAPI lights, which are a bit of a corner case. They radiate light in a split pattern like this (used by pilots to see if they are high or low on the approach):


Simulated here, also entirely in the shader.


Normally there are only 4 of these lights in a row, but here are 10,000, just for the fun of it. They have a small transition where the colors are blended (just like in reality), which you won’t find in any simulator product, not even multi-million-dollar professional simulators. That’s a simple lerp() by the way.

I should also note that the shaders don’t use any conditional if-else statements but use lerp, clamp, and scaling trickery instead. So it plays nice even on low-end hardware.
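The lerp/clamp approach can be sketched outside the shader. Below is an illustrative Python version (the function names, the 3-degree split angle, and the blend width are my own assumptions, not taken from the actual shader): a smoothstep-style weight replaces the if/else, and a lerp blends the PAPI colors across the transition.

```python
# Branch-free selection as used in shaders: instead of `if`, build a
# smoothly blended 0..1 weight and lerp between the two results.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def lerp(a, b, t):
    return a + (b - a) * t

def smoothstep_weight(edge0, edge1, x):
    """0 below edge0, 1 above edge1, smooth in between (like HLSL smoothstep)."""
    t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

# PAPI-style blend: white above the glide path angle, red below, with a
# narrow blended transition instead of an if/else (angles in degrees;
# the 3.0-degree split and 0.1-degree blend width are assumed values).
def papi_red_amount(angle, split=3.0, blend=0.1):
    return 1.0 - smoothstep_weight(split - blend, split + blend, angle)

def papi_color(angle):
    red = papi_red_amount(angle)
    white_rgb, red_rgb = (1.0, 1.0, 1.0), (1.0, 0.0, 0.0)
    return tuple(lerp(w, r, red) for w, r in zip(white_rgb, red_rgb))

high = papi_color(3.5)  # well above the split: pure white
low = papi_color(2.5)   # well below the split: pure red
```

On a GPU, every fragment runs the same instructions regardless of the angle, which is exactly why this pattern is cheaper than divergent branches.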

Released for free, but with limited support:

Controls: same as in the editor.
Speed up/slow down: 1 and 2 (not numpad)
Bloom is SE natural bloom.
It looks best in fullscreen.
Note that the frame rate is much higher in a standalone build.


Hole in a curved surface

Previously, adding a hole in a complex curved surface was not easy to do in Inventor. With the new Curve On Face feature in Inventor 2017, this has become much easier. This is how it’s done:

-Start a 3D sketch.
-Select “Curve On Face”.
-Draw a 3-point spline on the surface, with a 4th point connecting to the first one to close the loop.

hole on surface

-Finish the 3D sketch.
-Create a plane out of 3 points, using the 3 points from the spline.

hole on surface 2

-Create a 2D sketch on the plane.
-Add a circle on the 2D sketch and place it in the center of the spline points. The circle will be used to extrude the hole.
-Finish the 2D sketch.
-Create an axis by selecting “Normal to Plane through Point” and select the mid point and the work plane. This axis can be used later to align a screw.

hole on surface 3

-Add a work point to the center point. This will be used later to place a screw.
-Extrude the circle with a cut operation. You might have to extrude it both ways to create a clean cut.

hole on surface 4

-Hide the 3D sketch and the work plane, leaving only the hole, work point, and the work axis behind.

hole on surface 5

-Now you can add a screw, rod, etc. in an assembly.

hole on surface 6

A320 CAD design

Here are some screenshots of the finished A320 cockpit CAD model. The CAD model is available for purchase. Please use the contact form at www.airlinetools.com for inquiries.

Experimental texturing of a monitor in Substance Painter:

monitor 2

Test of leather material in Substance Painter.


Test of leather material (armrest) in Substance Painter:

leather 2

Throttle unit. The text is embossed into the geometry:

full 9

Some more screenshots:

full 8 full 7 full 6 full 5 full 4 full 3 full 2 full 1

CAD conversion pitfalls

When editing imported CAD data, there are several things to be aware of. CAD data uses explicit (custom) normals. Smoothing groups are not used at all. It is important that all explicit normals are maintained when editing the mesh, otherwise you are likely to get nasty shading errors.

There are three ways to check for shading errors:

-Check the normals.
-Check the shading look.
-Check for any back facing triangles using Views->xView->Face Orientation (shown in green).

The only way to view explicit normals is to add an “Edit Normals” modifier to the top of the stack, then adjust the Display Length (length of the vectors). The explicit normals are shown in green. Blue lines are unspecified normals which are calculated using averaging and you do not want to see those in a CAD model.

Unfortunately, 3ds Max is very buggy when it comes to maintaining explicit normals, and it is not very helpful in displaying any resulting shading errors. The main problems are:

-The way the default viewport shading works makes incorrect normals not very apparent.
-Some operations dump the explicit normals and replace them with unspecified normals.
-Some operations maintain the explicit normals but rotate them in some random direction.
-Some operations create double triangles (back facing and forward facing), which go undetected unless you view the vertex/triangle count.
-Importing an Inventor model as a body object will create flipped faces on mirrored features and in some cases even removes faces completely.
-Flipped triangles go undetected unless you use xView.

I wrote a script called InvToSP (Inventor to Substance Painter) which fixes all of these issues. You can find it here:

Here are some things to keep in mind if you want to do the conversion manually.

Some surfaces appear to be watertight but are actually separate surfaces. To fix this, you need to weld the vertices of all objects. This is done in order to reduce vertex count and to make sure the derived projection cage is correct, in order to prevent texture baking errors. There are several ways to merge vertices, but all of them destroy explicit normals, except this method:

-Convert the object to an Editable_Mesh.
-Add a ProOptimizer modifier with the following settings:
*Keep Normals = on
*Protect Normals = on
*Merge Vertices = on
*Threshold = 0.0
-Press Calculate on the ProOptimizer modifier.
-Collapse the stack. Note that this is important because if there are a lot of objects and you do the above steps in a script loop, ProOptimizer WILL fail without warning.

If Inventor objects (and possibly other CAD files) are imported into 3ds Max as a body object, some mirrored features will have back facing triangles. The only reliable way to fix flipped faces is to re-import the Inventor model as a mesh instead of a body object. But this triggers another bug in 3ds Max: the wireframe can’t be made visible. To work around this, add a ProOptimizer modifier to the object and configure the settings as above. Then press Calculate, and the wireframe will be visible if enabled.

When the CAD file is exported to an FBX file, all objects will have a transform which is not set at zero. Unity imports this correctly, but unfortunately Substance Painter does not, and it is unlikely they will fix this. A workaround is to use a script called “reset_Xform_on_selected.ms”, available here:

In order to reduce draw calls, static objects need to be fused into a single object wherever practical. There are several ways to do this, but all of them destroy explicit normals, except this method:

-Execute the script “reset_Xform_on_selected.ms” on each object which is to be fused. This prevents the explicit normals from pointing in the wrong direction after the mesh is fused.
-Convert all objects which need to be fused to an Editable_Poly. This prevents the explicit normals from being converted into unspecified normals.
-Select the first object which need to be fused.
-On the Editable Poly modifier, on the Edit Geometry rollout, select the little icon next to the Attach button (Attach List). Or click on the Attach button and then click on a second object you want to attach.

If you detach a surface, the normals will point in the wrong direction after you fuse the new object with another object. In order to prevent this from happening, you need to add an Edit Normals modifier to the detached object and make the normals explicit, then collapse the stack. Now you can fuse the detached object with another object.

If you want to manually flip a surface, do this to prevent the loss of the explicit normals:
-Reset the Xform using “reset_Xform_on_selected.ms”.
-Convert it to an Editable Poly.
-Add an Edit Normals modifier.
-Select the Editable Poly modifier on the stack.
-Enable polygon selection and select the face you want to flip.
-Go to the Edit Polygons rollout and select Flip.
-Collapse the stack.

Another bug in 3ds Max occasionally causes holes in an imported CAD model. The only way to fix this is to adjust the mesh resolution slider to a different resolution, or re-import the model as a mesh instead of a body object.

Imported Inventor files contain materials which are incompatible with the FBX file format, and possibly other export file formats. This results in the loss of all materials, which will make a texture set disappear in Substance Painter. In order to fix this, you need to convert all materials to a standard material.