Fast light source rendering

Rendering light sources is typically done with individual sprites, but this quickly becomes computationally expensive once you use thousands of lights. A better approach is to put all lights in a single mesh and make the individual triangles face the camera in the shader. This way you can render a huge number of lights (21,844 with Unity 2017.2, or 1,431,655,765 with Unity 2017.3) in one draw call.
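
As a rough illustration, here is a minimal sketch (not the actual SpriteLights code) of packing every light into a single mesh. Each light occupies one triangle whose vertices all start at the light position; the vertex shader is assumed to expand and orient the triangle towards the camera. The light counts above come straight from the index buffer: 16 bit in Unity 2017.2, 32 bit from Unity 2017.3 onwards, divided by three vertices per light.

using UnityEngine;
using UnityEngine.Rendering;

public static class LightMeshBuilder {

	//Builds one mesh containing a degenerate triangle per light. The shader expands each triangle into a camera facing shape.
	public static Mesh Build(Vector3[] lightPositions) {

		Mesh mesh = new Mesh();

		//Without this the mesh uses a 16 bit index buffer, which caps it at the 21844 lights mentioned above.
		mesh.indexFormat = IndexFormat.UInt32;

		Vector3[] vertices = new Vector3[lightPositions.Length * 3];
		int[] triangles = new int[lightPositions.Length * 3];

		for (int i = 0; i < lightPositions.Length; i++) {

			//All three vertices start at the light position. The vertex shader offsets them.
			vertices[i * 3 + 0] = lightPositions[i];
			vertices[i * 3 + 1] = lightPositions[i];
			vertices[i * 3 + 2] = lightPositions[i];

			triangles[i * 3 + 0] = i * 3 + 0;
			triangles[i * 3 + 1] = i * 3 + 1;
			triangles[i * 3 + 2] = i * 3 + 2;
		}

		mesh.vertices = vertices;
		mesh.triangles = triangles;

		//Large bounds so the mesh is not frustum culled while the shader moves the vertices around.
		mesh.bounds = new Bounds(Vector3.zero, Vector3.one * 100000f);

		return mesh;
	}
}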

The lights don't actually illuminate other objects, and the effect needs a good bloom shader, but the aim is to make the lights themselves look realistic.

Note that Unity's Post Processing Stack V1 bloom shader does not work well with SpriteLights. However, the current V2 beta (available on GitHub) works exceptionally well, even better than Sonic Ether's bloom shader, as it has almost no flicker.

The funny thing is that there are thousands of references on how a light affects an object, but the references on how the light source itself looks can be counted on one hand. I once found a scientific paper, but that's about it. Perhaps that is why very few people get it right. Often you see an emissive sphere with a flare sprite slapped on top of it, which is a far cry from a physically based approach. That approach is what I will describe here.

Most lights have a lens, which makes them either highly directional like a flashlight, or horizontally directional, the result of a cylindrical Fresnel lens. This directional behavior is simulated with a phase function which shows nicely on a polar graph. Here you can see two common light radiation patterns:

polar

The blue graph is the function 1 + cos(theta*2), where theta is the angle between the light normal and the vector from the light to the camera. The output of the function is the irradiance. Adding this to the shader gives the lights a nice angular effect.

lobe
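
In code, the phase function is just the angle between two vectors fed into that formula. A minimal C# version (the class and method names are mine; the asset evaluates this in the vertex shader) could look like this:

using UnityEngine;

public static class LightPhase {

	//Irradiance multiplier for the blue lobe: 1 + cos(theta*2).
	//lightNormal is the direction the light points in, toCamera is the vector from the light to the camera.
	public static float Irradiance(Vector3 lightNormal, Vector3 toCamera) {

		float cosTheta = Mathf.Clamp(Vector3.Dot(lightNormal.normalized, toCamera.normalized), -1f, 1f);

		//1 + cos(2*theta) is identical to 2*cos(theta)^2, which avoids an acos/cos pair in a shader.
		return 2f * cosTheta * cosTheta;
	}
}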

Next is the attenuation. Contrary to popular belief, focused lights (in the extreme case, lasers) still attenuate with the inverse square law, as described here:
http://www.quora.com/Is-the-light-f…distance-grows-similar-to-other-light-sources

But contrary to even popular scientific belief, light sources themselves don't behave in quite the same way, or at least not perceptually. The inverse square law states that the intensity is inversely proportional to the square of the distance, which looks like this:

640px-Inverse_square_law.svg

You see this reference all over, for example here:

inversesquare

Yet the light itself is brighter than bar number 4, which is at roughly the same distance from the camera as the light. The light itself doesn't seem to attenuate with the inverse square law. So why is this? It turns out that in order to model high gain light sources (such as directional lights), you need to place the virtual source location far behind the actual source location. Then you can apply the inverse square law like this:

inversesquarelaw

Note that highly directional lights have a very flat attenuation curve, which can be approximated with a linear function if needed in order to save GPU cycles.
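
A sketch of that idea in C# (the normalization so the intensity is 1 at the fixture is my own choice; the exact constants in the asset may differ):

using UnityEngine;

public static class LightAttenuation {

	//Inverse square attenuation with the virtual point source pushed back behind the fixture.
	//A large sourceOffset gives the flat curve of a high gain (directional) light, zero gives a bare bulb.
	public static float Attenuate(float distanceToLight, float sourceOffset) {

		float d = distanceToLight + sourceOffset;

		//Normalized so the intensity is 1 at the fixture itself (distanceToLight = 0). The +1 avoids a division by zero.
		return (sourceOffset * sourceOffset + 1f) / (d * d + 1f);
	}
}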

Some more reading about the subject here (chapter Validity of the Inverse Square Law):
http://blazelabs.com/f-u-photons.asp

One other problem is that a light disappears when it gets too far from the camera, because it becomes smaller than one pixel. That is fine for normal objects, but not for lights: even extremely distant or small lights are easily visible in real life, a star for example. It would be nice if we had a programmable rasterizer, but so far no luck. Instead, I scale the lights up when they would become smaller than one pixel, so they keep the same screen size. Together with the attenuation, this gives a very realistic effect. And all of this is done in the shader, so it is very fast: about 0.4 ms for 10,000 lights on a 780 Ti.
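
The scaling boils down to comparing the light size with the world space size of one pixel at that distance. A CPU side sketch of the math (the asset does the equivalent in the vertex shader on the projected positions):

using UnityEngine;

public static class LightScreenSize {

	//Returns the world space size a light needs so it never drops below one pixel on screen.
	public static float ClampedWorldSize(float lightWorldSize, float distanceToCamera, Camera cam) {

		//World space height covered by a single pixel at this distance (perspective camera).
		float pixelWorldSize = 2f * distanceToCamera * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad) / cam.pixelHeight;

		//Never let the light become smaller than one pixel.
		return Mathf.Max(lightWorldSize, pixelWorldSize);
	}
}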

Since I made this system for a flight simulator, I included some specific lights you find in aviation, like walking strobe lights (also done entirely in the shader):

strobe

And PAPI lights, which are a bit of a corner case. They radiate light in a split pattern like this (used by pilots to see if they are high or low on the approach):

DeWiTec_PAPI_Bremerhaven-3

Simulated here, also entirely in the shader.

papi

Normally there are only 4 of these lights in a row, but here are 10,000, just for the fun of it. They have a small transition zone where the colors are blended (just like in reality), which you won't find in any simulator product, not even multi-million dollar professional simulators. That's a simple lerp() by the way.
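
The blend is nothing more than a clamped lerp between red and white, driven by the vertical angle. A sketch (the transition angle and width parameters are placeholders, not real PAPI values):

using UnityEngine;

public static class PapiColor {

	//Blends the PAPI beam color from red (below the transition angle) to white (above it).
	//transitionAngle and transitionWidth are in degrees and depend on the light unit.
	public static Color Blend(float verticalAngle, float transitionAngle, float transitionWidth) {

		float t = Mathf.Clamp01((verticalAngle - (transitionAngle - transitionWidth * 0.5f)) / transitionWidth);
		return Color.Lerp(Color.red, Color.white, t);
	}
}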

I should also note that the shaders don’t use any conditional if-else statements but use lerp, clamp, and scaling trickery instead. So it plays nice even on low-end hardware.
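
The trick for avoiding branches is always the same: turn the comparison into a 0 to 1 weight and blend between the two outcomes. In C# terms (the shader uses the HLSL equivalents saturate() and lerp()):

using UnityEngine;

public static class Branchless {

	//Branching version: if (x > threshold) return high; else return low;
	//Branchless version: turn the comparison into a weight and lerp between the two outcomes.
	public static float Select(float x, float threshold, float low, float high) {

		//A large scale factor turns the clamped ramp into a near step function (saturate() in HLSL).
		float weight = Mathf.Clamp01((x - threshold) * 10000f);
		return Mathf.Lerp(low, high, weight);
	}
}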

Released for free, but with limited support:
https://www.assetstore.unity3d.com/en/#!/content/46409

Unity 2017.1 version here:
https://drive.google.com/file/d/0Bwk4bDWv3jAcVnBmZHp0d2lTUWc/view?usp=sharing

Unity 2017.3 version here (up to 1.4 billion lights in one mesh):
https://drive.google.com/file/d/1pEnVRJKYH4TyglL4o5XbpZisVFOZroer/view?usp=sharing

Video:


Driving non-linear gauges

airspeed

Setting the needle of a gauge in code is easy when the scale is linear, but it gets surprisingly complicated when the scale is not.

There are a few ways to deal with this problem. The easiest is to simply map different linear ranges to different segments of the gauge. However, this creates a change in needle speed when crossing the boundary. A better way is to create a logarithmic function which best fits the scale. But this can be difficult to maintain and it can be hard to make the needle follow the scale exactly, especially when the scale is not logarithmic to begin with.

The best way to deal with this problem is to make the needle follow a spline. Needle angle versus scale value pairs are stored in an array and treated as the control points of a Catmull-Rom spline.

The function of a Catmull-Rom spline is defined as:
0.5 * (2*P1 + (-P0 + P2) * t + (2*P0 - 5*P1 + 4*P2 - P3) * t^2 + (-P0 + 3*P1 - 3*P2 + P3) * t^3)
Variables P0 to P3 are the control points. Variable t is the position on the spline, with a range of 0 to 1. This only creates a spline with one section and 4 control points. To create a spline with more control points, the spline segments have to be stitched together.

The points P0 to P3 are vectors where in the case of the gauge, x is the needle angle, and y is the scale value at that angle.

A Catmull-Rom spline with multiple control points placed in a zig-zag shape. Note that the first and last control points are not shown here:
catmull-rom

A Catmull-Rom spline with 6 control points placed in a curved shape. Note that the spline does not exist in the first and last segments:
spline curve Unity

It is also possible to turn the spline into a closed loop. For that, the first and last two control points have to overlap.

Using a spline like this makes the needle follow the sampled points (scale values) exactly, with smooth interpolation in between. To get an intermediate position on the spline, a value between 0 and 1 (t) has to be supplied to the spline function. The problem is that t is not known, because we need to find the needle angle (x) for a certain scale value (y).

There are two ways to find t. One way is by using a brute force method of calculating many points on the spline and then finding the closest one to the number we are looking for. This works but is not exactly elegant, not to mention the performance and memory overhead involved. A better way is to find t mathematically. This is quite complicated but luckily it has been done before:
http://lifeinacubicleblog.com/2016/10/17/finding-catmull-rom-spline-and-line-intersection-part-2-mathematical-approach/

The blog post explains how to substitute the variables from a standard linear equation with parts of the spline formula. This allows you to solve a Cubic equation which gives you the intersection points of a straight line and a spline. Solving a Cubic equation is not exactly easy either, but luckily it has been implemented in code here:
https://www.cs.rit.edu/~ark/pj/lib/edu/rit/numeric/Cubic.shtml
https://www.codeproject.com/Articles/798474/To-Solve-a-Cubic-Equation

The code below includes the Catmull-Rom spline function, builds the Cubic equation for the line-spline intersection, and solves it. It supports multiple spline segments.

//Get a point on a Catmull-Rom spline.
//The percentage is in range 0 to 1, which starts at the second control point and ends at the second last control point. 
//The array cPoints should contain all control points. The minimum amount of control points should be 4. 
//Source: https://forum.unity.com/threads/waypoints-and-constant-variable-speed-problems.32954/#post-213942
public static Vector2 GetPointOnSpline(float percentage, Vector2[] cPoints) {

	//Minimum size is 4
	if (cPoints.Length >= 4) {

		//Convert the input range (0 to 1) to range (0 to numSections)
		int numSections = cPoints.Length - 3;
		int curPoint = Mathf.Min(Mathf.FloorToInt(percentage * (float)numSections), numSections - 1);
		float t = percentage * (float)numSections - (float)curPoint;

		//Get the 4 control points around the location to be sampled.
		Vector2 p0 = cPoints[curPoint];
		Vector2 p1 = cPoints[curPoint + 1];
		Vector2 p2 = cPoints[curPoint + 2];
		Vector2 p3 = cPoints[curPoint + 3];

		//The Catmull-Rom spline can be written as:
		// 0.5 * (2*P1 + (-P0 + P2) * t + (2*P0 - 5*P1 + 4*P2 - P3) * t^2 + (-P0 + 3*P1 - 3*P2 + P3) * t^3)
		//Variables P0 to P3 are the control points.
		//Variable t is the position within the current section, with a range of 0 to 1.
		//C# way of writing the function. Note that f means float (to force precision).
		Vector2 result = .5f * (2f * p1 + (-p0 + p2) * t + (2f * p0 - 5f * p1 + 4f * p2 - p3) * (t * t) + (-p0 + 3f * p1 - 3f * p2 + p3) * (t * t * t));

		return new Vector2(result.x, result.y);
	}

	else {

		return new Vector2(0, 0);
	}
}

//Finds the intersection points between a straight line and a spline. Solves a Cubic polynomial equation
//The output is in the form of a percentage along the length of the spline (range 0 to 1).
//The linePoints array should contain two points which form a straight line.
//The cPoints array should contain all the control points of the spline.
//Use case: create a gauge with a non-linear scale by defining an array with needle angles vs the number it should point at. The array creates a spline.
//Driving the needle with a float in range 0 to 1 gives an unpredictable result. Instead, use the GetLineSplineIntersections() function to find the angle the
//gauge needle should have for a given number it should point at. In this case, cPoints should contain x for angle and y for scale number.
//Make a horizontal line at the given scale number (y) you want to find the needle angle for. The returned float is a percentage location on the spline (range 0 to 1). 
//Plug this value into the GetPointOnSpline() function to get the x coordinate which represents the needle angle.
//Source: http://lifeinacubicleblog.com/2016/10/17/finding-catmull-rom-spline-and-line-intersection-part-2-mathematical-approach/
public static float[] GetLineSplineIntersections(Vector2[] linePoints, Vector2[] cPoints) {

	List<float> list = new List<float>();
	float[] crossings;

	int numSections = cPoints.Length - 3;

	//The line spline intersection can only be calculated for one segment of a spline, meaning 4 control points,
	//with a spline segment between the middle two control points. So check all spline segments.
	for (int i = 0; i < numSections; i++) {

		//Get the 4 control points around the location to be sampled.
		Vector2 p0 = cPoints[i];
		Vector2 p1 = cPoints[i + 1];
		Vector2 p2 = cPoints[i + 2];
		Vector2 p3 = cPoints[i + 3];

		//The Catmull-Rom spline can be written as:
		// 0.5 * (2P1 + (-P0 + P2) * t + (2P0 - 5P1 + 4P2 - P3) * t^2 + (-P0 + 3P1 - 3P2 + P3) * t^3)
		//Variables P0 to P3 are the control points.
		//Notation: 2P1 means 2*controlPoint1
		//Variable t is the position on the spline, converted from a range of 0 to 1.
		//C# way of writing the function is below. Note that f means float (to force precision).
		//Vector2 result = .5f * (2f * p1 + (-p0 + p2) * t + (2f * p0 - 5f * p1 + 4f * p2 - p3) * (t * t) + (-p0 + 3f * p1 - 3f * p2 + p3) * (t * t * t));

		//The variable t is the only unknown, so the rest can be substituted:
		//a = 0.5 * (-p0 + 3*p1 - 3*p2 + p3)
		//b = 0.5 * (2*p0 - 5*p1 + 4*p2 - p3) 
		//c = 0.5 * (-p0 + p2)
		//d = 0.5 * (2*p1)

		//This gives rise to the following Cubic equation:
		//a * t^3 + b * t^2 + c * t + d = 0

		//The spline control points (p0-3) consist of two variables: the x and y coordinates. They are independent so we can handle them separately.
		//Below, a1 is substitution a where the x coordinate of each point is used, like so:  a1 = 0.5 * (-p0.x + 3*p1.x - 3*p2.x + p3.x)
		//Below, a2 is substitution a where the y coordinate of each point is used, like so:  a2 = 0.5 * (-p0.y + 3*p1.y - 3*p2.y + p3.y)
		//The same logic applies for substitutions b, c, and d.

		float a1 = 0.5f * (-p0.x + 3f * p1.x - 3f * p2.x + p3.x);
		float a2 = 0.5f * (-p0.y + 3f * p1.y - 3f * p2.y + p3.y);
		float b1 = 0.5f * (2f * p0.x - 5f * p1.x + 4f * p2.x - p3.x);
		float b2 = 0.5f * (2f * p0.y - 5f * p1.y + 4f * p2.y - p3.y);
		float c1 = 0.5f * (-p0.x + p2.x);
		float c2 = 0.5f * (-p0.y + p2.y);
		float d1 = 0.5f * (2f * p1.x);
		float d2 = 0.5f * (2f * p1.y);

		//We now have two Cubic functions. One for x and one for y.
		//Note that a, b, c, and d are not vector variables itself but substituted functions.
		//x = a1 * t^3 + b1 * t^2 + c1 * t + d1
		//y = a2 * t^3 + b2 * t^2 + c2 * t + d2

		//Line formula, standard form:
		//Ax + By + C = 0
		float A = linePoints[0].y - linePoints[1].y;
		float B = linePoints[1].x - linePoints[0].x;
		float C = (linePoints[0].x - linePoints[1].x) * linePoints[0].y + (linePoints[1].y - linePoints[0].y) * linePoints[0].x;

		//Substituting the values of x and y from the separated Spline formula into the Line formula, we get:
		//A * (a1 * t^3 + b1 * t^2 + c1 * t + d1) + B * (a2 * t^3 + b2 * t^2 + c2 * t + d2) + C = 0

		//Rearranged version:		
		//(A * a1 + B * a2) * t^3 + (A * b1 + B * b2) * t^2 + (A * c1 + B * c2) * t + (A * d1 + B * d2 + C) = 0

		//Substituting gives rise to a Cubic function:
		//a * t^3 + b * t^2 + c * t + d = 0
		float a = A * a1 + B * a2;
		float b = A * b1 + B * b2;
		float c = A * c1 + B * c2;
		float d = A * d1 + B * d2 + C;


		//This is again a Cubic equation, combined from the Line and the Spline equation. If you solve this you can get up to 3 line-spline cross points.
		//How to solve a Cubic equation is described here: 
		//https://www.cs.rit.edu/~ark/pj/lib/edu/rit/numeric/Cubic.shtml
		//https://www.codeproject.com/Articles/798474/To-Solve-a-Cubic-Equation

		int crossAmount;
		float cross1;
		float cross2;
		float cross3;
		float crossCorrected;

		//Two different implementations of solving a Cubic equation.
		//	SolveCubic2(out crossAmount, out cross1, out cross2, out cross3, a, b, c, d);
		SolveCubic(out crossAmount, out cross1, out cross2, out cross3, a, b, c, d);

		//Get the highest and lowest value (in range 0 to 1) of the current section and calculate the difference.
		float currentSectionLowest = (float)i / (float)numSections;
		float currentSectionHighest = ((float)i + 1f) / (float)numSections;
		float diff = currentSectionHighest - currentSectionLowest;

		//Only use the result if it is within range 0 to 1.
		//The range 0 to 1 is within the current segment. It has to be converted to the range of the entire spline,
		//which still uses a range of 0 to 1.
		if (cross1 >= 0 && cross1 <= 1) {

			//Map an intermediate range (0 to 1) to the lowest and highest section values.
			crossCorrected = (cross1 * diff) + currentSectionLowest;

			//Add the result to the list.
			list.Add(crossCorrected);
		}

		if (cross2 >= 0 && cross2 <= 1) {

			//Map an intermediate range (0 to 1) to the lowest and highest section values.
			crossCorrected = (cross2 * diff) + currentSectionLowest;

			//Add the result to the list.
			list.Add(crossCorrected);
		}

		if (cross3 >= 0 && cross3 <= 1) {

			//Map an intermediate range (0 to 1) to the lowest and highest section values.
			crossCorrected = (cross3 * diff) + currentSectionLowest;

			//Add the result to the list.
			list.Add(crossCorrected);
		}
	}

	//Convert the list to an array.
	crossings = list.ToArray();

	return crossings;
}

//Solve cubic equation according to Cardano. 
//Source: https://www.cs.rit.edu/~ark/pj/lib/edu/rit/numeric/Cubic.shtml
private static void SolveCubic(out int nRoots, out float x1, out float x2, out float x3, float a, float b, float c, float d) {

	float TWO_PI = 2f * Mathf.PI;
	float FOUR_PI = 4f * Mathf.PI;

	// Normalize coefficients.
	float denom = a;
	a = b / denom;
	b = c / denom;
	c = d / denom;

	// Commence solution.
	float a_over_3 = a / 3f;
	float Q = (3f * b - a * a) / 9f;
	float Q_CUBE = Q * Q * Q;
	float R = (9f * a * b - 27f * c - 2f * a * a * a) / 54f;
	float R_SQR = R * R;
	float D = Q_CUBE + R_SQR;

	if (D < 0.0f) {

		// Three unequal real roots.
		nRoots = 3;
		float theta = Mathf.Acos(R / Mathf.Sqrt(-Q_CUBE));
		float SQRT_Q = Mathf.Sqrt(-Q);
		x1 = 2f * SQRT_Q * Mathf.Cos(theta / 3f) - a_over_3;
		x2 = 2f * SQRT_Q * Mathf.Cos((theta + TWO_PI) / 3f) - a_over_3;
		x3 = 2f * SQRT_Q * Mathf.Cos((theta + FOUR_PI) / 3f) - a_over_3;
	}

	else if (D > 0.0f) {

		// One real root.
		nRoots = 1;
		float SQRT_D = Mathf.Sqrt(D);
		float S = CubeRoot(R + SQRT_D);
		float T = CubeRoot(R - SQRT_D);
		x1 = (S + T) - a_over_3;
		x2 = float.NaN;
		x3 = float.NaN;
	}

	else {

		// Three real roots, at least two equal.
		nRoots = 3;
		float CBRT_R = CubeRoot(R);
		x1 = 2 * CBRT_R - a_over_3;
		x2 = CBRT_R - a_over_3;
		x3 = x2;
	}
}

//Mathf.Pow is used as an alternative for cube root (Math.cbrt) here.
private static float CubeRoot(float d) {

	if (d < 0.0f) {

		return -Mathf.Pow(-d, 1f / 3f);
	}

	else {

		return Mathf.Pow(d, 1f / 3f);
	}
}

In the case of the gauge, we need to make a horizontal line at the y location of the scale value we want to find the needle angle for. This gives us the intersection (t). This is not a coordinate yet, but if you plug this value (t) into the spline function, it returns a point with an x value (the needle angle, yay!) and a y value (the scale value). The scale value was already known, but it can be used to check the result.
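
Putting the two functions together for the gauge looks something like this. The control point values below are made up, and a monotonic scale (one crossing per value) is assumed:

//Needle angle (x) versus the scale value shown at that angle (y). Example values only.
Vector2[] controlPoints = new Vector2[] {
	new Vector2(-10f, -50f),  //Extra control point before the start of the scale.
	new Vector2(0f, 0f),      //0 units at 0 degrees.
	new Vector2(90f, 100f),   //100 units at 90 degrees.
	new Vector2(170f, 400f),  //400 units at 170 degrees (non-linear).
	new Vector2(270f, 800f),  //800 units at 270 degrees.
	new Vector2(280f, 850f)   //Extra control point after the end of the scale.
};

float scaleValue = 250f;

//Horizontal line at the scale value we want the needle angle for.
Vector2[] line = new Vector2[] { new Vector2(-1000f, scaleValue), new Vector2(1000f, scaleValue) };

float[] crossings = GetLineSplineIntersections(line, controlPoints);

if (crossings.Length > 0) {

	//For a monotonic gauge scale there is only one crossing. Its x component is the needle angle.
	float needleAngle = GetPointOnSpline(crossings[0], controlPoints).x;
}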

Here is an implementation in Unity which calculates the intersection between a line and a spline:
spline bend Unity

The Unity project can be found here:
https://drive.google.com/file/d/0Bwk4bDWv3jAceUl0cm5ENHFFQ2M/view?usp=sharing
Note that the Unity project contains both implementations of solving a Cubic equation, which were used to verify the result.

The red cubes are the control points of the spline. The yellow cubes create a straight line. The magenta cubes are the intersection points between the line and the spline. The green cube can be moved along the spline by moving the slider. To use, press Run, then move the cubes in the Scene window.

Another closely related application is making a gauge follow a non-linear animation, for example the EGT of a jet engine during startup. A video of the event would be recorded and used to capture sample points of EGT versus time. The time (x) and EGT (y) values would then be used to create a spline, allowing smooth interpolation between the original sample points. The line-spline intersection function can then be used to get the EGT for any point in time.

So there you have it. A real world use case of finding the intersection points between a line and a spline by solving a Cubic equation. Learning mathematics was not a waste of time after all 😉

I added the spline and spline solve functions in the Math3D Unity wiki too:
http://wiki.unity3d.com/index.php/3d_Math_functions
The functions are called GetPointOnSpline() and GetLineSplineIntersections()

Real time EFIS vector graphics

after start small

This is a tutorial on how to create a real time rendering system for a PFD, ND, ECAM, MCDU, or LCD display. This can be done in two different ways:

Mesh
-Create all graphics as separate meshes.
-Place the meshes at different heights relative to each other, simulating layers.
-Render it with a separate orthographic camera into a render texture.
-Assign the render texture to the display material.

Vector graphics
-Create an SVG vector graphics file containing the graphics.
-Render the vector graphics directly into a render texture.
-Assign the render texture to the display material.

The latter is much easier to maintain, easier to animate, and much faster to render. To render the vector graphics, a 3rd party tool called NoesisGUI is used. Despite what the name suggests, it can be used to render anything xaml based, not just a GUI. It can be found here:
http://noesisengine.com

The vector graphics can be created in a vector drawing program like Inkscape, but Inkscape is not designed for precision, which makes the workflow very cumbersome. I tried simply eyeballing the design using a perspective corrected photo as a background, but even with perspective and barrel distortion removed, a photograph is not accurate enough.

Instead, I decided to create an initial sketch with a CAD program. The constraint based parametric workflow is a joy to work with, and much faster and more accurate than using a freehand vector based program. It is best to use QCAD, as it can export a good quality SVG file. However, I already know Autodesk Inventor, so I used that to create the sketch instead.

Here are some screenshots of the CAD drawings. They only contain sketches and no solid geometry. Everything was physically measured in the aircraft, so all dimensions are correct. Note that I used two separate sketches, because the large number of constraints in a single sketch made it unstable and slow. In addition, the ISO drawing information symbols (the info box on the bottom right and the edge outline) are removed. This tutorial assumes the drawing is made in mm.

The CAD drawing contains no fill data, line widths, colors, or layers. These will be added later using Inkscape. The purpose of the CAD drawing is just to place lines and text at the correct locations.

Next, a few conversion steps have to be performed in order to get the CAD drawing into Inkscape:
-Go to Inventor->File->Save As->Save Copy As->DXF. Note that Inkscape can import DXF files, but this is buggy. As a workaround, QCAD is used to convert the DXF into an SVG file.
-QCAD->File->Open. Select the DXF file exported from Inventor.
-QCAD->File->Advanced SVG Export: select “Preserve Geometry” (to prevent text being converted to a path).

A few settings in Inkscape have to be changed to make sure the SVG coordinates are the same as in the CAD drawing. This makes it easier to make modifications.

-Inkscape->Edit -> Preferences -> Behavior -> Transforms-> Store transformation = Optimized.
-Inkscape->Edit -> Preferences -> Input / Output -> SVG Output -> Path Data -> Path string format = absolute.
-Inkscape->Edit->Preferences->Behavior->Snapping->Delay = 0.
-Inkscape->File->Document Properties->Page->General->Display Units-> mm.
-Inkscape->File->Document Properties->Page->Page size->Custom Size-> mm.
-Set the Custom Size to the size of the display and make sure it is square. For example 158, 158.
-Set the scale to 1.
-Save the file and keep a copy as a template for future designs.
-Close Inkscape, open the SVG file in a text editor, and remove the translate transform from all layers (search for transform="translate"), which is caused by the page resize.

Import the converted CAD drawing into Inkscape:
-Start Inkscape and open the template file. Inkscape->File->Import->SVG
-Position the drawing so it fits nicely in the middle of the viewbox.
-Select the imported object, then go to Object->Ungroup.
-Select all, then ungroup again. Do this a few times until there are no more groups.

Now the SVG file is ready to be modified so it looks exactly like the real display. There are a few operations which must be performed.

All lines are imported as separate path segments. If a shape needs to have a fill, the line segments need to be stitched together. Below is what a shape looks like when it consists of separate path segments. Note that you can't tell visually that it is made up of separate path segments.

lines before
Below is what the shape looks like if all individual path elements are selected. Now it is clear that it is not one single shape:
separate
Select all individual path segments as shown above. Then go to Path->Combine (or Ctrl-K). Now all segments are fused into a single object, which looks like this:
combined
Even though the path segments are fused into a single object, it is not possible to add a Fill yet. This is because the nodes of the line segments are not joined together. To do this, select the object, then select the "Edit paths by nodes" tool (the icon just below the arrow select tool). With the object selected, drag to select all nodes at the same time. After this operation it is not evident that all nodes are selected, but they are. Now click on the icon called "Join selected nodes", or press Shift-J. After the nodes are joined together, the shape looks like this:
joined
The diamonds on the corners are an indication that the join operation was successful. Now the Fill or any cutting operations work correctly.

Repeat the process for all applicable shapes. Even if a shape does not need a fill, it is still recommended to fuse path segments together where it makes sense. For example, all pitch lines for the attitude scale are fused together into one single path. This makes it easier to manage (set layers, change colors, change stroke settings, etc.) The end result should look something like the screenshot below. Note the two diagonal lines at the right. They are guide lines, used to align shapes.

PFD inkscape full

Note that the layout looks very messy. This is because all available symbols of the A320 PFD are present. The state of the symbols (color, text, number, position, etc) will be set in code (C#) at a later stage. Alternatively, you can create a copy of the SVG and delete/hide certain elements if you only want to make screenshots of certain display states.

Even if all elements are shown, it only uses two draw calls (set pass calls) in Unity, so NoesisGUI renders it very fast.

Here is a screenshot of a more realistic display state:

after start

Because vector graphics are used, it is possible to zoom infinitely while maintaining quality:

PFD full zoom

Note the small black outline on some of the symbols. This is used for added contrast, a feature which the real PFD has too. It is not possible to add an outline to a shape which does not consist of closed line segments. So to add the black outline, a duplicate is created, its color changed to black, its stroke width set a bit larger, and its z-order moved to just below the original. The two paths are then grouped together.

Here is a closeup photo of the real display where you can see the black contrast outlines too. Fun fact: by counting pixels and measuring the size of the display, you can figure out the resolution of the screen. It is about 768×768. Not exactly a Retina display, but the screen is only 158 mm square, so the pixel density is quite high, especially for the time it was designed. Right click->View Image to enlarge.

photo closeup

Once the display is rendered, it is not possible to zoom in with the camera and maintain visual quality, but the same vector graphics can be added to a higher resolution texture to achieve the same effect.

Once the SVG file is done, it has to be exported to a xaml file, because that is the format used by NoesisGUI. Unfortunately the xaml exporter in Inkscape is very buggy and unusable. Luckily there is a standalone converter available which creates high quality xaml files. It is called ViewerSVG and is available here:
http://www.ab4d.com/ViewerSvg.aspx

To convert the SVG to xaml with ViewerSVG do the following:
-Drag and drop the SVG file onto ViewerSVG.
-Select the Export icon (bottom left corner).
-On the top right corner change Target Platform to Silverlight XAML.
-On the bottom right corner change New Width to 1024 (assuming the texture you want to create for NoesisGUI is this size).
-Click on the Transform button.
-Click Save.

viewerSVG

Now the xaml file is ready to be used by NoesisGUI. We will use Unity to render the result but NoesisGUI also has a native C++ SDK so you can use it in a different game engine.

In order to use the xaml file in Unity, do the following:
-Create new Unity project.
-Import the NoesisGUI unitypackage.
-Before adding the XAML to Unity, open it and modify all FontFamily entries so that a # character is in front of the font name. For example: FontFamily="#Arial".
-Drag and drop all fonts used in the xaml into the same directory where the xaml file will be placed in Unity.
-Drag and drop the xaml file into the same directory as the fonts. When the xaml file is imported into Unity, it automatically generates an asset file. This asset file is the one used by NoesisGUI, not the xaml file. Updating the xaml file will not re-import the asset file, so it is best to delete the xaml file from the Unity folder.

To render the xaml to a mesh plane, do the following:
-Add a NoesisView component to the display Game Object (should be a square mesh, UV mapped correctly).
-Add the XAML asset file to the NoesisView component (not the xaml file but the .asset file which was automatically generated).
-Disable keyboard, mouse, and touch checkboxes.
-Set anti aliasing to PPAA (GPU).
-Create a render texture (no anti aliasing, and Depth Buffer set to 24 bit with stencil).
-Add the render texture to the appropriate texture slot on the material from the display Game Object.
-Press Play and check if the xaml file is rendered correctly.

Here are some screenshots from Unity. I use the standard specular shader with a slight red specular tint to simulate the anti-reflective coating. The render texture is added to the Emission slot only. The Emission color is set to gray, otherwise the display is too bright. The Emission color can be changed in code to simulate a display brightness change.
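
Changing the brightness in code is a one-liner on the display material. A sketch, assuming the standard shader setup described above (with the emission keyword already enabled in the editor):

//Simulate the display brightness knob by scaling the emission color of the display material.
Material displayMaterial = GetComponent<Renderer>().material;
float brightness = 0.5f; //0 = off, 1 = full brightness.
displayMaterial.SetColor("_EmissionColor", UnityEngine.Color.white * brightness);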

PFD day
PFD night

The following code can be used to animate the vector graphics. The C# script has to be placed on the display Game Object.

Add to top of C# script:

using Noesis;

Get a handle to a path:

NoesisView panel = GetComponent<NoesisView>();
Path obj = (Path)panel.Content.FindName("line6866");

Get a handle to a group:

NoesisView panel = GetComponent<NoesisView>();
Noesis.Canvas obj = (Noesis.Canvas)panel.Content.FindName("g865");

Enable transformations:

RotateTransform rotateTransform = new RotateTransform();
TranslateTransform translateTransform = new TranslateTransform();
TransformGroup transformGroup = new TransformGroup();
transformGroup.Children.Add(rotateTransform);
transformGroup.Children.Add(translateTransform);
obj.RenderTransform = transformGroup;

Move a path:

translateTransform.X = 5f;

In order to rotate a shape around its pivot point, a RenderTransformOrigin property has to be present in the xaml. The RenderTransformOrigin uses the range 0 to 1 and is based on 4 properties: Width, Height, Canvas.Left, and Canvas.Top. These properties must be present in the xaml shape and set to the shape bounding box. Additionally, this property has to be added: Stretch="Uniform". For example:

RenderTransformOrigin="0.008,0.989" Width="307.24" Height="214.098" Canvas.Left="508.8" Canvas.Top="628.8" Stretch="Fill"

When the required xaml code is present, the shape can be rotated around the pivot point using this code:

rotateTransform.Angle = 30f;

If an object has a MatrixTransform in the xaml code, you can't use rotateTransform, otherwise you will get scaling issues. In that case, use the code below. Bear in mind though that the RotateAt pivot point coordinates are absolute canvas coordinates, not in the relative 0-1 range used by RenderTransformOrigin and rotateTransform.Angle. If you don't want to use a MatrixTransform, you need to wrap the shape or group in another group (without a matrix transform) and use rotateTransform.Angle instead.

NoesisView panel = GetComponent<NoesisView>();
Noesis.Canvas obj = (Noesis.Canvas)panel.Content.FindName("g840");
MatrixTransform matrixTransform = (MatrixTransform)obj.RenderTransform;
Transform2 matrix = matrixTransform.Matrix;
matrix.RotateAt(0.5f, 232, 490);
matrixTransform.Matrix = matrix;

Hide a path:

obj.Visibility = Visibility.Hidden;

Set the color of a path:

obj.Stroke = new SolidColorBrush(Noesis.Color.FromLinearRGB(255, 255, 0));

Set the stroke thickness of a path:

obj.StrokeThickness = 3f;

Modify an existing path:

string dataString = obj.Data.ToString();
//Modify the path string here. For example:
dataString = "M157.626,80.6264L149.626,88.6264";
StreamGeometry streamGeometry = new StreamGeometry();
streamGeometry.SetData(dataString);
obj.Data = streamGeometry;

Clone an existing path. This requires all relevant properties to be copied. Only Fill, Stroke, and StrokeThickness are shown here:

Path obj2 = new Path();
obj2.Data = obj.Data;
obj2.Fill = obj.Fill;
obj2.Stroke = obj.Stroke;
obj2.StrokeThickness = obj.StrokeThickness;
Noesis.Canvas canvas = (Noesis.Canvas)panel.Content.FindName("layer1");
canvas.Children.Add(obj2);

Create a path which can be drawn using commands instead of a string (does not draw anything yet):

Noesis.Canvas canvas = (Noesis.Canvas)panel.Content.FindName("layer1");
Path shapePath = new Path();
shapePath.Stroke = new SolidColorBrush(Colors.Green);
shapePath.StrokeThickness = 1;
StreamGeometry streamGeometry = new StreamGeometry();
streamGeometry.FillRule = FillRule.EvenOdd;
shapePath.Data = streamGeometry;
canvas.Children.Add(shapePath);

Draw a path using commands instead of a string:

using (StreamGeometryContext ctx = streamGeometry.Open())
{
	ctx.BeginFigure(new Point(10, 90), true);
	ctx.LineTo(new Point(20, 90));
	ctx.ArcTo(new Point(60, 60), new Size(new Point(10, 10)), 0, false, SweepDirection.Counterclockwise);
}

Change text:

TextBlock fdText = (TextBlock)panel.Content.FindName("text7246");
fdText.Text = "ABC";

Show/hide text:

TextBlock fdText = (TextBlock)panel.Content.FindName("text7246");
fdText.Visibility = Visibility.Hidden;
fdText.Visibility = Visibility.Visible;

Change text color:

TextBlock fdText = (TextBlock)panel.Content.FindName("text7246");
fdText.Foreground = new SolidColorBrush(Noesis.Color.FromLinearRGB(255, 255, 0));

Add xaml code from another xaml file:

NoesisView panel = GetComponent<NoesisView>();

Noesis.Canvas root = (Noesis.Canvas)panel.Content.FindName("layer1");
Noesis.Canvas xaml = (Noesis.Canvas)Noesis.GUI.LoadXaml("Assets/file1.xaml");

//Place new xaml content in current xaml.
root.Children.Add(xaml);

Replace the entire xaml file with another xaml file by using Resources.Load():

The xaml file which has been converted to an .asset file has to be placed in a folder called Assets/Resources. The file is then referenced in the Resources.Load() function without the extension. For example “Assets/Resources/file1.asset” becomes “file1”.

NoesisView panel = GetComponent<NoesisView>();
NoesisXaml xaml = (NoesisXaml)UnityEngine.Resources.Load("file1", typeof(NoesisXaml));
panel.Xaml = xaml;
panel.LoadXaml(true);

Replace the entire xaml file with another xaml file by using a public variable:

//Drag and drop the xaml file on the Inspector from the script.
public NoesisXaml xaml;

//At Start() function:
NoesisView panel = GetComponent<NoesisView>();			
panel.Xaml = xaml;
panel.LoadXaml(true);

In case you are interested, here are both the SVG and xaml source of the A320 PFD shown in this tutorial:

SVG
XAML

Note that the Back Up Speed Scale (BUSS) is also present but hidden to prevent clutter. Just do a text search for BUSS and you can enable the code manually.

All shapes, text, and groups have an appropriate ID so you can find them easily.

The green altimeter numbers are hidden by a mask so you can animate them without having to worry about overdraw. The big numbers are called altLeftA, altLeftB, altMidA, altMidB, altRightA, altRightB. Not all numbers can be seen in the original SVG file because of the clipping mask but they are still there. Here is a screenshot with the clipping mask removed, revealing some hidden numbers.

alt clip

To animate the speed, altitude, and heading bars, simply move the index notches and numbers, change a number when it is out of view, and re-position it accordingly.
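
As an example, here is a sketch of scrolling the altitude tape with a TranslateTransform. The group name and the canvas-units-per-foot factor are made up; they depend on your xaml:

//Sketch of scrolling a tape group. "altTapeGroup" and unitsPerFoot are placeholders.
NoesisView panel = GetComponent<NoesisView>();
Noesis.Canvas altTape = (Noesis.Canvas)panel.Content.FindName("altTapeGroup");

TranslateTransform tapeTransform = new TranslateTransform();
altTape.RenderTransform = tapeTransform;

float altitude = 2350f;    //Current altitude in feet.
float unitsPerFoot = 0.1f; //How many canvas units the tape moves per foot (made up).
tapeTransform.Y = altitude * unitsPerFoot;

//When a number scrolls out of the masked window, change its Text and move it back to the other side.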

To animate the Vertical Speed needle, change the start and end points of the line and the underlying black contrast shape. Do not use rotate and scale, as that can lead to unexpected results. The VS needle only goes to 6000 fpm and then stops; any higher value is only visible in the VS number box. Near the VS needle is a line called "VSreference". The right point of that line is the virtual pivot point of the VS needle, so any VS needle deflection must be drawn between that point and the current VS value. The line should only start drawing at the edge of the screen though.
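
A sketch of redrawing the needle with a new path string. The element name and coordinates below are made up; in the real xaml the pivot is the right point of the VSreference line:

NoesisView panel = GetComponent<NoesisView>();
Path vsNeedle = (Path)panel.Content.FindName("vsNeedle");

//Both points lie on the line through the virtual pivot, but only the part from the
//screen edge to the current VS value is drawn.
string edgePoint = "640,455"; //Where the needle enters the screen (example values).
string vsPoint = "600,430";   //Position of the current VS value (example values).

StreamGeometry geometry = new StreamGeometry();
geometry.SetData("M" + edgePoint + " L" + vsPoint);
vsNeedle.Data = geometry;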

The black outline used on some objects to increase contrast can cause aliasing at low resolution. In that case it is best to disable them.

Note that the attitude pitch angle scale is not linear and this has to be taken into account when setting a pitch angle.

Note that flowed text is not supported by ViewerSVG. QCAD creates regular text from imported CAD drawings, but if you create text inside Inkscape, do not drag to make a text box. Instead, just select the text tool, click, and type. This creates regular text, which does not cause any problems.

Send me a mail or post a comment if you have any questions.

 

Machine Learning and the future of design

I have been pondering whether or not I should write this blog post for a while now. I don’t like to make predictions about the future because if you look back at future predictions made 30+ years ago, most of them look silly. However, I really think what I am going to talk about will happen. I don’t know when, but it is inevitable. And it will question the very essence of what it means to be human.

This vision about the future gained my interest after I finished the A320 cockpit project. I thought it was a good time to reflect on the lessons learned from the past and how to do it differently in the future. In short, I spent many, many hours measuring the cockpit, taking thousands of photos, learning how to do CAD design, learning a new scripting language for CAD conversion, and learning how to do texturing. Granted, I already knew a little bit of CAD, and know how to code, but suffice it to say, it was a lot of work for replicating an existing design. And all of this is just for the 3d model. It doesn’t even scratch the surface of the systems logic.

Currently there is no way around the process of replicating a design in software. Someone has to do the hard work. Of course it will be less work if you don’t have to learn a new skill, but it still requires manual labor. Lots of it. Now the question is, can’t this be automated?

There was a time not so long ago when people performed tasks which have now become obsolete due to the advance of technology. A telephone switch board operator is a good example. But also several jobs in the cockpit, including the radio operator, navigator, and flight engineer. However, all of these jobs were relatively simple. A computer could do the same job.

Reconstructing an entire cockpit with all of the related complexities is a different story. This can never be automated. Or can it? Let’s have a look at the current situation in 2017 (this is going to be fun to read 30 years from now).

There are 3d scanners capable of scanning objects with micron resolution. They can output a point cloud with color information, and specialized software can turn this into a textured mesh. Sounds great, but it is not usable for real time rendering. A lot of manual work is still required: cleaning up the mesh, stitching separate scans together, reducing the poly count, and baking the high res model data into textures. Not to mention the fact that one of these scanners costs the same as a sports car, and that they can't even scan reflective surfaces.

So back to the original question. Can this be automated? Not currently. But in the future it definitely can. At some point, scanners will be available which can scan any type of surface using a variety of techniques and lighting conditions at the same time, for example a camera, a laser projected grid, regular lights, and high resolution radar. They will capture all surface properties such as albedo, metallic, roughness, normal, opacity, and anisotropic/microfacet structure.

But the scanner is not what I am here to talk about. The perfect scanner for 3d acquisition would still require a lot of manual work in order to make the data suitable for realtime rendering. The fact that computers will be more powerful in the future is not the answer to overly complex or inefficient data. That is bad use of resources which is better spent elsewhere.

So even with the perfect scanner, there will still be one missing link: Artificial Intelligence. In order to turn scan data into something usable, the data has to be seen in context. It has to be compared with design drawings, millions of photos, and videos of moving parts. On top of that, it has to be able to take user input when it gets something wrong. A simple voice instruction such as "this surface should be perfectly flat, and that edge should have a 1 mm chamfer" should be easily understood and implemented. It should then automatically understand that the same user applied rule applies to all similar geometry, ask for confirmation, and execute. It should know the strengths and weaknesses of current rendering hardware and create textures, meshes, and animations which are optimized for the hardware available at the time.

Now that we have the Artificial Intelligence capable of understanding real world physical objects, we can take it one step further. Let the AI read all aircraft manuals such as the FCOM, FCTM, AFM, and maintenance manuals. Of course having access to the original design documents from the aircraft manufacturer would be nice but let’s not be too optimistic. That will never happen. Not to worry though. Reverse engineering AI to the rescue.

When our AI deep neural network can read all available aircraft documentation, it should be able to get a solid understanding of the systems logic and the aircraft capabilities. Feeding it thousands of hours of Level-D simulator video data will further enhance the result. The AI should be able to ask a human questions if things are not clear or contradict each other. The AI should generate a test environment where the systems can be tried out, taking corrective input in the form of voice instructions.

There will be no more need for actual flight test data for the flight dynamics model. When scan data of the aircraft exterior is used, AI can figure out the flight dynamics model using fluid dynamics. The only hard part is to find out what the angular momentum constant is, because this requires knowledge of the location, size, and weight distribution of every single part in the aircraft. It is unlikely this kind of information is publicly available, and it would require a fleet of nano-bots to scan. But AI can take cockpit flight video data which includes aircraft weight, CG, and sidestick position to make comparisons and produce a good enough estimate.

It should be mentioned that the type of AI I am talking about requires an obscene amount of computing power by today's standards. To put it into perspective: currently, in 2017, you need the latest and most expensive desktop hardware to teach AI how to recognize a cat in an image. The most expensive cloud based AI voice recognition system cannot recognize the phrase "Flaps 1" without a contextual setting. We are very far from achieving the ultra deep neural network speed our AI needs for this type of machine learning. At the current rate of hardware advancement it is going to take too long. It requires a new type of computing technology, perhaps based on light, or based on electrons but structured like a nano neural network and able to re-configure itself.

Whatever hardware may be developed in the future, one thing is for certain. AI will come. In fact, it is already here, albeit not so “intelligent”, and most definitely not self aware. And when the day of capable AI comes, many jobs will disappear. If you are doing repetitive tasks which require no original input, your job is the first to disappear. But even jobs requiring ingenuity will eventually disappear, because it just requires more computing power. This brings me to the next section. What does it mean to be human?

It may seem like a rather strange thing to say. We are talking about technology after all. But it is not strange at all, because it will affect you if you are young enough to see the singularity happen. But it is not all doom and dystopia. Quite the opposite. Do you really want to be that telephone switchboard operator? Work as a cashier? Fly that plane? Maybe yes, just for the fun of it, or for the social interaction. But not so much.

If AI and its physical extension (robotic-bionic technology) can replace most of our jobs, wouldn't the world economy collapse and widen the poverty gap even more? Not at all. Why do you have a job in the first place? To make money. Why make money? So you can do the things you enjoy. What if everything you need can be made by AI? Software especially will be better, instantly customizable, and free of bugs. But hardware created by AI will be better too, because it has no flaws. Even if a 3d printer can't print an apple, you can still have a farming bot. You will have every thing and every service you need. Money will be meaningless because there are no goods or services to trade. This economy cannot crash because there is no economy, just like there was no economy 60,000 years ago and people had everything they needed.

So if a piece of software living in a bot can do everything a human can, what does it mean to be human? You might say: but I am creative, I can think of something out of thin air, something which did not exist yet. Surely a machine can't do that. Actually it can, and it can do it better, because it can either take a random seed and start creating from there, or try every possible configuration and come up with something new. Given enough computing power, AI can be much, much more creative than humans.

So what is the difference between you and AI? That depends on what goes on in your mind and what you do with it. After the argument of ingenuity fails, the word consciousness quickly comes up. But if all consciousness means is being aware of yourself, AI can be aware of itself too. That is not so hard. Perhaps AI doesn't act on its own and always follows instructions? Well, let's hope that is true, because if it isn't, things could get complicated. How about feeling? Can AI feel? That depends on how you define feeling. If it is the physical sensation you get when you are in love, or your conscience telling you not to steal that cookie, then AI can have that too. It could be just a parallel program running in the background, outside of the main loop, out of reach.

I think that, given enough computing power, AI can definitely be everything a human is, and more. It could help humanity by freeing us from the economic vicious circle and allowing nature to recover. But it could also become self aware and get a mind of its own. And if every computing device in the world is hooked up to the internet and contributing to its computing power, it is not so easy to shut off either.

I guess the real question we should be asking is not “can AI think like a human” but “can a human stop thinking like AI”. Are you really conscious or is that feeling just a program in your mind? Do you do repetitive tasks without thinking much? Is the same dialog running in your mind over and over again?

AI is coming and there is no stopping it. Conversations like “we should do this to prevent that” are futile because there is no collective. Look around you. But in order for AI to keep working for you instead of against you, you need to be more than what AI is. What this means is to evolve and stay ahead of what AI is capable of on a conscious level. Initially there will be resistance when AI takes your job, but in the end it will be better because you didn’t want that job anyway. And even later, you don’t need that job anymore. At some point, AI will have evolved so far that we can have a conversation with it and we can discover what it really means to be human. At that point, we ourselves will evolve to the next level.

 

 

New panels

I made 2 new panels for the A320 CAD cockpit in Unity: an older Honeywell ADIRS, and a 2-bottle cargo fire panel. Here are some screenshots. The panels can be easily swapped using a configuration menu. Note that the shadow caster count in the stats window is excessive as I didn’t optimize it yet. The finished model has only a few shadow casters.

Metal edge wear in Substance Painter

Here is an experiment with different layers of materials in Substance Painter. The bottom layer is aluminium, followed by a paint primer, surface paint, and dust on top.

The different layers are revealed using hand painted masks, but you can use a mask generator as well.

Click on the picture to view it fullscreen.