Machine Learning and the future of design

I have been pondering whether or not I should write this blog post for a while now. I don’t like to make predictions about the future because if you look back at future predictions made 30+ years ago, most of them look silly. However, I really think what I am going to talk about will happen. I don’t know when, but it is inevitable. And it will question the very essence of what it means to be human.

This vision of the future caught my interest after I finished the A320 cockpit project. I thought it was a good time to reflect on the lessons learned from the past and how to do things differently in the future. In short, I spent many, many hours measuring the cockpit, taking thousands of photos, learning how to do CAD design, learning a new scripting language for CAD conversion, and learning how to do texturing. Granted, I already knew a little bit of CAD and how to code, but suffice it to say, it was a lot of work just to replicate an existing design. And all of this is only for the 3d model. It doesn’t even scratch the surface of the systems logic.

Currently there is no way around the process of replicating a design in software. Someone has to do the hard work. Of course it will be less work if you don’t have to learn a new skill, but it still requires manual labor. Lots of it. Now the question is, can’t this be automated?

There was a time not so long ago when people performed tasks which have now become obsolete due to the advance of technology. A telephone switchboard operator is a good example, but so are several jobs in the cockpit, including the radio operator, navigator, and flight engineer. However, all of these jobs were relatively simple. A computer could do the same job.

Reconstructing an entire cockpit with all of the related complexities is a different story. This can never be automated. Or can it? Let’s have a look at the current situation in 2017 (this is going to be fun to read 30 years from now).

There are 3d scanners capable of scanning objects with micron resolution. They can output a point cloud with color information, and specialized software can turn this into a textured mesh. Sounds great, but it is not usable for real time rendering. A lot of manual work is still required: cleaning up the mesh, stitching separate scans together, reducing the poly count, and baking the high res model data into textures. Not to mention the fact that one of these scanners costs the same as a sports car, and that they can’t even scan reflective surfaces.

So back to the original question. Can this be automated? Not currently. But in the future, it definitely can. At some point, scanners will be available which can scan any type of surface using a variety of techniques and lighting conditions at the same time: for example, a camera, a laser projected grid, regular lights, and high resolution radar. They will capture all surface properties such as albedo, metallic, roughness, normal, opacity, and anisotropic/microfacet structure.

But the scanner is not what I am here to talk about. Even the perfect scanner for 3d acquisition would still require a lot of manual work to make the data suitable for realtime rendering. The fact that computers will be more powerful in the future is not the answer to overly complex or inefficient data. That is a bad use of resources which are better spent elsewhere.

So even with the perfect scanner, there will be one missing link: Artificial Intelligence. In order to turn scan data into something usable, the data has to be seen in context. It has to be compared with design drawings, millions of photos, and videos of moving parts. On top of that, it has to be able to take user input when it gets something wrong. A simple voice instruction such as “this surface should be perfectly flat, and that edge should have a 1 mm chamfer” should be easily understood and implemented. It should then automatically understand that the same user-applied rule applies to all similar geometry, ask for confirmation, then execute. It should know the strengths and weaknesses of current rendering hardware and create textures, meshes, and animations that are optimized for the hardware available at the time.

Now that we have the Artificial Intelligence capable of understanding real world physical objects, we can take it one step further. Let the AI read all aircraft manuals such as the FCOM, FCTM, AFM, and maintenance manuals. Of course having access to the original design documents from the aircraft manufacturer would be nice but let’s not be too optimistic. That will never happen. Not to worry though. Reverse engineering AI to the rescue.

When our AI deep neural network can read all available aircraft documentation, it should be able to get a solid understanding of the systems logic and the aircraft capabilities. Feeding it thousands of hours of Level-D simulator video data will further enhance the result. The AI should be able to ask a human questions if things are not clear or contradict each other. The AI should generate a test environment where the systems can be tried out, taking corrective input in the form of voice instructions.

There will be no more need for actual flight test data for the flight dynamics model. When scan data of the aircraft exterior is used, AI can figure out the flight dynamics model using fluid dynamics. The only hard part is figuring out the moments of inertia, because this requires knowledge of the location, size, and weight distribution of every single part in the aircraft. It is unlikely that this kind of information is publicly available, and it would require a fleet of nano-bots to scan. But AI can take cockpit flight video data which includes aircraft weight, CG, and sidestick position to make comparisons and arrive at a good enough estimate.

It should be mentioned that the type of AI I am talking about requires an obscene amount of computing power by today’s standards. To put it into perspective, in 2017 you need the latest, most expensive desktop hardware just to teach AI how to recognize a cat in an image. The most expensive cloud based AI voice recognition system cannot recognize the phrase “Flaps 1” without a contextual setting. We are very far from achieving the ultra deep neural network speed our AI needs for this type of machine learning. At the current rate of hardware advancement it is going to take too long. It requires a new type of computing technology. Perhaps based on light, or on electrons but structured like a nano-scale neural network that can re-configure itself.

Whatever hardware may be developed in the future, one thing is for certain: AI will come. In fact, it is already here, albeit not so “intelligent”, and most definitely not self aware. And when the day of capable AI comes, many jobs will disappear. If you are doing repetitive tasks which require no original input, your job will be the first to go. But even jobs requiring ingenuity will eventually disappear, because replacing them just requires more computing power. This brings me to the next question: what does it mean to be human?

It may seem like a rather strange thing to say. We are talking about technology after all. But it is not strange at all, because it will affect you if you are young enough to see the singularity happen. And it is not all doom and dystopia. Quite the opposite. Do you really want to be that telephone switchboard operator? Work as a cashier? Fly that plane? Maybe, just for the fun of it or for the social interaction. But probably not.

If AI and its physical extension (robotic-bionic technology) can replace most of our jobs, wouldn’t the world economy collapse and widen the poverty gap even more? Not at all. Why do you have a job in the first place? To make money. Why make money? So you can do the things you enjoy. What if everything you need can be made by AI? Software especially will be better, instantly customizable, and free of bugs. But hardware created by AI will be better too, because it has no flaws. Even if a 3d printer can’t print an apple, you can still have a farming bot. You will have every thing and every service you need. Money will be meaningless because there are no goods or services to trade. This economy cannot crash because there is no economy, just like there was no economy 60,000 years ago and people had everything they needed.

So if a piece of software living in a bot can do everything a human can, what does it mean to be human? You might say: but I am creative. I can think of something out of thin air, something which did not exist yet. Surely a machine can’t do that. Actually it can, and it can do it better, because it can either take a random seed and start to create from there, or try every possible configuration and come up with something new. Given enough computing power, AI can be much, much more creative than humans.

So what is the difference between you and AI? That depends on what goes on in your mind and what you do with it. After the argument of ingenuity fails, the word consciousness quickly comes up. But if all consciousness means is being aware of yourself, AI can be aware of itself too. That is not so hard. Perhaps AI doesn’t act on its own and always follows instructions? Well, let’s hope that is true, because if it isn’t, things could get complicated. How about feeling? Can AI feel? That depends on how you define feeling. If it is the physical sensation you get when you are in love, or your conscience telling you not to steal that cookie, then AI can have that too. It could be just a parallel program running in the background, outside of the main loop, out of reach.

I think that given enough computing power, AI can definitely be everything a human is, and more. It could help humanity by freeing us from the economic vicious circle and allow nature to recover. But it could also become self aware and get a mind of its own. And if every computing device in the world is hooked up to the internet and contributing to its computing power, it is not so easy to shut off either.

I guess the real question we should be asking is not “can AI think like a human” but “can a human stop thinking like AI”. Are you really conscious or is that feeling just a program in your mind? Do you do repetitive tasks without thinking much? Is the same dialog running in your mind over and over again?

AI is coming and there is no stopping it. Conversations like “we should do this to prevent that” are futile because there is no collective. Look around you. But in order for AI to keep working for you instead of against you, you need to be more than what AI is. What this means is to evolve and stay ahead of what AI is capable of on a conscious level. Initially there will be resistance when AI takes your job, but in the end it will be better, because you didn’t want that job anyway. And later still, you won’t need that job anymore. At some point, AI will have evolved so far that we can have a conversation with it and discover what it really means to be human. At that point, we ourselves will evolve to the next level.



New panels

I made 2 new panels for the A320 CAD cockpit in Unity: an older Honeywell ADIRS, and a 2-bottle cargo fire panel. Here are some screenshots. The panels can be easily swapped using a configuration menu. Note that the shadow caster count in the stats window is excessive as I didn’t optimize it yet. The finished model has only a few shadow casters.
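
For those curious how the swap works, here is a rough sketch of the idea rather than the project’s actual code; the class and field names are hypothetical. The configuration menu simply enables one panel variant and disables the others:

//Hypothetical sketch: swap panel variants by enabling one GameObject at a time.
using UnityEngine;

public class PanelSwapper : MonoBehaviour
{
    //Panel variants assigned in the Inspector, e.g. the two ADIRS versions.
    public GameObject[] panelVariants;

    public void SelectPanel(int index)
    {
        for (int i = 0; i < panelVariants.Length; i++)
            panelVariants[i].SetActive(i == index);
    }
}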

Metal edge wear in Substance Painter

Here is an experiment with different layers of materials in Substance Painter. The bottom layer is aluminium, followed by a paint primer, surface paint, and dust on top.

The different layers are revealed using hand-painted masks, but you can use a mask generator as well.

Click on the picture to view it fullscreen.

Texture assign for Unity

Ever wondered why every Substance Painter to Unity workflow tutorial shows the Unity scene with all textures already applied? Because this process is incredibly tedious and time consuming. Each material has 4 or 5 textures which you need to find, then drag and drop onto the material.

The TextureAssign editor script solves this problem by automating the assignment of textures to materials in Unity. It works by name-matching textures with materials.

textureassign

Get it here:
https://share.allegorithmic.com/libraries/2396
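
The core idea is simple enough to sketch. The snippet below is not the actual TextureAssign source, just a rough illustration of the name-matching approach. It assumes Substance-style texture suffixes such as “_BaseColor” and the Standard shader slots, and it needs to live in an Editor folder:

//Rough sketch of name-matching textures to materials (not the actual TextureAssign source).
//Place this script in a folder called Editor.
using UnityEngine;
using UnityEditor;

public static class TextureAssignSketch
{
    [MenuItem("Tools/Assign Textures By Name")]
    static void Assign()
    {
        foreach (string matGuid in AssetDatabase.FindAssets("t:Material"))
        {
            Material mat = AssetDatabase.LoadAssetAtPath<Material>(AssetDatabase.GUIDToAssetPath(matGuid));

            //Find textures whose name contains the material name.
            foreach (string texGuid in AssetDatabase.FindAssets(mat.name + " t:Texture2D"))
            {
                Texture2D tex = AssetDatabase.LoadAssetAtPath<Texture2D>(AssetDatabase.GUIDToAssetPath(texGuid));

                //Route each texture to a shader slot based on its name suffix.
                if (tex.name.EndsWith("_BaseColor")) mat.SetTexture("_MainTex", tex);
                else if (tex.name.EndsWith("_Normal")) mat.SetTexture("_BumpMap", tex);
                else if (tex.name.EndsWith("_Emissive")) mat.SetTexture("_EmissionMap", tex);
                else if (tex.name.EndsWith("_MetallicSmoothness")) mat.SetTexture("_MetallicGlossMap", tex);

                EditorUtility.SetDirty(mat);
            }
        }
        AssetDatabase.SaveAssets();
    }
}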


A320 cockpit CAD model renders

Here are some new renders I made from the A320 cockpit CAD model. It now includes all panel text, which is embossed into the geometry. View the screenshots at full size by clicking on the thumbnail and then selecting “view full size” on the bottom right.

The CAD model is available for purchase. Please use the contact form at www.airlinetools.com for inquiries.

Using sprites for button states

Changing the state of a button in Unity (ON, OFF, FAULT, etc.) can be done in a few different ways. The easiest is to make a few different materials and change the material at runtime. However, this is not good for performance.
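
To illustrate, the material swapping approach looks roughly like the sketch below. This is not code from this project, just a minimal example with the two materials assumed to be assigned in the Inspector:

//Sketch of the simple material swapping approach.
using UnityEngine;

public class ButtonMaterialSwap : MonoBehaviour
{
    //Assigned in the Inspector.
    public Material offMaterial;
    public Material onMaterial;

    public void SetLit(bool lit)
    {
        //Assigning .material creates a per-renderer material instance and breaks batching,
        //which is why this approach does not scale well in a cockpit full of buttons.
        GetComponent<Renderer>().material = lit ? onMaterial : offMaterial;
    }
}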

There is a better way. Use a texture atlas (sprite sheet) and shift the UVs. This way the operation runs entirely on the GPU, which is many times faster. It is not as easy to set up though, so here is a detailed description of how to do it.

First you need to render separate emissive textures for each button state. In most cases there are 4 states: no lights, ON light, FAULT light, and ON and FAULT light together. This is great because each texture can be stored in a corner of the main texture, making it both efficient and easy to set up.

The textures can be rendered in Substance Painter by creating a layer for each button state, each with different emissive materials placed using the ID map. Then disable all emissive layers except one and export the textures. Rename the emissive texture and export again with another button state enabled. Do this for each button state until you have 4 separate textures. Click on the thumbnail for a better view. I wrote a plugin for Substance Painter which makes exporting the textures easier. You can find it here:

https://share.allegorithmic.com/libraries/2319

Once the plugin is installed, just press the “Export Emissive” button and it will automatically save the emissive channel and rename the texture as necessary.

Below you can see the 4 exported emissive channel textures. Note that they are oriented on their side. This is due to the automatic UV unwrapping from Unwrella. It might not look nice, but it is completely irrelevant in our workflow.

emission-textures

The next step is to place each of the 4 textures in the corner of a new, bigger texture. This can be done in Gimp using a plugin called “fuse layers”. You can find the plugin here:
http://registry.gimp.org/node/25129

Place the plugin in the following directory:
C:\Program Files\GIMP 2\share\gimp\2.0\scripts\

Once the plugin is installed, fuse the 4 textures into a single one. (If you prefer to stay inside Unity, a scripted alternative is sketched after these steps.)
-File->New-> set the same resolution as the input images. The resolution of the final image will be increased automatically.
-Set Image->Mode to RGB.
-File->Open as layers-> select all 4 images.
-Delete the background layer.
-Filters->Combine->Fuse layers. Set x = 2.
-To save, use File->Export.
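
For reference, the same 2x2 packing can also be scripted inside Unity. Here is a rough editor script sketch; the texture paths are placeholders and the source textures must have Read/Write enabled in their import settings:

//Sketch: pack four same-sized emissive textures into a 2x2 atlas. Place in a folder called Editor.
using System.IO;
using UnityEngine;
using UnityEditor;

public static class EmissiveAtlasPacker
{
    [MenuItem("Tools/Pack Emissive Atlas")]
    static void Pack()
    {
        //Placeholder paths, ordered bottom left, bottom right, top left, top right.
        string[] paths =
        {
            "Assets/Textures/emissive_off.png",
            "Assets/Textures/emissive_on.png",
            "Assets/Textures/emissive_fault.png",
            "Assets/Textures/emissive_on_fault.png"
        };

        Texture2D first = AssetDatabase.LoadAssetAtPath<Texture2D>(paths[0]);
        int w = first.width;
        int h = first.height;
        Texture2D atlas = new Texture2D(w * 2, h * 2, TextureFormat.RGBA32, false);

        for (int i = 0; i < paths.Length; i++)
        {
            Texture2D src = AssetDatabase.LoadAssetAtPath<Texture2D>(paths[i]);
            //Column and row of the current quadrant.
            int x = (i % 2) * w;
            int y = (i / 2) * h;
            atlas.SetPixels(x, y, w, h, src.GetPixels());
        }

        atlas.Apply();
        File.WriteAllBytes("Assets/Textures/emissive_atlas.png", atlas.EncodeToPNG());
        AssetDatabase.Refresh();
    }
}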

Now we have a single texture containing a button state in each corner:

emissive-combined-example

This texture can’t be used as-is because the UV mapping needs to be changed in code. This is what happens when you apply the texture in Unity without any UV modifications:

emissive-unity-default

To fix this, the Unity standard shader needs to be modified. Here is how to do that:

-Download the Unity built-in shaders.

-Copy “Standard.shader”, rename it to “StandardShift.shader”, and put it into the project in a folder called Shaders.

-Copy the following files into the same Shaders folder. Note that only “UnityStandardInput.cginc” from this list will be modified, but the other files are needed as well, otherwise it won’t work.

UnityStandardInput.cginc
UnityStandardCore.cginc
UnityStandardCoreForward.cginc
UnityStandardCoreForwardSimple.cginc
UnityStandardMeta.cginc

-Open StandardShift.shader.
-Modify the line —–Shader “Standard”—– at the beginning of the shader to:

Shader "StandardShift"

-Place this code below the line —–_DetailNormalMap(“Normal Map”,—–

_EmissionTileOffset("EmissionTileOffset", Vector) = (1,1,0,0)

Note: because the programmers at WordPress think it is a good idea to change the quote format (“), you might not be able to find a line of code using copy-paste-search.  Just search for a single word instead.

-Open UnityStandardInput.cginc.
-Place this code just below the line —–sampler2D  _EmissionMap;—–

half4   _EmissionTileOffset;

-Search for this function:
—–half3 Emission(float2 uv)—–
-Place this code just above the line —–return tex2D(…—–

uv.x *= _EmissionTileOffset.x;
uv.y *= _EmissionTileOffset.y;
uv.x += _EmissionTileOffset.z;
uv.y += _EmissionTileOffset.w;

-Create a material and set the shader to StandardShift.
-Add the material to an object.
-Place the texture with the 4 button state in the emissive slot.
-Create a script and add some code to change the button state using SetVector(). Here is an example:

//The vector format is: Tile X, Tile Y, Offset X, Offset Y
Renderer rend = GetComponent<Renderer>();
//Bottom left.
rend.material.SetVector("_EmissionTileOffset", new Vector4(0.5f, 0.5f, 0f, 0f));
//Bottom right.
rend.material.SetVector("_EmissionTileOffset", new Vector4(0.5f, 0.5f, 0.5f, 0f));
//Top left.
rend.material.SetVector("_EmissionTileOffset", new Vector4(0.5f, 0.5f, 0f, 0.5f));
//Top right.
rend.material.SetVector("_EmissionTileOffset", new Vector4(0.5f, 0.5f, 0.5f, 0.5f));
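
For completeness, a full state-cycling script could look roughly like this. It is a sketch only; the space bar binding and class name are arbitrary:

//Sketch of a script that cycles through the four button states.
using UnityEngine;

public class ButtonStateCycler : MonoBehaviour
{
    //Tile X, Tile Y, Offset X, Offset Y for: bottom left, bottom right, top left, top right.
    static readonly Vector4[] states =
    {
        new Vector4(0.5f, 0.5f, 0f, 0f),
        new Vector4(0.5f, 0.5f, 0.5f, 0f),
        new Vector4(0.5f, 0.5f, 0f, 0.5f),
        new Vector4(0.5f, 0.5f, 0.5f, 0.5f)
    };

    int current;
    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        //Advance to the next state when the space bar is pressed.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            current = (current + 1) % states.Length;
            rend.material.SetVector("_EmissionTileOffset", states[current]);
        }
    }
}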

Now we can cycle through the different button states using a script in Unity. Here is the result:

If it doesn’t work, make sure all required files are copied to the Shaders folder. Then go to Unity->Assets->Reimport All. After that, select the StandardShift shader -> Inspector -> compile and show code.

Edit:
Added a fix so it now works in Unity 5.5+; tested with Unity 2017.3.