Machine Learning and the future of design

I have been pondering for a while whether I should write this blog post. I don’t like to make predictions about the future, because if you look back at predictions made 30+ years ago, most of them look silly. However, I really think what I am going to talk about will happen. I don’t know when, but it is inevitable. And it will call into question the very essence of what it means to be human.

This vision of the future took shape after I finished the A320 cockpit project, when I thought it was a good time to reflect on the lessons learned and how to do things differently next time. In short, I spent many, many hours measuring the cockpit, taking thousands of photos, learning how to do CAD design, learning a new scripting language for CAD conversion, and learning how to do texturing. Granted, I already knew a little CAD and how to code, but suffice it to say, it was a lot of work just to replicate an existing design. And all of this covers only the 3D model; it doesn’t even scratch the surface of the systems logic.

Currently there is no way around this process of replicating a design in software: someone has to do the hard work. Of course it is less work if you don’t have to learn new skills along the way, but it still requires manual labor. Lots of it. Which raises the question: can’t this be automated?

There was a time, not so long ago, when people performed tasks which have since become obsolete due to the advance of technology. A telephone switchboard operator is a good example, but so are several jobs in the cockpit: the radio operator, the navigator, and the flight engineer. All of these jobs were relatively simple, though. A computer could do the same job.

Reconstructing an entire cockpit, with all of its related complexity, is a different story. This can never be automated. Or can it? Let’s have a look at the current situation in 2017 (this is going to be fun to read 30 years from now).

There are 3D scanners capable of scanning objects at micron resolution. They output a point cloud with color information, and specialized software can turn this into a textured mesh. Sounds great, but the result is not usable for real-time rendering. A lot of manual work is still required: cleaning up the mesh, stitching separate scans together, reducing the poly count, and baking the high-res model data into textures. Not to mention that one of these scanners costs as much as a sports car, and that they can’t even scan reflective surfaces.
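
To make that manual pipeline concrete, here is a minimal sketch using the open-source Open3D library. The file names and parameter values are invented for illustration, and a real cockpit scan would need far more cleanup than this:

```python
import open3d as o3d  # pip install open3d

# Load two overlapping partial scans (file names are hypothetical).
source = o3d.io.read_point_cloud("scan_left.ply")
target = o3d.io.read_point_cloud("scan_right.ply")
source.estimate_normals()
target.estimate_normals()

# "Stitching": refine the alignment of the two scans with point-to-plane ICP.
p2l = o3d.pipelines.registration.TransformationEstimationPointToPlane()
icp = o3d.pipelines.registration.registration_icp(
    source, target, 0.005, estimation_method=p2l)
source.transform(icp.transformation)
merged = source + target

# Surface reconstruction: turn the point cloud into a triangle mesh.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    merged, depth=10)

# Reduce the poly count to something a real-time renderer can handle.
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("panel.obj", mesh)
```

Even this toy version glosses over the hard parts: hole filling, texture baking, and deciding which detail belongs in geometry versus in the normal map.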

So, back to the original question: can this be automated? Not currently. But in the future it definitely can. At some point, scanners will be available which can scan any type of surface, using a variety of techniques under different lighting conditions at the same time: for example, a camera, a laser-projected grid, regular lights, and high-resolution radar. They will capture all surface properties, such as albedo, metallic, roughness, normal, opacity, and anisotropic/microfacet structure.
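
Those property names correspond to the texture set of a standard physically based rendering (PBR) material. As a small sketch of how such a scanner’s output for one surface might be bundled (the class and file names are made up):

```python
from dataclasses import dataclass

@dataclass
class ScannedSurface:
    """One captured surface as a physically based rendering texture set."""
    albedo: str      # base color, free of lighting and shadows
    metallic: str    # 0 = dielectric (plastic), 1 = metal
    roughness: str   # microfacet roughness: mirror-like vs. matte
    normal: str      # tangent-space normal map for fine surface detail
    opacity: str     # transparency, e.g. for display glass
    anisotropy: str  # directional microfacet structure (brushed metal)

# Hypothetical scanner output for one cockpit panel.
panel = ScannedSurface(
    albedo="panel_albedo.png", metallic="panel_metallic.png",
    roughness="panel_roughness.png", normal="panel_normal.png",
    opacity="panel_opacity.png", anisotropy="panel_aniso.png",
)
```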

But the scanner is not what I am here to talk about. Even the perfect 3D-acquisition scanner would still require a lot of manual work to make the data suitable for real-time rendering. The fact that computers will be more powerful in the future is not the answer to overly complex or inefficient data; that is a bad use of resources which are better spent elsewhere.

So even with the perfect scanner, there will be one missing link: Artificial Intelligence. In order to turn scan data into something usable, the data has to be seen in context. It has to be compared with design drawings, millions of photos, and videos of moving parts. On top of that, the AI has to be able to take user input when it gets something wrong. A simple voice instruction such as “this surface should be perfectly flat, and that edge should have a 1 mm chamfer” should be easily understood and implemented. It should then automatically understand that the same user-applied rule applies to all similar geometry, ask for confirmation, and execute. It should know the strengths and weaknesses of current rendering hardware and create textures, meshes, and animations optimized for the hardware available at the time.
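
The “apply it everywhere, but ask first” behavior is the interesting part. Here is a toy sketch of that control flow; everything in it, including the trivial geometry model, is invented for illustration:

```python
# Toy model: each part has a kind; "applying" a rule just records it.
parts = [
    {"id": 1, "kind": "edge"},
    {"id": 2, "kind": "edge"},
    {"id": 3, "kind": "surface"},
]

def apply_rule(part, rule):
    part["rule"] = rule  # stand-in for real geometry editing

def confirm(prompt):
    return input(prompt + " [y/n] ") == "y"

def handle_instruction(target, rule):
    apply_rule(target, rule)  # fix the part the user pointed at
    # Generalize: find all other parts of the same kind...
    similar = [p for p in parts if p["kind"] == target["kind"] and p is not target]
    # ...and ask for confirmation before touching them.
    if similar and confirm(f"Apply ‘{rule}’ to {len(similar)} similar parts?"):
        for p in similar:
            apply_rule(p, rule)

handle_instruction(parts[0], "1 mm chamfer")
```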

Now that we have Artificial Intelligence capable of understanding real-world physical objects, we can take it one step further: let the AI read all aircraft manuals, such as the FCOM, FCTM, AFM, and maintenance manuals. Of course, having access to the original design documents from the aircraft manufacturer would be nice, but let’s not be too optimistic; that will never happen. Not to worry though. Reverse-engineering AI to the rescue.

Once our deep-neural-network AI can read all available aircraft documentation, it should be able to build a solid understanding of the systems logic and the aircraft’s capabilities. Feeding it thousands of hours of Level-D simulator video data will further improve the result. The AI should be able to ask a human questions when things are unclear or contradictory, and it should generate a test environment where the systems can be tried out, taking corrective input in the form of voice instructions.

There will also be no more need for actual flight test data for the flight dynamics model. With scan data of the aircraft exterior, AI can derive the flight dynamics model using computational fluid dynamics. The only hard part is working out the moments of inertia, because that requires knowing the location, size, and weight distribution of every single part in the aircraft. It is unlikely this kind of information is publicly available, and scanning for it would require a fleet of nano-bots. But the AI can take cockpit flight video data which includes aircraft weight, CG, and sidestick position, make comparisons, and produce a good-enough estimate.
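
To make the weight-distribution problem concrete: a moment of inertia is just a mass-weighted sum of squared distances from the rotation axis, so every part’s mass and position matters. A back-of-the-envelope sketch, where all parts and numbers are invented:

```python
# Roll-axis moment of inertia: I_xx = sum of m * (y^2 + z^2),
# with x the roll axis and offsets measured from the center of gravity.
parts = [
    # (name, mass in kg, y offset in m, z offset in m)
    ("left engine",      2300.0, -5.7, -1.2),
    ("right engine",     2300.0,  5.7, -1.2),
    ("fuel, left wing",  6000.0, -7.5, -0.4),
    ("fuel, right wing", 6000.0,  7.5, -0.4),
]

i_xx = sum(m * (y**2 + z**2) for _, m, y, z in parts)
print(f"Estimated roll moment of inertia: {i_xx:,.0f} kg·m²")
```

A real estimate would need thousands of such entries, which is exactly why the mass distribution is the hard part.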

It should be mentioned that the type of AI I am talking about requires an obscene amount of computing power by today’s standards. To put it into perspective: in 2017, you need the latest, most expensive desktop hardware just to teach an AI to recognize a cat in an image. The most expensive cloud-based voice recognition system cannot recognize the phrase “Flaps 1” without a contextual setting. We are very far from achieving the ultra-deep neural network speed our AI needs for this type of machine learning. At the current rate of hardware advancement it is going to take too long; it requires a new type of computing technology. Perhaps based on light, or on electrons structured like a nano-scale neural network, able to reconfigure itself.

Whatever hardware may be developed in the future, one thing is certain: AI will come. In fact, it is already here, albeit not so “intelligent”, and most definitely not self-aware. And when the day of capable AI comes, many jobs will disappear. If you are doing repetitive tasks which require no original input, your job will be the first to go. But even jobs requiring ingenuity will eventually disappear, because that just requires more computing power. Which brings me to the next question: what does it mean to be human?

It may seem like a rather strange question to ask; we are talking about technology, after all. But it is not strange at all, because it will affect you if you are young enough to see the singularity happen. And it is not all doom and dystopia. Quite the opposite. Do you really want to be that telephone switchboard operator? Work as a cashier? Fly that plane? Maybe yes, just for the fun of it, or for the social interaction. But probably not.

If AI and its physical extension (robotic-bionic technology) can replace most of our jobs, wouldn’t the world economy collapse and the poverty gap widen even more? Not at all. Why do you have a job in the first place? To make money. Why make money? So you can do the things you enjoy. What if everything you need can be made by AI? Software in particular will be better, instantly customizable, and bug-free. But hardware created by AI will be better too, because it has no flaws. Even if a 3D printer can’t print an apple, you can still have a farming bot. You will have everything, and every service, you need. Money will be meaningless because there are no goods or services to trade. This economy cannot crash because there is no economy, just as there was no economy 60,000 years ago, when people had everything they needed.

So if a piece of software living in a bot can do everything a human can, what does it mean to be human? You might say: but I am creative, I can think of something out of thin air, something which did not exist yet, and surely a machine can’t do that. Actually, it can. And it can do it better, because it can either take a random seed and start to create from there, or try every possible configuration and come up with something new. Given enough computing power, AI can be much, much more creative than humans.

So what is the difference between you and AI? That depends on what goes on in your mind and what you do with it. After the argument of ingenuity fails, the word consciousness quickly comes up. But if all consciousness means is being aware of yourself, AI can be aware of itself too. That is not so hard. Perhaps AI doesn’t act on its own and always follows instructions? Well, let’s hope that is true, because if it isn’t, things could get complicated. How about feeling? Can AI feel? That depends on how you define feeling. If it is the physical sensation you get when you are in love, or your conscience telling you not to steal that cookie, then AI can have that too. It could be just a parallel program running in the background, outside of the main loop, out of reach.

I think that, given enough computing power, AI can definitely be everything a human is, and more. It could help humanity by freeing us from the economic vicious circle and allowing nature to recover. But it could also become self-aware and get a mind of its own. And if every computing device in the world is hooked up to the internet and contributing to its computing power, it is not so easy to shut off, either.

I guess the real question we should be asking is not “can AI think like a human?” but “can a human stop thinking like AI?”. Are you really conscious, or is that feeling just a program in your mind? Do you do repetitive tasks without thinking much? Is the same dialog running in your mind over and over again?

AI is coming, and there is no stopping it. Conversations like “we should do this to prevent that” are futile, because there is no collective. Look around you. But in order for AI to keep working for you instead of against you, you need to be more than what AI is. That means evolving and staying ahead of what AI is capable of, on a conscious level. Initially there will be resistance when AI takes your job, but in the end it will be better, because you didn’t want that job anyway. And later still, you won’t need a job at all. At some point, AI will have evolved so far that we can have a conversation with it and discover what it really means to be human. At that point, we ourselves will evolve to the next level.
