How A.I. will change the 3D industry - Andrew Price
During my industry research, I found this very interesting video of a talk by Andrew Price (the creator of Blender Guru and Poliigon) about the future of the 3D industry and how A.I. will affect it.
As he says at the beginning, we have always assumed that, unlike manual labour jobs, artists' jobs are safe from computers because machines cannot replicate art or be creative. But lately that has been proven untrue, as he demonstrates in the video.
The premise behind any prediction about future technology is that "any technology that will make things better, faster and cheaper will eventually become a standard" (A. Price, 2018).
The fact is that making AAA games costs more and more money; they are already more expensive to make than a feature film. According to research by Raph Koster, the cost of triple-A games goes up roughly ten times every ten years, and the main cost is making assets. He gave a great example: creating just one building on a street in 3D. Modelling takes approximately 12 hours and texturing roughly the same, for a first-pass total of around 22 hours. Of course, the people in charge are rarely happy with the first outcome, so the building needs at least two or three revisions, bringing the total to about 66 hours. At around $60 per hour, that is roughly $3,900 for a single building. In the second picture, we can see the approximate cost of everything in one street scene in a game. The total of $200,000 is already crazy, and that is just one scene.
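The per-building arithmetic from the talk can be laid out as a quick sketch. The hours, revision count and hourly rate are the talk's ballpark figures, not real studio data; note that 66 hours at exactly $60/hour works out to $3,960, close to the roughly $3,900 the talk quotes.

```python
# Rough cost sketch for one building, using the talk's ballpark figures.
modelling_hours = 12
texturing_hours = 10                              # "basically the same time" as modelling
first_pass = modelling_hours + texturing_hours    # ~22 hours for the first pass
passes = 3                                        # first pass plus two revisions
total_hours = first_pass * passes                 # ~66 hours for one building
rate = 60                                         # dollars per hour (approximate)
cost = total_hours * rate
print(total_hours, cost)                          # 66 3960 -- close to the talk's ~$3,900
```

Multiply that by every building, prop and surface in a street scene and the $200,000-per-scene figure stops looking surprising.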
The problem with this is the static workflow: artists have to create each asset manually, at a 1:1 ratio of work to output. Of course, you can take that building and alter it a little, but that still takes hours of work and re-texturing. What Andrew showed us in the video is the future of 3D modelling: procedural workflows. It is already happening, not some made-up sci-fi theory. We already have software like Houdini, where you give it the parameters and information it needs and it generates different buildings. This will become standard, the same way it happened with texturing. Texturing used to be similar to what we did in the Maya modelling module: using photo textures or hand-painted textures made specifically for one mesh, and altering them so they are not repetitive. Then artists were introduced to Substance Designer, which uses digital, procedural textures, making it very easy to generate variations; you create a material once and it works anywhere you apply it. We also know Substance Painter, where you easily apply the materials made in Substance Designer. These procedural materials are huge money savers, and they are now the industry standard. The same thing will happen to modelling.
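The core idea of a procedural workflow is that you build one parametric recipe instead of one asset. A minimal toy sketch of that idea, with made-up parameters (real tools like Houdini operate on actual geometry, not dictionaries):

```python
import random

def generate_building(seed, min_floors=3, max_floors=12,
                      styles=("brick", "concrete", "glass")):
    """Toy procedural generator: one recipe, many distinct buildings.
    Illustrative only -- parameters here are invented for the example."""
    rng = random.Random(seed)  # same seed always yields the same building
    return {
        "floors": rng.randint(min_floors, max_floors),
        "width_m": round(rng.uniform(8.0, 20.0), 1),
        "style": rng.choice(styles),
        "has_balconies": rng.random() < 0.5,
    }

# One parameter set, a whole street of unique buildings -- no manual 1:1 work:
street = [generate_building(seed) for seed in range(10)]
```

The point is the ratio: the artist's effort goes into the recipe once, and variations are essentially free afterwards.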
Houdini also enables procedural level design. Instead of placing every element by hand, which takes a very long time and has to be redone every time something changes, in Houdini you create an ecosystem on a small area, as we can see in the top-left picture. You just set rules for the ecosystem: for example, you define where big and small trees grow, what foliage sits under the trees, what grows near a lake when there is one, and so on. It then generates the whole environment, which updates automatically with any change you make, such as adding roads or buildings, and there are easy tools for making those too.
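The "set rules, not placements" idea can be shown with a toy scatter function. Everything here (the rules, the distances, the species names) is invented for illustration; the point is only that moving the lake regenerates the whole layout with no manual re-placement.

```python
import random

def scatter_ecosystem(width, height, lake_x, count=200, seed=0):
    """Toy rule-based scatter: each plant's type is chosen by simple
    rules about its position, instead of being placed by hand."""
    rng = random.Random(seed)
    items = []
    for _ in range(count):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if abs(x - lake_x) < 10:          # rule: reeds grow near the water
            kind = "reeds"
        elif y > height * 0.6:            # rule: big trees on the high ground
            kind = "big_tree"
        else:                             # rule: mixed low foliage elsewhere
            kind = rng.choice(["small_tree", "shrub", "grass"])
        items.append((kind, round(x, 1), round(y, 1)))
    return items

# Move the lake and the environment rebuilds itself from the same rules:
before = scatter_ecosystem(100, 100, lake_x=20)
after = scatter_ecosystem(100, 100, lake_x=80)
```

In a real tool the rules would drive geometry instancing, but the workflow shift is the same: you edit the rules, not two hundred individual trees.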
So the prediction is that procedural workflows will become standard in modelling, material creation, texturing and world building. They are fast and intelligent, and they take away hours of manual labour that can be frustrating.
The second thing that is going to happen is what he called machine creep. He compares how machine learning works differently from traditional software; the key advantage is that machines now have the ability to learn and improve. The reason we are not using it everywhere yet is that machine learning requires huge datasets and fast hardware, which is still very expensive, but this will improve over time. I still remember how, a few years ago, my dad got a 50 MB USB stick and we were all amazed, and how I said that if my phone had a 2 GB memory card I would be happy with it. As the years go by, data needs more and more space as files get bigger and higher quality; the bigger the resolution, the more space it needs. We went from HD to 8K in a few years, so the huge datasets and faster hardware needed for machine learning will be here in a couple of years.
Machine learning can be used for denoising, which lets us render things faster. The faster you render (i.e. the fewer samples you compute), the more noise you get, but if you use A.I. to clean up that noise, you get a clear image with almost the same quality as the full render. Andrew tested it on his 3D scene of a kitchen: he rendered it at 50% size, then used A.I. up-resing to bring it back to the original size, and the render was 380% faster with basically the same quality as rendering at 100% scale.
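The size of that speedup makes sense once you remember that pixel count scales with area, not with edge length. A quick back-of-the-envelope check (the 380% figure is Andrew's measurement; the resolution here is just an example):

```python
# Rendering at half resolution means a quarter of the pixels.
full_w, full_h = 1920, 1080      # example resolution, not from the talk
scale = 0.5
small_pixels = (full_w * scale) * (full_h * scale)
ratio = (full_w * full_h) / small_pixels
print(ratio)  # 4.0 -> up to ~400% faster in theory; Andrew measured ~380%,
              # the gap presumably being the cost of the A.I. up-res pass itself
```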
Another use for machine creep is in mo-cap. As we can see, A.I. is already able to detect where people are and track them, so motion-capture suits will not be needed in a few years. They tried it on various videos with complicated backgrounds and it all worked; the A.I. was able to infer all the shapes and movements.
In another test, they filmed a dog in a mo-cap suit for a certain period of time; the A.I. learned how it moves, and they were then able to reuse any of those movements to animate the dog with just simple controls.
So his second prediction is that machine learning will become standard in software for denoising, up-resing, mo-cap and animation once we get fast hardware and huge datasets. I think it will make a massive difference, especially in mo-cap and animation, where we will just need to film how people and animals move and the A.I. will infer the motion, rig it and give us simple controls to move it around. That sounds like a proper crazy sci-fi theory, but we can see it has already been used.
And finally, the third leap he predicts is machine-assisted creativity. As I mentioned at the beginning, we all thought machines could not replace creativity. But as we can see in the picture below, all the computer needs is a picture of a thing and its outline, and it generates many different samples of that thing.
Here the input was the basic layout of buildings, and it generated realistic-looking houses.
It can also take a photo of an environment and create a new environment based on it, but with completely different light or weather conditions.
As we can see, it just needs a scribble and then creates something that, as Andrew said, "is not a finished product but definitely a starting point for concept artists" (A. Price, 2018).
It is also able to generate unique faces based on different real faces, which can be used to create NPCs for games. Or, as we can see in the pictures of bedrooms, which are all computer generated, you can just give the computer a picture of a bedroom as a moodboard and it creates many different ideas for you.
Something that sounds even crazier is describing an idea to the computer: the A.I. goes through its whole database, for example all the birds it knows, and creates unique birds based on the descriptions you give it. None of them is real, so it essentially invents new species of birds.
This A.I. is so good at style transfer, where you give it a painter's style and a photo and it creates "a painting" from that photo, that it fooled 39% of art historians into thinking they were looking at an actual painting. It can do the same thing with video (the horse), transferring it into an animation of any style we want.
So the last prediction Andrew gives us is that artists will use machine learning to explore new ideas, which will also make it easier for people who cannot draw but have ideas to show them to artists.
In conclusion, the changes he expects over the next five years are that procedural workflows will become standard, making work easier and faster; machine learning will replace many technical jobs; and A.I. creative assistance will help us generate ideas.
The question is: are we being replaced? Will artists no longer be needed in a few years because machines can be creative, generate ideas and do it all faster than we can? Andrew used a very good example: in the '90s, people thought it was the end of chess when a computer defeated a human for the first time. But it was not the end of chess. First, players were allowed to use the whole chess database when playing against a computer, to make it even, because the computer can use it too. And, most importantly, the number of chess grandmasters has doubled since then.
So no, it is not going to make artists redundant. The machines will still need humans to give them intent and then to decide what looks good and what gets used. In the picture below we can see which jobs are at risk and which are not. The fact is that actual art jobs are safe; jobs that need critical thinking will still be needed. The jobs that are going to be replaced by A.I. are the labour-intensive, repetitive ones that nobody really enjoys, like retopology, mesh cleanup, mo-cap cleanup or rotoscoping. Or even "make 100 of these" tasks, which involve idea generation but also spending real time making a hundred things only to use one. And even so, as we can see in the pictures below, the forecast is that the game art and 3D industry will double within five years and quadruple by 2025. That means 25.5% growth per year until 2025. We can simply expect that the boring, manual-labour part of our creative jobs will get easier.
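As a rough sanity check on how that growth rate compounds (the forecast's figures are ballpark and do not all line up exactly, so treat this as arithmetic, not a correction):

```python
# What does 25.5% growth per year compound to over a few years?
rate = 0.255
for years in (3, 5, 7):
    print(years, round((1 + rate) ** years, 2))
# ~1.98x after 3 years, ~3.11x after 5, ~4.9x after 7 --
# broadly consistent with "double in a few years, quadruple by 2025"
```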
References
ArtStation. (2018). Andrew Price. [online] Available at: https://www.artstation.com/andrewprice.
Hacker Noon. (2018). How Machine Learning Developed the Face of MCU’s Thanos. [online] Available at: https://hackernoon.com/how-machine-learning-developed-the-face-of-mcus-thanos-82f98ef4f381.
Raph's Website. (2018). The cost of games. [online] Available at: https://www.raphkoster.com/2018/01/17/the-cost-of-games/.
Sidefx.com. (2018). Introduction to Houdini. [online] Available at: http://www.sidefx.com/docs/houdini/basics/intro.html.
YouTube. (2018). The Next Leap: How A.I. will change the 3D industry - Andrew Price. [online] Available at: https://www.youtube.com/watch?v=FlgLxSLsYWQ.