Procedural Generation in games
Specifically Terrain Generation (sort of)
I want to start this off by saying that I am not a professional in the industry (I'm a college student who is loosely studying this for my minor, so I'm not even an amateur yet), so take all of this with a few fistfuls of salt (and maybe a teaspoon of sugar). Hopefully, though, after all of this is said and done, you will have learned something new. I will have sources for some of the topics I mention, but a lot of it might be me speculating on the topic or waxing poetic.
Anyway, let's get back to the main topic...
As environments in games get larger and more diverse, developers have had to start pulling more and more tricks out of their magical little bags in order to make a gaming experience that is fresh and enticing. One of these "tricks", which could allow for potentially infinite worlds/replayability, is procedural generation. The reason I say "potentially infinite worlds/replayability" is that, in the end, it's up to the game developer to set the limits of what they can/want to do and what they can't/don't want to do. This will hopefully make more sense later on when I am feeling less cryptic.
Finally, Procedural Generation, What is it?
Simply put, procedural generation (specifically in video games) is the use of a procedure (usually some sort of algorithm or process) to generate different elements of an environment. This description is sort of vague, but that's on purpose: procedural generation can be applied to so many different elements in a game that saying anything more specific would start to leave things out. An example of what I'm talking about could be the generation of landscapes for a game (a pretty common use). Games like Minecraft, Terraria, and No Man's Sky apply procedural terrain generation to great effect, but they apply the principle to more than just the terrain. They use it to generate things like structures, textures, and even object stats (tools in the case of No Man's Sky).
[Image: Minecraft]
[Image: Terraria]
[Image: No Man's Sky]
Each of these games looks vastly different and has vastly different gameplay mechanics, but each of them uses procedural generation on various levels to build its world. A major point I want to get across here is that the game developers didn't handcraft each of these worlds directly (that would take a few lifetimes for some of these games). Instead, each one used a procedure to generate these worlds, cutting down the time it took to make these environments from lifetimes to however long it took to code the "procedure" and run it on a computer.
One last note...
These "elements" I talk about can be any part or aspect of a game you want them to be: landscape, structure formation (how the house is built), structure placement (where the house is built), object/entity placement, object/entity models, object/entity stats, textures, music (good luck on that one), game objectives, etc. You could probably combine all of them together and procedurally generate a whole game if you wanted (please don't do this; it would put me out of a potential job).
Why Use it?
Usually, you would use procedural generation because the game you're making would benefit from it. "It" being a potentially infinite/diverse gameplay experience that can change from instance to instance. To elaborate with an extreme example, the alternative to an entirely procedurally generated environment is a handcrafted environment where the developer has complete control over every element. This isn't a bad thing, in fact, there are a lot of benefits to these sorts of "handcrafted" environments.
(I am about to make a lot of generalizations so try not to focus too hard on these points. For every point I make there are probably hundreds of examples where I am proven wrong)
Handcrafted environments usually have the potential to be a lot prettier as well as more functional (at the same time) than generated ones, and if you stick with me for a minute, I hope I can explain my thinking behind this. Handcrafted environments are usually able to do this because they are purpose-built for the games they belong to. If you want a directed game experience (setting, mood, themes, plot), then you will need to start getting hands-on with the environment. That's not to say that you can't generate a similar experience (No Man's Sky recently added some generated horror elements that are pretty nifty), but if you start to restrict the "procedure" you're using in order to get the same level of control you would have over a handcrafted environment, then you start to lose the benefits of using a generated one (a diverse and dynamically created environment/gameplay). A lot of the best environments in games are handcrafted, since they are able to serve a specific purpose and lend themselves in their entirety to their games.
That's not to say that handcrafted environments are perfect either. Since they require a developer to develop them... they are usually more time-consuming to produce, and because they are more time-consuming, they usually end up limited in their space/scale. If the environment is too big, then it's hard to give it the same level of attention to detail that you could give a smaller handcrafted environment. That's not to say that you can't make a huge handcrafted environment (just think of the studios that bank on their large-scale, detailed environments: Rockstar, Bethesda, Ubisoft, etc.), but it is indeed harder when you scale things up.
The flipside to all of this is a procedurally generated environment. As mentioned before, a procedurally generated environment can be created as fast as you can generate it, and since it's generated from scratch, it can be potentially infinite in its scale and diversity. When I say infinite, I really do mean it, too. Games like Minecraft make claims of "infinite worlds" (not actually infinite, since hardware/software limitations come into play) that can go on forever because they use a functional algorithm to generate the world (it will generate as far as you can go). Games like No Man's Sky use procedural generation to make game worlds the size of a galaxy (they don't need to store each and every planet, just the method for generating them). And when I say diversity, I'm not just talking about the potential for an environment that never repeats; I am also talking about the fact that (depending on your implementation), with some minor tweaks to a seed or some variables, you can generate entirely different elements. This sounds magical (and in some ways it is), but there are still some cons to think about when it comes to procedural generation.
(Gamasutra has a great article talking about this sort of stuff as it relates to a game called Overland.)
Some of the downsides to procedural generation are direct counterpoints to the benefits of handcrafted environments. For example, where it might be easy to tweak specific elements of a handcrafted environment (the location of a tree, for example), it would be a lot harder to tweak something similar in a generated environment without having that change reflected somewhere else. You essentially have an extra layer of abstraction to deal with, since you are managing a process/algorithm instead. Going hand in hand with this is the inability to fine-tune a gameplay experience. I'm not talking about the overarching gameplay experience (the experiences in games like Minecraft and Terraria are very fine-tuned) but rather the moment-to-moment experience (scene 1 to scene 2, plot-dependent stuff). After all, how can you finely direct an experience if it changes each time? The Gamasutra article mentioned above, "The pros and cons of procedural generation in Overland", brings up the lack of control you might have over things like difficulty (one run might get a super easy experience where everything is accidentally handed to the player, while another might get nothing but bad luck that ruins the experience); it's potentially a flip of the coin.
With all of this said (I hope you are still here) there are exceptions to everything (especially the points I made above). I didn't once mention the fact that you can very easily have handcrafted elements inside of a procedurally generated environment or vice versa. A beautiful thing about life and game development is that nothing needs to be black and white. You can take the best parts of a handcrafted environment and shove some procedurally generated stuff in between. By doing this you can get a vast world that looks diverse and have key points or moments highlighted by a handcrafted environment that sets the mood when you need it.
Methods (Terrain Generation)
(Skip to here if you want to learn about procedural terrain generation)
Up until this point, I have been talking pretty generally/abstractly about procedural generation (the what/why of procedural generation as I know it), but now it's time to talk about terrain generation, its application in games, and some of its downsides too.
This section leans pretty heavily on a blog post from Runevision, a blog by Rune Skovbo Johansen that talks about some great approaches to terrain generation and how they apply to games. Rune talks about three methods for terrain generation (simulated, functional, and planned). I think these concepts do a great job of generalizing the idea of terrain generation while also covering some of the downsides. After that, I'll talk about how I implemented some procedural terrain generation.
Functional
The first method that Rune talks about in his post is the simulated method of generating terrain, but I wanted to start with the functional method first since (A) it's what I used to generate terrain, and (B) it can be used as a foundation for a lot of other methods.
The primary focus of the functional method is to use a function to generate data. This data can apply directly to how you generate your terrain; an example of that might be a function that returns an index into a tileset based on some x/y coordinates, or it can be used to generate a value that modifies a ground mesh (that is what I did). These functions can be whatever you want them to be, but you usually want them to generate data that is diverse (no clear repeats) and in some way continuous (smooth transitions between the randomness). An example that is mentioned in Rune's blog, and quite well known for these sorts of applications, is Perlin noise. It's a type of noise that, while random like white noise, has some continuity between the random values.
[Image: Normal Perlin noise rendered on a sphere]
As you can see, this Perlin noise is quite random, but since it's the result of a function, you can actually stack it with "finer" (noisier) Perlin noise to get more detail out of it.
[Image: Layered Perlin noise rendered on a sphere]
Now isn't that prettier? Coincidentally, it also produces better-looking terrain (but I'll talk about that later).
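If you want to play with this idea yourself, here is a rough sketch of layered noise in plain Python. I'm using value noise (a simpler cousin of Perlin noise that's easier to write from scratch) instead of real Perlin noise, and the hash constants and octave counts are just ones I picked, so treat it as an illustration rather than the real thing:

```python
import math

def hash2(ix, iy, seed=0):
    """Deterministic pseudo-random value in [0, 1] for an integer grid point."""
    h = (ix * 374761393 + iy * 668265263 + seed * 2147483647) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def smoothstep(t):
    """Ease the interpolation so the noise has no visible grid creases."""
    return t * t * (3 - 2 * t)

def value_noise(x, y, seed=0):
    """Smooth noise: bilinear interpolation between hashed grid corners."""
    ix, iy = math.floor(x), math.floor(y)
    sx, sy = smoothstep(x - ix), smoothstep(y - iy)
    a = hash2(ix, iy, seed)
    b = hash2(ix + 1, iy, seed)
    c = hash2(ix, iy + 1, seed)
    d = hash2(ix + 1, iy + 1, seed)
    top = a + (b - a) * sx
    bot = c + (d - c) * sx
    return top + (bot - top) * sy

def layered_noise(x, y, octaves=4, seed=0):
    """Stack octaves: each layer is twice as 'fine' and half as strong."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency, seed)
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total / norm  # roughly in [0, 1]
```

Each octave doubles the frequency and halves the amplitude, which is exactly the "stack finer noise on top" trick from the two sphere pictures.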
Using this, you can modify the height of points on a mesh to produce some nice terrain, and since it's a mathematical function, it can generate infinite data as well. Depending on the function you use, you can generate terrain in real time and infinitely. This makes it good for really any platform and for games that are open-world/free-roam. One thing that Rune mentions in his blog while talking about terrain generation in general is that the generated terrain isn't always traversable, so it's well suited for games like Minecraft (where you can break through a mountain if it's in your way) but not so much for basically any game where you can't "make a way". One complaint you might have about this terrain is its unrealistic nature, but rest assured, that is where the second method comes into play.
Simulated
As I mentioned before, this is the first method that Rune talks about in his blog. The goal of this method is to take a piece of terrain (using a functional base is common, from what I have seen) and simulate the natural effects of weather/erosion on it (rain, wind, glaciers, floods; if you can simulate it, you can put it here). As you can imagine, this can be a pretty demanding process (it took a lot of years to make the Earth as scraggly and weathered as it is), so simulating it takes a lot of "steps". Since it's a multi-step process, it's not really good for real-time applications, but the results it produces look as close to real life as you can get. This method would be good for open-world games, but not ones where you want infinite scale in real time. You would probably want to use it in small-scale applications that can take advantage of load times to simulate the terrain (or maybe even simulate one piece of large terrain once and use it for the whole game). Some functional methods emulate simulated terrain, but it's just that, an emulation. This method also struggles with traversability (natural terrain ain't always so kind, so why would it be kind in a simulation), but as mentioned above, it looks the best when done properly.
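To give a feel for what one of those simulation "steps" might look like, here is a toy thermal-erosion pass in Python (material slumps downhill wherever a slope is too steep). This is my own simplified illustration, not Rune's method or anyone's production code; a real simulation would run thousands of these steps with far more physics:

```python
def thermal_erosion_step(heights, talus=0.5, rate=0.25):
    """One pass over a 2D heightmap: if a cell is too much higher than a
    neighbour (difference above the talus threshold), move a fraction of
    the excess downhill. Total material is conserved."""
    out = [row[:] for row in heights]  # write to a copy, read the original
    rows, cols = len(heights), len(heights[0])
    for y in range(rows):
        for x in range(cols):
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < rows and 0 <= nx < cols:
                    diff = heights[y][x] - heights[ny][nx]
                    if diff > talus:          # slope too steep: slump
                        moved = rate * (diff - talus)
                        out[y][x] -= moved
                        out[ny][nx] += moved
    return out
```

Running this in a loop gradually knocks sharp spikes down into scree slopes, which is one small piece of why simulated terrain ends up looking more natural.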
This is a paper that talks about generating terrain by simulating it (a nifty read). One cool thing that they talk about is using different functional method layers to produce a good base mesh for simulation.
Planned
This is the third "method" that Rune talks about in his blog, the Planned method. I personally think it works best when applied as a design concept for procedural generation and used in tandem with other methods of terrain generation. This method involves using "level design principles" to generate the terrain rather than a function or nature. Depending on how you apply it, it can be the most likely to produce traversable terrain, since your level design principle can be something along the lines of "make a path for the player between these two points and generate hills along the sides". This method is good for similar types of games and, depending on how you implement it, can provide the best structure for directed gameplay of the three; if you keep the gameplay and player in mind, you can even put it in games where you can't "make a way" (amazing). I like to think of this method as creating a set of rules for the algorithms to work around. I ended up using the bones of this method when I generated my terrain (as you will see shortly).
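As a tiny made-up example of such a "level design principle", here is what the rule "make a path for the player between these two points" could look like in code: flatten everything near the segment and blend back into whatever the functional method generated. All of the names and radii here are hypothetical:

```python
import math

def carve_path(heights, start, end, path_h=1.0, hard_r=1.5, soft_r=4.0):
    """Flatten terrain to path_h near the segment start->end, blending back
    to the generated heights between hard_r and soft_r (a 'planned' rule
    layered on top of functional output)."""
    (x0, y0), (x1, y1) = start, end
    seg_len2 = (x1 - x0) ** 2 + (y1 - y0) ** 2 or 1e-9
    out = [row[:] for row in heights]
    for y in range(len(heights)):
        for x in range(len(heights[0])):
            # distance from this cell to the nearest point on the segment
            t = max(0.0, min(1.0, ((x - x0) * (x1 - x0) + (y - y0) * (y1 - y0)) / seg_len2))
            d = math.hypot(x - (x0 + t * (x1 - x0)), y - (y0 + t * (y1 - y0)))
            if d <= hard_r:
                out[y][x] = path_h                    # guaranteed-walkable floor
            elif d < soft_r:
                w = (d - hard_r) / (soft_r - hard_r)  # 0 at path, 1 at terrain
                out[y][x] = path_h * (1 - w) + heights[y][x] * w
    return out
```

The guarantee of a walkable corridor is exactly what a pure functional method can't promise, which is why mixing the two works so nicely.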
Implementation
When I decided to implement some form of terrain generation, I knew from the start that I would end up using the functional method. My computer is pretty great, but I didn't want to (A) program my own version of rain erosion (I tried, I failed), or (B) wait for my results. The functional method allowed me to code my implementation of procedural terrain generation and run it to instantly see my results. On top of the functional method, I used a semi-planned method to make small bodies of water and generate some towers to plop down on my hills.
[Image: The terrain I generated]
I coded this in Blender using the built-in Python scripting tools. I decided to use Blender because it has built-in APIs both for creating/generating Perlin noise and for creating a mesh from data (some of this will have Blender/Python-specific bits, but I'll try to keep it as conceptual as possible while I break down my process).
Step 1: Creation of the Points (Vertices)
I started by creating a Point class, since that is what I would be modifying and meshing together to make my terrain.
The Point class I made is really simple: I gave it X, Y, and Z coordinates in the form of 3 floats and a member function that packs them into a list ([X, Y, Z]) for use with the Blender API.
The next step I took was to create a PointCollection class in order to create and manage the points. It only stored a 2D array of points, but it had functions for getting a point by index as well as rescaling each point by a given amount.
The first step towards actually making the mesh was in the constructor of the point collection class, where it creates a 2D plane of points and stores them in its 2D array.
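Here is a simplified, Blender-free sketch of those two classes (this isn't my exact script, just the shape of it):

```python
class Point:
    """A single vertex; pack() returns the [X, Y, Z] list the Blender API expects."""
    def __init__(self, x, y, z=0.0):
        self.x, self.y, self.z = x, y, z

    def pack(self):
        return [self.x, self.y, self.z]


class PointCollection:
    """A 2D grid of Points, created as a flat plane; a later noise pass sets Z."""
    def __init__(self, width, depth, spacing=1.0):
        self.width, self.depth = width, depth
        self.grid = [[Point(x * spacing, y * spacing)
                      for x in range(width)] for y in range(depth)]

    def get(self, x, y):
        """Fetch a point by its grid index."""
        return self.grid[y][x]

    def rescale(self, factor):
        """Scale every point's X/Y by factor (used to zoom the noise in and out)."""
        for row in self.grid:
            for p in row:
                p.x *= factor
                p.y *= factor
```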
Step 2: Use of the Function
One thing I would like to mention here is that I could have used Blender's Perlin noise function when I created the points and used the returned value to set their world height right then and there (it would have been faster, and it's probably what I would have done if I were seriously making a game). But I created a method that takes a point collection and modifies each point afterward, so I could instead run the points through the function multiple times to get multiple layers of Perlin noise.
The first step towards passing the points through the method, however, was to scale them down. I did this so I could make the large-scale noise for the mountains and then scale back to normal for the small-scale detail noise.
Here I also applied a bit of the Planned method by setting a minimum height for the water.
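Conceptually, the noise pass plus the water floor looks something like this (the points here are plain [X, Y, Z] lists, and noise_fn stands in for Blender's Perlin function; the amplitude, scale, and water level are placeholder numbers):

```python
def apply_noise(points, noise_fn, amplitude, water_level=0.2, scale=0.1):
    """Sample noise at scaled-down coordinates (broad mountains), add it to
    each point's height, then clamp to a minimum 'water' height — the small
    planned rule layered on top of the functional pass."""
    for row in points:
        for p in row:                    # p is an [x, y, z] list
            p[2] += amplitude * noise_fn(p[0] * scale, p[1] * scale)
            p[2] = max(p[2], water_level)  # flat floor becomes the water plane
```

Calling this several times with different amplitudes and scales gives the layered-noise effect from earlier; the clamp is what carves out the flat bodies of water.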
Step 3: Planning it out
So now I have a 3D cloud of points that, when meshed together, will look like some mountains with bodies of water scattered about. But I found these mountains to be barren and plain, so I decided to add some towers to complete the aesthetic.
I took a bare-minimum approach to this, so I started by first grabbing a random point out of my collection.
I made a "climb" method that would take this random point and recursively look at its neighbors for a higher point, climbing to the highest one. Once it was done, I would mark the location as a potential spot for a tower.
Here is where you would want to check that the tower locations don't overlap, because next I went around and flattened each point within a set radius of the tower location. I then smoothed out the edges by doing a weighted average based on the distance from the "hard radius" of the tower and the distance from a "soft radius" slightly further out. The result was a gradual shift from the tower foundation to the originally generated terrain that looked pretty good in the end.
[Image: Example of a tower and the shift from planned to functional terrain generation]
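A simplified version of the climb-and-flatten idea, written against a plain 2D heightmap instead of my point collection (the radii here are placeholder values):

```python
import math

def climb(heights, x, y):
    """Walk uphill to the highest neighbouring cell; repeat until at a local peak."""
    while True:
        best = (x, y)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= ny < len(heights) and 0 <= nx < len(heights[0]):
                    if heights[ny][nx] > heights[best[1]][best[0]]:
                        best = (nx, ny)
        if best == (x, y):
            return x, y              # local peak: a candidate tower spot
        x, y = best

def flatten_for_tower(heights, tx, ty, hard_r=2.0, soft_r=4.0):
    """Flatten to the tower's height inside hard_r, then blend back out to the
    generated terrain between hard_r and soft_r (the weighted-average smoothing)."""
    base = heights[ty][tx]
    for y in range(len(heights)):
        for x in range(len(heights[0])):
            d = math.hypot(x - tx, y - ty)
            if d <= hard_r:
                heights[y][x] = base
            elif d < soft_r:
                w = (d - hard_r) / (soft_r - hard_r)   # 0 at the foundation, 1 at terrain
                heights[y][x] = base * (1 - w) + heights[y][x] * w
```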
Step 4: Mesh and Build
This step is more specific to me, since I used the Blender API to mesh my points and place my towers, but the process could be similar on any platform.
3D models nowadays are made up of thousands (if not millions) of polygons, and each polygon can be broken down into tiny triangles called tris. In order to turn my 3D point cloud into a mesh, I would need to give Blender a list of these tris. Luckily, a tri is just made up of 3 points, so all I needed to do was go through my point collection and make a list of the three points of each tri that existed (2 per square).
[Image: Example of a square broken into tris]
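In code, generating the tri list for a row-major grid of vertices looks something like this (a sketch of the idea, not my exact Blender script):

```python
def grid_tris(width, depth):
    """Split each grid square into two tris. Vertices are assumed to be stored
    row-major, so the vertex at (x, y) has index y * width + x."""
    tris = []
    for y in range(depth - 1):
        for x in range(width - 1):
            i = y * width + x                              # top-left corner of the square
            tris.append((i, i + 1, i + width))             # first tri of the square
            tris.append((i + 1, i + width + 1, i + width)) # second tri of the square
    return tris
```

Each tuple is just three indices into the flat vertex list, which is the format mesh-building APIs generally want.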
After that, I was able to pass the list of points and tris to the Blender API and build the mesh. In order to place the towers, I used the Blender API to select and duplicate an object I labeled "Tower" and placed a copy at every location in the list of tower locations.
Finally, I was able to put a procedurally generated texture on the terrain mesh and call it complete. If I were to apply this method to a video game, I would probably mix it with a system similar to Minecraft's chunk system, where the game loads in the "chunks" of land around the player and only generates new ones if a chunk within a certain radius has never been generated before. Since I used a decently simple functional method to generate my terrain, I could probably do this both infinitely and in real time as the player explored the world.
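I haven't built this part, but a minimal sketch of that chunk-caching idea might look like the following (the chunk size, radius, and all the names here are my own assumptions, not how Minecraft actually does it):

```python
def chunks_around(px, py, radius=1, chunk_size=16):
    """Chunk coordinates within `radius` chunks of the player's position."""
    cx, cy = int(px // chunk_size), int(py // chunk_size)
    return {(cx + dx, cy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}

class ChunkWorld:
    """Generate a chunk only the first time it's needed, then cache it forever."""
    def __init__(self, generate_fn, chunk_size=16):
        self.generate_fn = generate_fn   # functional method: (cx, cy) -> chunk data
        self.chunk_size = chunk_size
        self.loaded = {}

    def update(self, px, py, radius=1):
        """Call every frame (or whenever the player moves between chunks)."""
        for key in chunks_around(px, py, radius, self.chunk_size):
            if key not in self.loaded:            # never generated before
                self.loaded[key] = self.generate_fn(*key)
        return self.loaded
```

Because the functional method is deterministic for a given seed, already-visited chunks don't even need to stay in memory; you can regenerate them on demand and get the exact same terrain back.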
Well, that's all he wrote. (I might come back and edit/update the post if anything new happens)
until next time,
written by Adam Currier
12/18/20