
The post I made last week provoked a number of interesting responses, some of which I expected and some of which I did not. The curious thing about stating a difficulty with a technical problem, especially one stated broadly, is that one often gets a number of suggestions, sometimes quite forcefully stated, packaged with assertions about what one ‘actually’ needs. My reaction to this sort of situation tends to be a combination of genuine gratitude and general trepidation about how forcefully to respond, “no, actually, you’re quite missing the point of what I’m trying to do”.

Folks working in the games industry rarely acknowledge the degree to which their ideas and processes are deeply informed by what a game maker ‘typically’ wants. Meaning: there are problem spaces and value trees that are very much the standard in the way most video games are produced, and when advice is given, the assumption is that one is also following that pattern.

Let’s take for a moment the question of a simulation that has some sort of spatialized quality to it, like an air simulation. It was mentioned to me by about half a dozen sources last week that yes, I could indeed run some sort of GPU simulation of gas, quite fast. This would be a fine suggestion, were it not for the fact that:

– I’m simulating a very large area of space, ~4x30x4 kilometers of twisting, cavernous, non-orthogonal space with complex collision meshes.
– I don’t really care about well… drawing it.

That second point is worth emphasizing, because it speaks to the difference in how I wish to use the simulation data, and the fact that I’m very much fine with an incredibly coarse abstraction of things. Simulations which inform drawn elements of a game are very much a natural fit for the GPU. We ship some data over, parallel-compute the fuck out of it, bind it into some shader (or send it over to some other simulated system such as a GPU particle system or path-traced effect), *dust off hands*. Maybe we sample some point in our simulation at a fairly low tick (as was suggested several times).

This is patently overkill for what I need, however, and ill-suited to the size and topology of the space being simulated, which is why I hadn’t considered GPU-based strategies in the first place.
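For anyone wondering what “incredibly coarse” means in practice, here’s the flavor of thing I’m gesturing at: a sparse grid of large cells, ticked a few times a second on the CPU, queried by gameplay code rather than drawn. This is a made-up sketch, not code from the project, and real cell connectivity would obviously be gated by the collision/portal data rather than naive neighbor adjacency:

```cpp
// Illustrative sketch: a coarse, CPU-side gas field over large cells,
// stored sparsely so a mostly-empty ~4x30x4 km volume costs almost nothing.
#include <cmath>
#include <cstdint>
#include <unordered_map>

struct CellKey {
    int32_t x, y, z;
    bool operator==(const CellKey& o) const { return x == o.x && y == o.y && z == o.z; }
};
struct CellKeyHash {
    size_t operator()(const CellKey& k) const {
        // Simple spatial hash; good enough for a sketch.
        return (size_t)k.x * 73856093u ^ (size_t)k.y * 19349663u ^ (size_t)k.z * 83492791u;
    }
};

class CoarseGasField {
public:
    explicit CoarseGasField(float cellSizeMeters) : cellSize(cellSizeMeters) {}

    void Inject(float wx, float wy, float wz, float amount) {
        cells[KeyFor(wx, wy, wz)] += amount;
    }

    // Gas concentration at a world position; zero where nothing has been simulated.
    float Sample(float wx, float wy, float wz) const {
        auto it = cells.find(KeyFor(wx, wy, wz));
        return it != cells.end() ? it->second : 0.0f;
    }

    // One low-rate tick: leak a fraction of each cell into its six neighbours.
    // (Connectivity against the level geometry is omitted here.)
    void Step(float leakFraction) {
        std::unordered_map<CellKey, float, CellKeyHash> next = cells;
        const int offsets[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        for (const auto& [key, value] : cells) {
            float outflow = value * leakFraction / 6.0f;
            for (const auto& o : offsets) {
                next[{key.x + o[0], key.y + o[1], key.z + o[2]}] += outflow;
                next[key] -= outflow;
            }
        }
        cells = std::move(next);
    }

private:
    CellKey KeyFor(float wx, float wy, float wz) const {
        return { (int32_t)std::floor(wx / cellSize),
                 (int32_t)std::floor(wy / cellSize),
                 (int32_t)std::floor(wz / cellSize) };
    }
    float cellSize;
    std::unordered_map<CellKey, float, CellKeyHash> cells;
};
```

With cells tens of meters on a side and a tick measured in fractions of a second rather than frames, the whole thing fits comfortably on a worker thread, which is the point: the data exists to be queried by gameplay systems, not rendered.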

My lament over API access to the light probe system was that a great deal of very good engineering (the sort I’m frankly not qualified to perform) had been done on a system that would be wonderful for inhomogeneous, coarse, spatialized, user-placeable datasets. Something that could naturally do double or triple duty: storing data of various kinds about a soft region, serving as a pathing grid with some modification, etc.
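To be concrete about the “double/triple duty” bit: imagine being able to hand-place probes and hang arbitrary payloads off them, queried by position. This is a toy illustration of the kind of access I was wishing for, not the engine’s actual probe API, and the names here are purely illustrative:

```cpp
// Toy version of an exposed probe system: hand-placed points carrying
// arbitrary payloads, queried by nearest probe. A real version would use a
// spatial index (and interpolate between probes) rather than a linear scan.
#include <cmath>
#include <vector>

struct ProbePayload {
    float gasConcentration = 0.0f;  // spatial simulation data
    float pathCost         = 1.0f;  // doubles as a pathing weight
    int   regionId         = -1;    // soft region membership
};

struct Probe {
    float x, y, z;
    ProbePayload data;
};

class ProbeField {
public:
    void Place(float x, float y, float z, ProbePayload payload) {
        probes.push_back({x, y, z, payload});
    }

    // Nearest-probe query; returns null if no probes have been placed.
    const ProbePayload* Nearest(float x, float y, float z) const {
        const ProbePayload* best = nullptr;
        float bestDistSq = INFINITY;
        for (const auto& p : probes) {
            float dx = p.x - x, dy = p.y - y, dz = p.z - z;
            float d = dx * dx + dy * dy + dz * dz;
            if (d < bestDistSq) { bestDistSq = d; best = &p.data; }
        }
        return best;
    }

private:
    std::vector<Probe> probes;
};
```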

Anywho, I wanted to write this quick note in part to stem the tide of repeat suggestions for things I have indeed parsed out and investigated, and in part to make a simple request: if you’re going to start aggressively throwing out assertions of what I ‘need’ for something, please first take the time to ask a little more about what I’m trying to accomplish.

 
