Tuesday, July 31, 2018

Devstream: two types of simulation

From the transcript of the Devstream:

I'm actually in the middle of developing a design for a mobile football game, so I've had to deal with some of these issues. I'm dealing with them right now, and it's a difficult question, because on the one hand, simulation for process tends to allow a more open solution. What simulation for process is: you set up a series of rules to simulate the events. In the case of a sports game, you set up the various rules for how the players are going to play, you set up the various rules for what the different variables are going to be and how they're going to interact with each other, and then you just step back and you let it run.

Believe it or not, this is actually in some ways an easier approach, because you're not responsible, you're not trying to dictate any one result. You just put in the ingredients, you mix them up, and what happens happens. That's the way you would have seen the old statistics in the Madden football games, for example. When you use simulation for process, you almost always have a situation where the results are not going to be realistic. The process is complicated and it is intrinsically inaccurate. It doesn't matter whether you're talking about an AI attempt to replicate human intelligence, an attempt to replicate an infantry firefight, or something like a football or soccer game: in all of those cases you're dealing with multiple layers of abstraction, and every abstraction, every assigned variable, is going to be different from the real world.

Even if you build a very complicated model using very accurate statistics, the small errors, the small
differences, are going to multiply so that by the time that you get to the end result, you're not going to end up with very realistic numbers.
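That compounding claim is easy to demonstrate in miniature. Here's a toy sketch (my own illustration with made-up per-layer error factors, not anything from the stream) of how a few small modelling errors stack up across a process simulation's layers of abstraction:

```python
# Hypothetical per-layer modelling errors in a football process sim:
# each factor says how far that layer's model is off (1.03 = 3% high).
layers = {
    "per-play yardage":   1.03,
    "drive conversion":   1.04,
    "turnover rate":      0.97,
    "scoring efficiency": 1.05,
    "clock management":   1.04,
}

compound = 1.0
for name, err in layers.items():
    compound *= err  # errors multiply across layers of abstraction

# No single layer is off by more than 5%, but the end result is off
# by roughly 13% -- the "not very realistic numbers" problem.
print(f"end-to-end error: {abs(compound - 1):.0%}")
```

The point isn't the particular numbers; it's that the errors are multiplicative, so stacking more layers of "pretty accurate" models makes the aggregate worse, not better.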


  1. That Devstream topic choice was inspired. I made that decision without really thinking about it. Fortunately I was in a position where I could simulate "you win/lose" and then simulate "X died", etc. I could really use my mind being expanded regarding simulation for effect; I tend to use it to tidy some stuff up, but I default to about 80% process and 20% effect. This was really eye-opening. I clearly hadn't given simulation for effect enough attention. (It's sad, because I'm in the process of making a game that's pure management, so the only part that's not an options menu is a simulation.) Thank you so much for this.

  2. This is pretty broadly applicable really. Look at complex models versus simple regression in statistics; that's what gave us "runaway global warming"... a modelling artifact.

  3. If you extend the CFD/FEA reasoning, simulation for effect means you start with a model of the entire system which gives accurate results, and then break it into smaller and smaller subcomponents until you run out of money or time, or you can't get an accurate result from the next layer of complexity.

    You could also use this process in reverse. Your default level of simulation may be units on a battlefield, but you could abstract that up and just simulate the results of the battle with a simpler model.

    Rubber banding, in a sense, is an effort to force a desired result on a simulation that's too complicated or lazily designed to give accurately calibrated thrills on its own.
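    The "abstract that up" idea maps neatly onto the two kinds of simulation. Here's a minimal, hypothetical Python sketch (my own; the melee rule and strengths are invented) of the same battle resolved both ways, once unit by unit (process) and once as a single calibrated draw (effect):

```python
import random

random.seed(1)

def battle_by_process(n_a, n_b):
    # Process: step the fight casualty by casualty under a toy melee
    # rule; each step, the larger force is proportionally likelier to
    # score the next kill.
    while n_a and n_b:
        if random.random() < n_a / (n_a + n_b):
            n_b -= 1
        else:
            n_a -= 1
    return "A" if n_a else "B"

def battle_by_effect(strength_a, strength_b):
    # Effect: abstract the whole battle into one draw whose odds are
    # calibrated directly to the outcome distribution you want.
    return "A" if random.random() < strength_a / (strength_a + strength_b) else "B"

wins = sum(battle_by_effect(3, 1) == "A" for _ in range(1000))
print(f"A wins {wins} of 1000 at 3:1 strength")
```

    The effect version gives the designer direct control over the result distribution; the process version produces whatever falls out of the melee rule, realistic or not.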

  4. Simulation for result is also important as player interaction increases, because players will figure out how to exploit the degeneracies in "simulation by process", particularly against the AI.

  5. Vox, can you go more in-depth on this? I'd be curious to see a direct comparison of the equations or mechanics rather than a comparison of their results. Maybe stick with ASL and show what it would look like as a proper sim-pro vs. sim-ef. Right now, after watching this twice, my takeaway is that sim-process misses the micro interactions that multiply and have an effect on the macro level, while sim-effect works at the scenario level rather than building macro results up from calculations, factoring in the more important micro effects such as morale to render a less open but more satisfyingly realistic simulation.

  6. Durandel

    For an introduction, see "Lanchester's Laws" aka "Lanchester's Equations" for linear and ranged-weapon combat.

    And there are two versions of each: a discretized version for simulations run in time blocks, and a continuous (differential equation) version appropriate for a "real-time" simulation.

    The equations have a couple variables:
    1: size "n" of each force
    2: defensive factor (ranging from 0.0 -- completely ineffective -- to 1.0 -- perfectly effective) for each side's ability to survive attack by the opponent's weaponry.
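    As a concrete sketch (my own, not from any particular implementation), here is the discretized square-law version in Python, folding the defensive factor in as a survival multiplier; the kill rates, time step, and example forces are all made up:

```python
def lanchester_square(n_a, n_b, kill_a, kill_b, def_a=0.0, def_b=0.0, dt=0.1):
    """Discretized Lanchester square law (aimed/ranged fire), stepped
    in time blocks of dt. kill_* is each unit's kill rate per tick;
    def_* is the defensive factor (0.0 = no protection, 1.0 = immune)."""
    while n_a > 0 and n_b > 0:
        # With aimed fire, each side's losses scale with the *whole*
        # opposing force, which is what makes numbers count quadratically.
        loss_a = kill_b * (1.0 - def_a) * n_b * dt
        loss_b = kill_a * (1.0 - def_b) * n_a * dt
        n_a -= loss_a
        n_b -= loss_b
    return max(n_a, 0.0), max(n_b, 0.0)

# Square law in action: a 2:1 advantage in numbers beats a 2:1
# advantage in kill rate, and the larger force keeps most of its men.
a_left, b_left = lanchester_square(200, 100, kill_a=0.01, kill_b=0.02)
print(f"A survivors: {a_left:.0f}, B survivors: {b_left:.0f}")
```

    Shrinking dt makes this converge toward the continuous differential-equation version the comment mentions.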

    The Iraq "small force" invasion was based on a doctrine that presumed U.S. forces would have extremely high lethality rates against OPFOR, while OPFOR's weaponry would be extremely ineffective against U.S. forces. That was generally true from 2003 to 2005. By then, all the stupid OPFOR had been killed off, and Iran was supplying OPFOR with better weaponry (EFPs being the most significant). The effectiveness ratings changed, which meant a requirement for more manpower, hence the massive reinforcement in early 2007.