Aggregate Simulations
These simulations typically array forces linearly in a series of sectors (often referred to as "pistons"). In each sector, algorithms assess whether an attack will occur and, if so, which side will be the attacker. The simulation calculates a combat power score (also known as a firepower score) for each force in the sector to determine the combat power ratio between the forces. The first simulations that used combat power scores calculated the comparison assuming that each side had perfect information on all the forces, friendly and adversary, in that sector. In other words, not only did each side have perfect intelligence on its adversary's force composition, but each side also had perfect communications, because it knew the status of every friendly unit in that sector. Note also that the combat power assessment assumed that the opposing commanders shared an identical assessment of the combat power value of every system on both sides; there was no modeling of a commander's misperception that the adversary's force was more or less formidable than the specified combat power values. Nor could surprise be modeled with this construct: each side was omniscient with respect to its adversary, so there was no way for a commander to maneuver an unseen force to a position of advantage and attack the enemy from an unexpected direction.

Quite simply, if a force had a 3:1 or better advantage in combat power over its adversary, that force would attack. Attrition of each side's combat power was then assessed based on the calculated combat power ratio, and each side's combat power was decremented accordingly. Movement of both forces was then assessed based on the amount of combat power lost and the type of terrain in the sector. Movement might be constrained so that a unit's advance in one sector did not expose the flank of a friendly unit in an adjacent sector.

As simulations became more sophisticated, combat power scores were refined. The Marine Corps' Tactical Warfare Simulation, Evaluation, and Analysis System (TWSEAS) and MAGTF (Marine Air Ground Task Force) Tactical Warfare Simulation (MTWS) took into account "perceived combat power," which limited each side's calculation to its current knowledge of the opposing force. Some simulations calculated dynamic combat power values, updating them after each time step based on the forces remaining on the battlefield, so that, for example, an air defense weapon might contribute no combat power once all opposing aircraft had been destroyed.
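To make the mechanics concrete, the following is a minimal sketch, in Python, of one time step in a single sector under the perfect-information assumption. Only the omniscient combat power comparison and the 3:1 attack rule come from the description above; the function and field names (step_sector, combat_power, strength, score) and all numeric coefficients are invented for this illustration, not values or code from any fielded model.

```python
# Illustrative sketch of one time step in a single sector ("piston") of an
# aggregate simulation. Firepower scores and attrition/movement coefficients
# below are notional placeholders.

ATTACK_RATIO = 3.0  # a side attacks if it holds a 3:1 combat power advantage


def combat_power(units):
    """Aggregate combat power: sum of (remaining strength x firepower score)."""
    return sum(u["strength"] * u["score"] for u in units)


def step_sector(blue, red, terrain_speed_kmh):
    """Resolve one time step in one sector with perfect-information scoring."""
    bp = combat_power(blue)
    rp = combat_power(red)

    # Both sides "see" the same omniscient ratio -- no surprise is possible.
    ratio_blue = bp / rp if rp else float("inf")
    ratio_red = rp / bp if bp else float("inf")
    if ratio_blue >= ATTACK_RATIO:
        attacker, ratio = "blue", ratio_blue
    elif ratio_red >= ATTACK_RATIO:
        attacker, ratio = "red", ratio_red
    else:
        return {"attacker": None, "flot_move_km": 0.0}

    # Attrit both sides as a function of the force ratio (notional curve:
    # the defender loses more as the ratio climbs, the attacker less).
    defender_loss = min(0.05 * ratio, 0.5)
    attacker_loss = defender_loss / ratio
    defender = red if attacker == "blue" else blue
    attacking = blue if attacker == "blue" else red
    for u in defender:
        u["strength"] *= 1.0 - defender_loss
    for u in attacking:
        u["strength"] *= 1.0 - attacker_loss

    # Move the forward line of troops in proportion to defender losses
    # and the trafficability of the sector's terrain.
    return {"attacker": attacker,
            "flot_move_km": terrain_speed_kmh * defender_loss}


blue = [{"strength": 100, "score": 1.2}, {"strength": 40, "score": 2.5}]
red = [{"strength": 60, "score": 1.0}]
print(step_sector(blue, red, terrain_speed_kmh=4.0))  # blue attacks at ~3.7:1
```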
Entity Simulations
In an entity simulation, where individual combatants (systems or personnel) engage one another, each entity is assigned to travel from a starting position, via a series of waypoints, to a destination that it will reach if it survives. If the entity detects an adversary entity, and the current rules of engagement (ROE) permit, it will fire at that entity. An algorithm then assesses the probability that the shot hit the adversary entity, P(hit), and, if it did hit, the amount of damage the hit inflicted, P(kill|hit), where "kills" are typically categorized as "catastrophic," "mobility," "firepower," or "mobility and firepower."
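A hedged sketch of how one such engagement might be resolved follows. The probabilities and category weights are invented placeholders standing in for validated performance data, and resolve_shot is a hypothetical helper, not an API from any actual simulation; detection and the ROE check are assumed to have already been satisfied.

```python
import random

# Notional resolution of a single direct-fire engagement in an entity
# simulation: first the hit draw, then the conditional kill draw, then
# the kill category.

KILL_CATEGORIES = ["catastrophic", "mobility", "firepower",
                   "mobility and firepower"]


def resolve_shot(p_hit, p_kill_given_hit, category_weights, rng=random):
    """Return (hit?, kill category or None) for one round fired."""
    if rng.random() >= p_hit:
        return False, None          # miss
    if rng.random() >= p_kill_given_hit:
        return True, None           # hit, but no kill assessed
    category = rng.choices(KILL_CATEGORIES, weights=category_weights)[0]
    return True, category


# Example: P(hit) = 0.7, P(kill|hit) = 0.6, with notional category weights.
random.seed(1)
print(resolve_shot(0.7, 0.6, [0.25, 0.30, 0.30, 0.15]))
```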
Simulations and Prediction
The late, great Air Force analyst Clayton Thomas described simulation‐based analysis as an IF‐THEN statement. The simulation and the data together constitute the IF side; running the simulation on the data produces the output, the THEN side. If the simulation represented reality and the data were precise, then the result would be an accurate prediction.30 In general, neither is true. In the following paragraphs, we examine how well our simulations represent reality and how precise our data are.
Standard Assumptions
There are standard assumptions in most closed‐loop computer simulations. Human factors, such as leadership, morale, combat fatigue, and training status, are typically not explicitly represented. If they are not represented explicitly, the implicit assumption is that both sides have exactly the same characteristics with respect to human factors. While there have been attempts to represent human factors such as training status and morale, the value of such explicit representation needs to be justified. If the simulation is attempting to assess the value of a new weapon system and the study team is using a scientific‐method approach, having outcomes differ because of human factors makes it more difficult to determine the true cause of the difference between two sets of replications. In most combat simulations, therefore, human factors are either ignored or standardized and mirrored so as to keep the basis of comparison as free of confounding factors as possible, as sketched below.
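The configuration sketch below illustrates that mirroring: hypothetical human‐factor multipliers are held identical across both sides and across the base case and the new‐weapon excursion, so any difference between the two replication sets can be attributed to the weapon change. The field names and the neutral 1.0 multipliers are assumptions made for this example only.

```python
# Mirrored human factors: identical, neutral multipliers for every side in
# every replication set, so human factors cannot confound the comparison.
MIRRORED_HUMAN_FACTORS = {"training": 1.0, "morale": 1.0, "fatigue": 1.0}


def make_side(weapon_score):
    """Build a side whose only varying attribute is its weapon score."""
    return {"weapon_score": weapon_score, **MIRRORED_HUMAN_FACTORS}


base_blue = make_side(weapon_score=1.0)       # current weapon system
excursion_blue = make_side(weapon_score=1.3)  # candidate weapon system
red = make_side(weapon_score=1.0)             # identical in both cases
```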
Data
The simulations described above usually need three to six months to instantiate a new scenario, at a cost of around one million US dollars to get the simulation ready to run (terrain and performance data developed, quality controlled, and input; scheme of maneuver developed, instantiated, and tested).

Data is always challenging. Performance data must be developed to account for every interaction that could occur between all systems represented on the battlefield. Performance data development can be especially challenging when examining future scenarios with emerging technology, but even developing data to simulate today's forces comes with challenges. The US Army has perhaps one of the most robust processes for developing performance data, yet even that process rests on only about 10% actual test data. That data is collected at ranges such as Aberdeen Proving Ground where, in a controlled environment, US Army weapon systems are fired at captured enemy systems to determine the enemy systems' vulnerability to US weapons, and captured enemy systems are fired at actual US Army systems to determine their vulnerabilities. Often several US ground combat vehicles are rolled off the production line with the express intent of testing their vulnerability to enemy systems. After test firing is conducted, engineers determine and record the damage caused, and that information becomes the basis for the performance data generated for ground combat simulations. The other 90% of the data is then "surrogated," that is, interpolated, extrapolated, or otherwise estimated from the existing test data, often using engineering‐level simulations.

Ground combat weapon systems are relatively inexpensive and numerous, so testing their vulnerabilities is feasible given the availability of the appropriate enemy weapon systems and ammunition. The Navy and the Air Force, by contrast, are hard pressed to produce test data against their own platforms. Firing captured adversary anti‐ship missiles at a multibillion‐dollar Ford class aircraft carrier to see how many hits it can withstand before sinking is simply not possible, so the data used is often more of an educated guess than a mathematical approximation. One of the biggest threats to today's naval vessels is the anti‐ship missile (ASM), yet there have been fewer than 300 recorded instances of ASM hits on vessels from which data could be developed.31
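The sketch below illustrates the surrogation idea in its simplest form: estimating a kill probability at untested ranges by interpolating between a handful of live‐fire test points. The test values, range points, and the surrogate_pk helper are invented for illustration; real surrogation relies on engineering‐level simulations rather than simple linear interpolation.

```python
# Notional live-fire results: (range in meters, measured P(kill|hit)).
# Only a few points are ever tested; the rest must be surrogated.
TEST_DATA = [
    (500, 0.85),
    (1500, 0.60),
    (3000, 0.25),
]


def surrogate_pk(range_m):
    """Interpolate between tested ranges; clamp beyond the tested span."""
    pts = sorted(TEST_DATA)
    if range_m <= pts[0][0]:
        return pts[0][1]
    if range_m >= pts[-1][0]:
        return pts[-1][1]
    for (r0, p0), (r1, p1) in zip(pts, pts[1:]):
        if r0 <= range_m <= r1:
            frac = (range_m - r0) / (r1 - r0)
            return p0 + frac * (p1 - p0)


# Fill a lookup table for ranges that were never fired on the test range.
print([round(surrogate_pk(r), 2) for r in (250, 1000, 2200, 4000)])
```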
Simulating the Reality of Combat
Many, if not most, of today's computer‐based combat simulations are extraordinarily complex (the Concepts Evaluation Model, a theater‐level deterministic closed‐loop combat simulation used by the US Army's Concepts Analysis Agency in the late twentieth century, ran to more than 250,000 lines of computer code). That complexity, however, does not translate into a model that can accurately predict the outcome of combat, and it has given rise to two different schools of thought. Simulation skeptics refer to these simulations as "black boxes," meaning that their users have little to no understanding of the simulations' internal processes.