Simulation and Wargaming. Group of authors
As the conflicts in Iraq and Afghanistan approached their first decade, the US DoD began to realize that wargames and closed‐loop combat simulations have important and distinctly different roles in the analytic process. The United States' involvement in Iraq and Afghanistan has highlighted that counterinsurgency and stability operations cannot be modeled well in existing closed‐loop combat simulations. While agent‐based simulations show promise for modeling human behavior in regions of conflict,27 there are no closed‐loop IW simulations that parallel the quantitative analytic capability of those used by DoD to assess major kinetic combat operations. In historical terms, modern‐day wargames are much like the Prussians' Free Kriegspiel, while today's closed‐loop combat simulations are more similar to Rigid Kriegspiel. Each tool has its purposes, and in most cases those purposes do not overlap: wargames should not be used for quantitative assessments, and closed‐loop combat simulations cannot replicate human commanders' decision‐making processes.
Wargames Today
As analytic wargames began to regain some traction in the 2010 time frame, they were attacked by combat simulation advocates. Analysts who cut their teeth on closed‐loop combat simulations derided wargames as "a simulation of one replication" or a "sample size of one," noting that a particular wargame could not be run multiple times, varying random variable values, to generate quantitative output for statistical analysis. What they failed to understand was that a wargame's focus is on qualitative data, the decisions produced by human players, while computer‐based closed‐loop combat simulations focus on quantifying the attributes of a force engaged in high‐end kinetic combat.
Running a series of wargames to generate multiple replications, or running wargames to compare and contrast different concepts or technologies, is problematic. Multiple replications of a simulation are typically produced by holding most variables fixed and introducing randomness for certain identified random variables (such as system‐on‐system probability of hit), so that statistics across the replications can be calculated to examine the range of expected outcomes, given the introduced randomness. It is difficult, if not impossible, to produce multiple replications of a wargame because of the learning effect the players experience: any difference between replications is confounded by the players learning more about the operating environment and the opponent's method of prosecuting combat with every subsequent replication. If the first replication of the wargame produced clear winners and losers, would the loser use the exact same strategy for the second replication? Would the winner expect the loser not to learn and try the same course of action? You could attempt to eliminate the learning bias by having different players play each replication, but that would assume you could find a large supply of players with identical experiences. In practice, some organizations that utilize wargames do play them multiple times with the same players, but these cannot be considered multiple replications, at least not in the traditional sense.

When the TRADOC Analysis Center ran the human‐in‐the‐loop (H‐I‐T‐L) simulation JANUS to develop a concept of operations (CONOPS) for later instantiation in the closed‐loop simulation CASTFOREM, the blue side had around 30 "pucksters" plus a command and staff, and the red side had around 10 pucksters and their own command element. Pucksters typically maneuvered and fought elements of the force, such as a tank company in a brigade operation.
To obtain a "record run" that could be instantiated in the simulation, four to five runs were needed to familiarize all the pucksters with the operational plans. One or both sides would usually petition to move the starting locations of their forces between runs, so these cannot be considered "replications" in the pure sense. Only the results of the final record run were used to inform the CONOPS instantiation in the simulation.28
At the Office of the Secretary of Defense (OSD) Office of Net Assessment, players play a wargame multiple times, and the players' learning is itself one of the subjects of study. Understanding how a command and staff's thinking evolves when combating an adversary's new technology or concept in a wargame allows doctrine to evolve without putting forces at risk.29
In DoD today, there are few in uniform who can design, develop, conduct, and analyze a wargame. What used to be an integral part of a professional military officer's education and experience is now an afterthought, at best. This has led to a fair amount of "BOGGSAT" (a Bunch of Guys and Gals Sitting Around a Table) wargaming. BOGGSAT is a pejorative term implying that a group of people tasked to conduct a wargame produces results that satisfy the tasking with minimal rigor and resource expenditure. BOGGSATs occur for two reasons. The first is that the command does not give the wargaming team the resources to do a proper wargame. The second is that no one on the wargaming team has any wargame design experience, so the team simply improvises as best it can. In many cases, both a lack of resources and a lack of experience spur the occurrence of BOGGSATs. We often hear of BOGGSATs being used to conduct planning wargames at some US Combatant Commands (CCMDs).
Some of our CCMDs have contracted out some of their wargaming requirements to make up for the lack of uniformed wargamers. This can present a challenge: some contracting organizations have their own methods of doing a wargame, so if a command's wargaming requirements do not quite match the contracted organization's method, the organization may only wargame the part of the requirement that its methods can accommodate. Most wargaming requirements are unique, and a wargaming best practice is to design the wargame around the organization's requirement, rather than trimming the organization's requirements to fit a predetermined wargaming method.
Wargames have multiple points of failure. A wargame fails when the wargaming team and the sponsor do not come to an agreement on the wargame's objective and key issues; this often occurs when the sponsor is a senior official whose subordinates are reluctant to press the official to clarify and refine the initial tasking. The best‐designed wargame can fail if the wargaming team cannot secure the appropriate players. Wargames can also fail if not executed properly: keeping the players immersed in the wargaming environment, keeping the game on schedule, managing the game's adjudication and data collection, and solving the inevitable glitches all require an experienced and adaptive wargaming team. Analytic wargames depend on accurate and detailed data collection, so a well‐designed wargame with the best players can still fail if the data collection effort is flawed. Finally, a wargame may be well designed and flawlessly executed, with clear and concise data collected, and the game's analysts may still fail to conduct useful analysis.
In conclusion, many more wargames have been conducted in DoD since 2015 than before, thanks to the reinvigoration spawned by the Office of the Secretary of Defense's stewardship. However, more does not necessarily mean better or more useful. Wargames designed by teams with no wargaming experience or education will most likely encounter two or more of the points of failure enumerated above. If wargaming is to again become a part of the US DoD culture, wargaming education and wargaming experience must be directed and driven by DoD leadership.
Simulations Today
Introduction
Closed‐loop simulations provide the means to assess the combat capabilities of a collection of entities (weapon systems and formations), given that the decision that those forces will engage in battle has already been made. These simulations are not wargames, as no dynamic human decisions affect the flow of events in the operations simulated in the computer model. While it is true that closed‐loop simulations contain algorithms representing some of the decisions humans make in combat, these are rudimentary, IF‐THEN types of decisions.
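The "rudimentary, IF‐THEN" decision logic referred to above might look like the following sketch. The unit attributes, rule names, and thresholds here are purely illustrative assumptions, not drawn from any actual combat simulation; the point is the fixed, scripted character of the rules.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    strength: float        # fraction of original combat power remaining
    ammo: float            # fraction of basic load remaining
    range_to_enemy: float  # kilometers to nearest enemy element

def scripted_decision(unit: Unit) -> str:
    """Rule-based behavior typical of closed-loop simulations:
    fixed thresholds, no learning, no anticipation of the opponent."""
    if unit.strength < 0.3:          # heavy attrition -> break contact
        return "withdraw"
    if unit.ammo < 0.1:              # nearly out of ammunition -> resupply
        return "resupply"
    if unit.range_to_enemy <= 2.0:   # inside direct-fire range -> engage
        return "engage"
    return "advance"                 # otherwise continue the scripted plan

print(scripted_decision(Unit(strength=0.8, ammo=0.6, range_to_enemy=1.5)))
```

Unlike a human commander in a wargame, this logic will make the identical choice every time the same state recurs, which is precisely what makes it replicable for statistical analysis and inadequate for studying decision‐making.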
Simulation