Oil Spill Modeling Introduction and Analysis

The bottom line is that science is never truly satisfied until the last observation can be made, tested for accuracy and recorded. With an oil spill of the magnitude of the BP oil spill, there is simply no way to observe, record and verify each and every detail that goes into the movement, dispersal and landfall of the oil/dispersant mass.

Some parts of the oil/dispersant mass are in unreachable zones of the ocean. Other parts are in such microscopic or gaseous form that they cannot be measured, except by identifying bits and pieces of the puzzle, then estimating what is going on with the rest of the mass. Other parts are subjected to change, starting out in one observable form and changing into a variety of other forms before the next observations can be made.

The Gulf of Mexico is a vast place. It is a marine environment of enormous chemical and biological complexity, shaped by salinity, naturally occurring oil seeps, temperature, ever-changing atmospheric conditions, and complex tides and currents. Add to that the complexity of storms during what forecast models anticipate will be a very active hurricane season.

The oil/dispersant mix is a vast entity. Unprecedented volumes of oil, poorly measured, combined with unprecedented amounts of a dispersant that has been outlawed in other countries, create the conditions for a host of effects that have not yet been fully identified, in part because many of them may not yet have occurred.

As a result, modeling is the only way to estimate the effects, movement and changes that will occur with the oil/dispersant mix. The current models take in massive amounts of tidal, current, atmospheric, weather, marine and chemical data, then run that data through a series of algorithms built on known factors and expected behavior to create a moving image of the expected movement, dispersal and flow of the mass.
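To make the idea concrete, here is a minimal, purely illustrative Python sketch of the particle-tracking calculation at the heart of such trajectory models. The current, wind, windage factor and diffusion values are assumptions chosen for illustration, not inputs or parameters from any operational model.

    import numpy as np

    def advect_particles(positions, current, wind, dt,
                         windage=0.03, diffusion=10.0, rng=None):
        """Move each oil 'particle' one time step (all values illustrative).

        positions : (N, 2) array of particle coordinates in meters
        current   : (2,) surface current velocity in m/s
        wind      : (2,) wind velocity in m/s; only a small fraction drags the slick
        diffusion : horizontal eddy diffusivity in m^2/s, modeled as a random walk
        """
        rng = rng or np.random.default_rng()
        drift = (current + windage * wind) * dt        # deterministic drift
        spread = np.sqrt(2.0 * diffusion * dt)         # random-walk step size
        return positions + drift + rng.normal(0.0, spread, positions.shape)

    # Seed 1,000 particles at the spill site and step forward one simulated day.
    particles = np.zeros((1000, 2))
    for _ in range(24):                                # 24 hourly steps
        particles = advect_particles(particles,
                                     current=np.array([0.2, 0.05]),
                                     wind=np.array([5.0, -2.0]),
                                     dt=3600.0)

Real models replace the constant current and wind used here with gridded fields that change in space and time, which is where the massive data inputs come in.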

The output from the models is then used by every kind of agency, organization and individual that needs to make plans, develop budgets and adapt to the challenges of oil arriving at various points of land and sea.

The most prominent models include:

The National Oceanic and Atmospheric Administration (NOAA) has the General NOAA Operational Modeling Environment, or GNOME. This is a trajectory model that uses data about wind, currents and other factors to predict the trajectory of the oil. The model is customizable, meaning that scientists can enter various data or data estimates to do “what if” analysis. “What if” analysis allows changes to various data elements, then running the model to see what the hypothetical results would be.
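As a rough illustration of “what if” analysis, the toy drift formula below (an assumption for this article, not a GNOME calculation) can be evaluated under several hypothetical wind scenarios to see how the answer changes when one input changes.

    import numpy as np

    def drift_after(hours, current, wind, windage=0.03):
        """Toy estimate: displacement = (current + windage * wind) * elapsed time."""
        return (current + windage * wind) * hours * 3600.0 / 1000.0   # kilometers

    current = np.array([0.2, 0.05])           # assumed mean surface current, m/s
    scenarios = {                             # hypothetical wind inputs, m/s
        "calm":         np.array([2.0, 0.0]),
        "onshore wind": np.array([6.0, 4.0]),
        "strong wind":  np.array([10.0, -3.0]),
    }

    for name, wind in scenarios.items():
        dx, dy = drift_after(48, current, wind)        # two simulated days
        print(f"{name:>12}: slick center moves roughly {dx:.0f} km east, {dy:.0f} km north")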

The outputs from GNOME, and from most other models, can be viewed as “movies” that show the projected movement of the oil mass.
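Assuming particle positions like those produced by the sketch above, one simple way to build such a “movie” is to save the cloud at every time step and string the snapshots together as animation frames; the drift and spread numbers here are again invented for illustration.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    rng = np.random.default_rng(0)
    drift = np.array([500.0, 150.0])           # assumed meters of drift per hour
    particles = np.zeros((1000, 2))
    frames = []
    for _ in range(48):                        # 48 hourly snapshots
        particles = particles + drift + rng.normal(0.0, 300.0, particles.shape)
        frames.append(particles.copy())

    fig, ax = plt.subplots()
    dots = ax.scatter(frames[0][:, 0], frames[0][:, 1], s=2)
    ax.set_xlim(-5000, 35000)
    ax.set_ylim(-5000, 15000)

    def update(i):
        dots.set_offsets(frames[i])            # move each dot to its hour-i position
        ax.set_title(f"Hour {i + 1}")
        return (dots,)

    anim = FuncAnimation(fig, update, frames=len(frames), interval=100)
    anim.save("slick_drift.gif", writer="pillow")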

The Los Alamos National Laboratory “Parallel Ocean Program” (POP) is another prominent model.

There are many oil spill models; in fact, a separately named model may be designed for each particular spill. A model for an Arctic or cold-water spill will look very different from a model for a warm-water spill like the BP oil spill. Some models are built just to study a single spill, with no goal of generalizing their results to spills in general.
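One illustrative reason for those differences: weathering processes such as evaporation slow sharply in cold water, so a cold-water model weights them very differently. The first-order rate below is an invented rule-of-thumb, not a calibrated value for any real crude.

    import numpy as np

    def evaporated_fraction(hours, water_temp_c):
        """Toy first-order evaporation: warmer water, faster loss of light fractions."""
        base_rate = 0.01                                    # fraction per hour at 0 C (assumed)
        rate = base_rate * 2.0 ** (water_temp_c / 10.0)     # assume rate doubles per 10 C
        return 1.0 - np.exp(-rate * hours)

    print(f"Arctic water (2 C), 48 h:  {evaporated_fraction(48, 2):.0%} evaporated")
    print(f"Gulf water  (28 C), 48 h:  {evaporated_fraction(48, 28):.0%} evaporated")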

In summary, while the models produce predictions of oil spill trajectories that may or may not be perfectly accurate, they also allow the inaccuracy itself to be measured and studied. Much of the modeling exists to help deal with the unpredictability and the major gaps in information that come with the oceans and seas. For the average person, it is critical to know whether to plan for volunteering, for a vacation or cruise, for closing a business, for losing a livelihood, or for looking out for health problems. Oil spill models can provide enough information to help people respond to the disaster.

NOAA, “GNOME” (General NOAA Operational Modeling Environment)

Los Alamos National Laboratory, “Parallel Ocean Program” (POP)