Harvard Reports in Physical/Interdisciplinary Ocean Science




Allan R. Robinson and Pierre F.J. Lermusiaux

The Division of Engineering and Applied Sciences

Harvard University

Cambridge, Massachusetts

August 2000


1. Introduction

Data assimilation is a novel, versatile methodology for estimating oceanic variables. The estimation of a quantity of interest via data assimilation involves the combination of observational data with the underlying dynamical principles governing the system under observation. The melding of data and dynamics is a powerful methodology, which makes possible efficient, accurate and realistic estimations which might not otherwise be feasible. It is providing rapid advances in important aspects of both basic ocean science and applied marine technology and operations. The following sections introduce concepts (Sect. 2), describe purposes (Sect. 3), present applications to regional dynamics and forecasting (Sect. 4), overview formalism and methods (Sect. 5), provide a selected range of examples (Sect. 6) and conclude (Sect. 7).

2. Concepts

Ocean science, and marine technology and operations require knowledge of the distribution and evolution in space and time of the characteristics of the sea. The functions of space and time, or state variables, which characterize the state of the sea under observation, are classically designated as fields. The determination of the distribution or evolution of the state variables poses problems of state estimation or field estimation in three or four dimensions. The fundamental problem of ocean science may be simply stated as follows: given the state of the ocean at one time, what is the state of the ocean at a later time? It is the dynamics, i.e., the basic laws and principles of oceanic physics, biology and chemistry that evolve the state variables forward in time. Thus, also from a practical viewpoint, predicting the present and future state of oceanic variables for applications is intimately linked to fundamental ocean science.

A dynamical model to approximate nature consists of a set of coupled nonlinear prognostic field equations for each state variable of interest. The fundamental properties of the system appear in the field equations as parameters (e.g., viscosities, diffusivities, representations of body forces, rates of earth-rotation, grazing, mortality, etc.). The initial and boundary conditions necessary for integration of the equations may also be regarded as parameters. In principle the parameters of the system can be estimated directly from measurements. In practice, directly measuring the parameters of the interdisciplinary (physical-acoustical-optical-biological-chemical-sedimentological) ocean system is difficult because of sampling, technical and resource requirements. Data assimilation however provides a powerful methodology for parameter estimation.

The physical state variables are usually the velocity components, pressure, density, temperature and salinity. Examples of biological and chemical state variables are concentration fields of nutrients, plankton, dissolved and particulate matter, etc. Because of the complexity of the marine biogeochemical systems, the number of possible state variables is extremely large, and the limitation to a finite subset of critical state variables is an important contemporary research problem. Another important complexity is associated with the vast range of phenomena and the multitude of interactive scales in space and time. This complexity has two essential consequences. First, state variable definitions relevant to phenomena and scales of interest need to be developed from the basic definitions. Second, approximate dynamics which govern the evolution of the scale restricted state variables, and their interaction with other scales, must be developed from the basic equations. A familiar example consists of decomposing the basic ocean fields into slower and faster time scales, and shorter and longer space scales, and averaging over the faster and shorter scales. The resulting equations can be adapted to govern synoptic/mesoscale resolution state variables over a large-scale oceanic domain, with faster and smaller scale phenomena represented as parameterized fluctuation correlations (Reynolds stresses). There is, of course, a great variety of other scale-restricted state variables and approximate dynamics in ocean science. We refer to scale-restricted state variables and approximate dynamics simply as "state variables" and "dynamics."

The use of dynamics is of fundamental importance for efficient and accurate field and parameter estimation. Today and in the foreseeable future, data acquisition in the ocean is sufficiently difficult and costly so as to make field estimates by direct measurements, on a substantial and sustained basis, and for state and parameter variables, sampling rates, spatial domains and time intervals of interest, essentially prohibitive. However, data acquisition for field and parameter estimates via data assimilation is feasible, even though substantial resources must still be applied to obtain enough observations.

The general process of state and parameter estimation is schematized in Fig. 1. Measurement models link the state variables of the dynamical model to the sensor data. Dynamics interpolates and extrapolates the data. Dynamical linkages among state variables and parameters allow all of them to be estimated from measurements of some of them, i.e., those more accessible to existing techniques and prevailing conditions. Error estimation and error models play a crucial role. The data and dynamics are combined with weights inversely related to their relative errors. The final estimates should agree with the observations within data error bounds and should satisfy the dynamical model within model error bounds. There are many important feedbacks in the generally nonlinear data assimilation system, or ocean observing and prediction system (OOPS); Fig. 1 illustrates the system concept and two such feedbacks. Prediction provides the opportunity for efficient sampling adapted to real-time structures, events and errors. Data collected for assimilation, when also used for ongoing verification, can identify model deficiencies and lead to model improvements.

Figure 1 - Schematic of the general process of state and parameter estimation

A data assimilation system consists of three components: a set of observations, a dynamical model, and a data assimilation scheme or melding scheme. Modern interdisciplinary OOPS generally have compatible nested grids for both models and sampling. An efficient mix of platforms and sensors is selected for specific purposes.

Central to the concept of data assimilation is the concept of errors, error estimation and error modeling. The observations have errors arising from various sources: e.g. instrumental noise, environmental noise, sampling, and the interpretation of sensor measurements. All oceanic dynamical models are imperfect, with errors arising from: the approximate physics (or biology or chemistry) which governs the explicit evolution of the scale-restricted state variables; the approximate physics which parameterizes the interaction of the state variables with smaller and larger, and faster and slower, scales; and the discretization of continuum dynamics into a numerical model. An aspect common to all melding schemes is that the quantitative basis of melding is the relative uncertainties of the dynamics and observations. Thus, the melded estimate does not degrade the reliable information of the observational data but rather enhances that information content.
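
The error-weighted melding principle can be illustrated with a minimal sketch (an illustration added here, not part of any operational system): two independent scalar estimates are combined with weights inversely proportional to their error variances, so the melded estimate is at least as certain as either input.

```python
# Inverse-variance weighting of two independent scalar estimates:
# the melded estimate has a smaller error variance than either input.
def meld(x_model, var_model, x_data, var_data):
    """Combine a model estimate and an observation, each with its own
    error variance, weighting each inversely to its uncertainty."""
    w_model = 1.0 / var_model
    w_data = 1.0 / var_data
    x_meld = (w_model * x_model + w_data * x_data) / (w_model + w_data)
    var_meld = 1.0 / (w_model + w_data)
    return x_meld, var_meld

# Illustrative values: model forecast 15.0 (variance 4), observation 16.0 (variance 1).
x, v = meld(15.0, 4.0, 16.0, 1.0)
```

The melded value lies closer to the more accurate observation, and the melded variance is smaller than both input variances, consistent with the statement that melding enhances rather than degrades the information content.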

A rigorous quantitative establishment of the accuracy of the melded field and parameter estimates or verification is highly desirable but may be difficult to achieve because of the quantity and quality of the data required (e.g., oversampling). Such verification involves all subcomponents: the dynamical model, the observational network, the associated error models and the melding scheme. The concept of validation is the establishment of the general adequacy of the system and its components to deal with the phenomena of interest. For example, avoid the use of a barotropic model for baroclinic phenomena, and avoid the use of an instrument whose threshold is higher than the accuracy of the required measurement. In reality, validation issues can be quite subtle. Calibration involves the tuning of system parameters to the phenomena and regional characteristics of interest. Quantitative verification today requires research on the definition of skill metrics. Regional forecast system verification requires metrics for specific dominant regional variabilities, but also for generic processes. Throughout the verification process, sensitivity studies are extremely important.

At this point it is useful to classify types of estimates with respect to the time interval of the data input to the estimate for time t. If only past and present data are utilized, the estimation is a filtering process. A nowcast via filtering provides an optimal initialization for a real-time forecast. After the entire time series is available for (0,T), the estimate for any time t (0 < t < T) is best based on the entire data set (0,T). Future as well as past data are then utilized, and the estimation is a smoothing process. In oceanography, an estimate for all t in the interval (0,T) by either a filtering or a smoothing process is often referred to as a data driven simulation.

Ocean data assimilation is only a few years old (see bibliography). Methodologies in use today have their roots in engineering and meteorology, and are generally based on estimation theory or control theory (Sect. 5). Important estimation theory methods include the Kalman Filter and Smoother, and Optimal Interpolation (Sect. 5.1). Important control theory schemes include the generalized inverse and adjoint methods (Sect. 5.2). Techniques that directly minimize the cost function are also employed, especially for parameter estimation (Sect. 5.3). Stochastic methods and hybrid suboptimal methods, i.e. combinations of approximate methods (Sect. 5.4), are utilized for advanced and efficient estimates. In particular, computations can often be efficiently reduced by focusing on the largest errors, as in error subspace methods.

3. Purposes

The specific uses of data assimilation depend upon the relative quality of data sets and models, and the desired purposes of the field and parameter estimates. These uses include the control of errors for state estimates; the estimation of parameters; the elucidation of real ocean dynamical processes; the design of experimental networks; and ocean monitoring and prediction.

First consider ocean prediction for scientific and practical purposes, which is the analogue of numerical weather prediction. In the best case scenario, the dynamical model correctly represents both the internal dynamical processes and the response mechanisms to external forcings. Additionally, the observational network is well designed and adequate to efficiently provide initialization data of desired accuracy. The phenomenon of loss of predictability nonetheless inhibits accurate forecasts beyond the predictability limit for the region and system. This limit is one to two weeks for the global atmosphere, and on the order of weeks to months for the midocean eddy field of the Northwest Atlantic. The phenomenon is associated with the nonlinear scale transfer and growth of initial errors. The early forecasts will accurately track the state of the real ocean, but longer forecasts, although representing plausible and realistic synoptic dynamical events, will not agree with contemporary nature. However, this predictability error can be controlled by the continual assimilation of data, with adequate spatial coverage and at intervals less than the predictability limit. This is a major use of data assimilation today.

Next, consider the case of a field estimate with adequate data but a somewhat deficient dynamical model. Assimilated data can compensate for the imperfect physics so as to provide estimates in agreement with nature. This is possible only if dynamical model errors are treated adequately. For instance, if a barotropic model is considered perfect (e.g. adjoint method), and baroclinic real ocean data are assimilated, the field estimate will remain barotropic. In more realistic situations, forecast errors arise from both model deficiencies and predictability errors. Even though melded estimates with deficient models can be useful, it is of course important to attempt to correct the model dynamics.

Parameter estimation via data assimilation is making an increasingly significant impact on ocean science via the determination of both internal and external parameter values. Regional field estimates can be substantially improved by external boundary condition estimation. For interdisciplinary studies, parameter estimation is particularly promising. Biological modelers have been hampered by the inability to measure in situ rates directly, e.g. grazing and mortality. With parameter estimation, feasible measurements of quantities such as plankton concentration fields, together with a realistic interdisciplinary model, can be used for in situ rate estimation.

Data driven simulations can provide four-dimensional time series of dynamically adjusted fields which are realistic. These fields, regarded as (numerical) experimental data, can thus serve as high resolution and complete data sets for dynamical studies. Balance of terms in dynamical equations and overall budgets can be carried out to determine fluxes and rates for energy, vorticity, productivity, grazing, carbon flux, etc. Case studies can be carried out, and statistics and processes can be inferred for simulations of sufficient duration. Of particular importance are Observation System Simulation Experiments (OSSEs), which first entered meteorology almost thirty years ago. By subsampling the simulated "true" ocean, future experimental networks and monitoring arrays can be designed to provide efficient field estimates of requisite accuracies. Data assimilation and OSSEs develop the concepts of data, theory and their relationship beyond those of the classical scientific methodology. For a period of almost three hundred years, scientific methodology was powerfully established on the basis of two essential elements: i) experiments/observations and ii) theory/models. Today, due to powerful computers, science is based on three fundamental concepts: experiment, theory, and simulation. Simulation with validated dynamics provides numerically generated databases for serious dynamical studies. Furthermore, since our best field and parameter estimates today are based on data assimilation, our very perception and conceptions of nature and reality require philosophical development.

It is apparent from the above discussion that marine operations and ocean management must depend on data assimilation methods. Data driven simulations should be coupled to multipurpose management models for risk assessments and for the design of operational procedures. Regional multiscale ocean prediction and monitoring systems, designed by OSSEs, are being established to provide ongoing nowcasts and forecasts with predictability error controlled by updating. Both simple and sophisticated versions of such systems are possible and relevant.

4. Regional Forecasting and Dynamics

In this section, the issues and concepts introduced in the preceding sections are illustrated in the context of real time predictions carried out in 1996 for NATO naval operations in the Strait of Sicily and for interdisciplinary multiscale research in 1998 in Massachusetts Bay. The Harvard Ocean Prediction System (HOPS) with its primitive equation dynamical model was utilized in both cases. In the Strait of Sicily (Fig. 2), the observational network with platforms consisting of satellites, ships, aircraft, and Lagrangian drifters, was managed by the NATO SACLANT Undersea Research Centre. In Massachusetts Bay (Fig. 3), the observational network with platforms consisting of ships, satellites, and autonomous underwater vehicles, was provided by the Littoral Ocean Observing and Prediction System (LOOPS) project within the U.S. National Ocean Partnership Program. The data assimilation methods used in both cases were the HOPS OI (Sect. 5.1) and ESSE (Sect. 5.4) schemes. In both cases the purposes of data assimilation were to provide a predictive capability; to control loss of predictability; and to infer basic underlying dynamical processes.

Figure 2 - Strait of Sicily
Figure 3 - Massachusetts Bay

The dominant regional variabilities determined from these exercises and studies are schematized in Figs. 2a and 3a. The dominant near surface flow in the Strait is the Atlantic Ionian Stream (black and dotted white lines) and dominant variabilities include the location and shapes of the Adventure Bank Vortex (ABV), Maltese Channel Crest (MCC), Ionian Shelfbreak Vortex (ISV) and Messina Rise Vortex (MRV), with shifts and deformations of O(10-100 km) occurring over O(3-5 days). The variability of the Massachusetts Bay circulation is more dramatic. The buoyancy flow-through current which enters the Bay in the north from the Gulf of Maine may have one, two or three branches, and together with associated vortices (which may or may not be present), can reverse directions within the Bay. Storm events shift the pattern of the features which persist inertially between storms. Actual real-time forecast fields are depicted on Figs. 2b, 3b.

The existence of forecasts allows adaptive sampling, i.e. sampling efficiently related to existing structures and events. Adaptive sampling can be determined subjectively by experience or objectively by a quantitative metric. The sampling pattern associated with the temperature objective analysis error map (Fig. 2c) reflects the flight pattern of an aircraft dropping subsurface temperature probes (AXBTs). The data was assimilated into a forecast in support of naval operations centered near the ISV (Fig. 2a). The sampling extends to the surrounding meanders of the AIS which will affect the current's thermal front in the operational region. The multiscale sampling of the Massachusetts Bay experiment is exemplified on Figs. 3c,d by ship tracks adapted to the interactive submeso-, meso-, bay-, and large-scales. Note that the tracks of Fig. 3d are superimposed on a forecast of the total temperature forecast error standard deviation. The shorter track is objectively located around an error maximum. The longer track is for reduction of velocity error (not shown). Eigendecomposition of the variability fields helps dynamical interpretations. The first temperature variability eigenmodes for the Strait and the Bay are depicted on Figs. 2d and 3e, respectively. The former is associated with the dominant ABV variability and the latter with the location, direction and strength of the inflow to the Bay of the buoyancy current from the Gulf of Maine.

A qualitative skill score for the prediction of dominant ABV, MCC and ISV variabilities indicated correct predictions 75% of the time. It was obtained by validation against new data for assimilation and independent satellite sea surface temperature data, shown on Fig. 2e for the forecast of Fig. 2b. An important kinematical and dynamical interconnection between the eastern and western Mediterranean is the deep flow of salty Levantine Intermediate Water, which was not directly measured but was inferred from data assimilative simulations (Fig. 2f). The scientific focus of the Massachusetts Bay experiment was plankton patchiness, in particular the spatial variability of zooplankton and its relationship to physical and phytoplankton variabilities (Figs. 3b,g). The smallest scale measurements in the Bay were turbulence measurements from an AUV (Fig. 3f), which were also used to research the assimilation in real time of subgridscale data in the primitive equation model.

5. Formalism and Methods

By definition (Sect. 1), data assimilation in ocean sciences is an estimation problem for the ocean state, model parameters, or both. The schemes for solving this problem have different backgrounds. They often relate to either estimation theory or control theory (Sects. 5.1 and 5.2), but some approaches like direct minimization, stochastic and hybrid methods (Sects. 5.3 and 5.4) can be used in both frameworks. Several schemes are theoretically optimal, while others are approximate or suboptimal. Although optimal schemes are preferred because they should provide better estimates, suboptimal methods are generally the ones in operational use today. Most schemes are related in some fashion to least-squares criteria, which have had great success. Other melding criteria, such as maximum likelihood, minimax or associated variations, might be more appropriate when data are very noisy and sparse, and probability density functions are multi-modal (Sect. 5.4). Parameters are here assumed to be included in the vector of state variables. For more detailed discussions of the methods overviewed next, we refer to the article published by Robinson, Lermusiaux and Sloan in 1998.

5.1 Estimation Theory

Estimation theory encompasses theories used to estimate the state of a system by combining, usually with a statistical approach, all available reliable knowledge of the system including measurements and theoretical models. The a priori hypotheses and melding or estimation criterion are crucial in the estimation process since they determine the influence of dynamics and data onto the state estimate.

At the heart of estimation theory is the scheme derived by Kalman in 1960, or Kalman Filter, which is a simplification of Bayesian estimation for the case of linear systems. For linear models, the Kalman Filter is the sequential, unbiased, minimum error variance estimate based upon a linear combination of all past measurements and dynamics. There are two steps in its sequential algorithm: (i) the forecast of the state vector and of its error covariance, and (ii), the data-forecast melding and error update, which include the linear combination of the dynamical forecast with the difference between the data and model predicted values for those data (i.e. data residuals). The matrix weighting these data residuals is called the Kalman gain matrix. The main advantage of the Kalman Filter in ocean applications is that it quantitatively generates its own error forecast and analysis.
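
The two-step sequential algorithm can be sketched for a scalar (single state variable) system; the dynamics coefficient, noise levels and observations below are illustrative assumptions, not oceanic values.

```python
def kalman_filter_1d(x0, P0, a, q, r, observations):
    """Scalar Kalman Filter sketch: dynamics x_k = a * x_{k-1} + noise
    with variance q, measurement y_k = x_k + noise with variance r.
    Returns the sequence of (estimate, error variance) pairs."""
    x, P = x0, P0
    estimates = []
    for y in observations:
        # (i) forecast of the state and of its error variance
        x = a * x
        P = a * P * a + q
        # (ii) melding: weight the data residual (y - x) by the Kalman gain
        K = P / (P + r)          # Kalman gain
        x = x + K * (y - x)      # correct the forecast with the data residual
        P = (1.0 - K) * P        # updated (analysis) error variance
        estimates.append((x, P))
    return estimates

est = kalman_filter_1d(x0=0.0, P0=1.0, a=1.0, q=0.01, r=0.25,
                       observations=[1.0, 1.1, 0.9])
```

Note that the filter quantitatively produces its own error forecast and analysis: the error variance P shrinks as each observation is assimilated, which is the main advantage cited above.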

The Kalman Smoother is also a linear, unbiased, minimum error variance estimate. However, it solves a smoothing problem: to estimate the ocean state at a certain time, it uses the data available before and after that time. A common version of this scheme first computes the Kalman Filter estimate. The smoothing is then carried out by propagating the future data information backward in time, correcting the Kalman Filter estimate using error covariance and adjoint dynamical transition matrices. This implies that both the Kalman Filter state and error covariances need to be stored at all data-correction times, which is usually demanding on memory resources.

In large part because of the linear hypothesis and the costs of these two optimal filtering and smoothing approaches, a series of approximate or suboptimal estimation schemes have been derived and employed for ocean applications. They are now described, from the simplest to the most complex.

Direct Insertion consists of replacing the forecast values by the observed ones at all data points; the a priori statistical hypothesis is that the data are exact. The Blending estimate is a scalar linear combination of the forecast and data values at all data points, with user-assigned weights. The Nudging or Newtonian Relaxation Scheme "relaxes" the dynamical model towards the observations. To do so, terms proportional to the difference between the data and state variables (i.e. data residuals) are added to the dynamical model. The relaxation coefficients can vary in time but must not be too large, to avoid disrupting the model; they should be related to dynamical scales and a priori estimates of model and data errors. From the filtering point of view, the nudging scheme simplifies the Kalman Filter by assigning a diagonal Kalman gain, usually with constant elements.
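
A minimal nudging sketch for an assumed toy decay model is given below; the nudging coefficient g and all other values are arbitrary illustrative choices.

```python
def nudge_step(x, x_obs, dt, decay=0.1, g=0.5):
    """One Euler step of a toy decay model dx/dt = -decay * x, with a
    Newtonian relaxation term g * (x_obs - x) added when an observation
    exists (g is a user-chosen nudging coefficient, kept small enough
    not to disrupt the model dynamics)."""
    tendency = -decay * x
    if x_obs is not None:
        tendency += g * (x_obs - x)   # relax toward the data residual
    return x + dt * tendency

# Integrate with data available every fifth step only.
x = 2.0
for k in range(50):
    obs = 1.0 if k % 5 == 0 else None
    x = nudge_step(x, obs, dt=0.1)
```

Between observation times the model evolves freely; at observation times the relaxation term pulls the trajectory toward the data, which is the essence of the scheme.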

Optimal interpolation (OI), also referred to as statistical interpolation in meteorology, is a simplification of the Kalman Filter. The data-forecast melding or analysis step is still a linear combination of the dynamical forecast with the data residuals, but in the OI scheme, the matrix weighting these residuals or gain matrix is empirically assigned. In the Kalman Filter, the gain is computed and updated internally. If the assigned OI gain is diagonal, the OI and above nudging scheme can be equivalent. However, the OI gain is usually not diagonal, but a function of empirical correlation and error matrices.
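
The OI analysis step can be sketched as follows; the Gaussian correlation model, its length scale, and the error covariances are assumptions made for illustration (in OI these quantities are empirically assigned rather than computed by the filter).

```python
import numpy as np

def oi_analysis(x_f, obs, H, B, R):
    """OI analysis step: x_a = x_f + K (y - H x_f), with an empirically
    assigned gain K = B H^T (H B H^T + R)^{-1} built from a prescribed
    background error covariance B and observation error covariance R."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_f + K @ (obs - H @ x_f)

# Three-point grid with a single observation at the middle point.
grid = np.array([0.0, 1.0, 2.0])
L = 1.0                                    # assumed correlation length scale
B = np.exp(-(grid[:, None] - grid[None, :])**2 / (2 * L**2))  # Gaussian correlations
H = np.array([[0.0, 1.0, 0.0]])            # observe the middle grid point only
R = np.array([[0.25]])                     # assumed observation error variance
x_f = np.zeros(3)
x_a = oi_analysis(x_f, np.array([1.0]), H, B, R)
```

The non-diagonal gain spreads the single data residual to the neighboring grid points according to the prescribed correlations, which is exactly how OI differs from simple nudging with a diagonal gain.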

The method of Successive Corrections, instead of correcting the forecast only once as in the previous methods, performs multiple but simplified linear combinations of the data and forecast. Conditions for convergence to the Kalman Filter have been derived, but in practice only two or three iterations are often performed. The final estimate then varies with the finite sequence of weights used in the iterations. Frequently, the scales or processes of interest are corrected one after the other, e.g. large-scale first, then mesoscale.
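
The scheme can be sketched in one dimension with an assumed Gaussian weight whose radius shrinks from pass to pass, so large scales are corrected first; the under-relaxation factor and radii are arbitrary illustrative choices.

```python
import numpy as np

def successive_corrections(grid, x, obs_pos, obs_val, radii):
    """Successive-corrections sketch: each pass spreads the current data
    residuals over the grid with a Gaussian weight whose radius shrinks
    between passes (large scales first, then smaller scales)."""
    for L in radii:
        for p, y in zip(obs_pos, obs_val):
            w = np.exp(-(grid - p)**2 / (2 * L**2))   # influence weights
            resid = y - np.interp(p, grid, x)         # residual at the obs point
            x = x + w * resid * 0.5                   # under-relaxed correction
    return x

grid = np.linspace(0.0, 10.0, 11)
x = np.zeros(11)
x = successive_corrections(grid, x, [5.0], [2.0], radii=[4.0, 1.0])
```

Because only two passes with a fixed weight sequence are made, the result depends on that finite sequence, as noted above; it is not the Kalman Filter limit.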

5.2 Control Theory

All control theory or variational assimilation approaches perform a global time-space adjustment of the model solution to all observations and thus solve a smoothing problem. The goal is to minimize a cost function penalizing the time-space misfits between the data and ocean fields, with the constraints of the model equations and their parameters. The misfits are interpreted as part of the unknown controls of the ocean system. The model parameters can also be optimized, as in estimation theory. An advantage is that constraints can be simply added to the cost function, like time-space smooth field constraints. Estimation theory requires a priori statistical assumptions for the model noise and data errors. Similarly, control theory results depend on the a priori control weights and penalties added to the cost function. The dynamical model can be considered as either a strong or a weak constraint. Strong constraints correspond to the choice of infinite weights for the model equations; the only free variables are the initial conditions, boundary conditions and/or model parameters. A rational choice for the weights and form of the cost function is important. Statistically, a logical selection is to choose the dynamical model (data) weight inversely proportional to an a priori specified model (data) error.

In an "Adjoint method" the dynamical model is a strong constraint. The cost function is usually the sum of two penalties. One penalty weights the uncertainties in the initial conditions, boundary conditions and parameters with their respective a priori error covariances. The other is the sum over time of all data-model misfits at observation locations, weighted by measurement error covariances. A classical approach to solve this constrained optimization is to use Lagrange multipliers. Setting to zero the derivatives with respect to the free variables then yields Euler-Lagrange equations, one of which is the so-called adjoint equation. An iterative algorithm for solving these equations has often been termed the "adjoint method." At each iteration the forward and adjoint equations are integrated successively, and the gradient of the cost function is computed. Minimization along the gradient's direction (line search) at the end of each iteration leads to new initial conditions, boundary conditions and parameter values. Another forward and backward iteration can then be started, and so on, until the gradient is small enough.
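
For a scalar linear model the forward/adjoint machinery reduces to simple recursions, which allows a compact sketch. The model, coefficients and observations below are illustrative assumptions; the cost here contains only the data-misfit penalty, and the control variable is the initial condition.

```python
def adjoint_gradient(x0, a, obs, r):
    """Gradient of the strong-constraint cost
    J(x0) = sum_k (x_k - y_k)^2 / (2 r),  with dynamics x_{k+1} = a x_k,
    computed by a backward (adjoint) sweep. For this scalar linear model
    the adjoint of the dynamics is simply multiplication by a."""
    # forward integration, storing the trajectory
    xs = [x0]
    for _ in range(len(obs) - 1):
        xs.append(a * xs[-1])
    # backward (adjoint) integration, accumulating the data misfits
    lam = 0.0
    for x, y in zip(reversed(xs), reversed(obs)):
        lam = a * lam + (x - y) / r
    return lam            # dJ/dx0

# Descend along the adjoint-computed gradient to recover the initial condition.
a, r = 0.9, 1.0
truth0 = 2.0
obs = [truth0 * a**k for k in range(5)]   # perfect observations of the true run
x0 = 0.0
for _ in range(200):
    x0 -= 0.2 * adjoint_gradient(x0, a, obs, r)
```

Each iteration mirrors the algorithm described above: a forward integration, a backward adjoint integration yielding the gradient, and a step along the gradient direction toward improved initial conditions.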

Expanding classic inverse problems to the weak constraint fit of both data and dynamics leads to Generalized Inverse Problems, as reviewed by Bennett in 1992. The best fit is still often defined in a least-squares sense: the penalty to be minimized is then as in adjoint methods, except that a third term now consists of dynamical model uncertainties weighted by a priori model error covariances. In the Euler-Lagrange equations, the dynamical model uncertainties thus couple the state evolution with the adjoint evolution. Without further manipulations, this coupling renders the iterative solution of the backward and forward equations difficult. The Representer Method is an algorithm for solving generalized inverse problems. It was originally derived for linear systems and, in that case, leads to the same solution as the Kalman Smoother (Sect. 5.1). With nonlinear systems, the Representer Method and Kalman Smoother are usually iterated.

5.3 Direct Minimization Methods

Such methods directly minimize cost functions similar to those of generalized inverse problems, but often without using the Euler-Lagrange equations. Descent methods iteratively determine directions locally "descending" along the cost function surface. At each iteration, a line search minimization is performed along the current direction and a new descending direction is found. With steepest descent, this direction is the opposite of the local cost function gradient; this is the simplest but slowest method (it converges only linearly). The search directions of the conjugate-gradient method are orthogonal with respect to the local Hessian matrix (second derivatives of the cost function). Only gradients of the cost function are evaluated and stored; conjugate-gradient methods thus have low storage requirements for good convergence rates. Newton and quasi-Newton methods iteratively approximate the cost function by a local quadratic form and explicitly minimize that approximation. They can have quadratic rates of convergence. A drawback of descent methods is that they are sensitive to initialization. For sufficiently nonlinear cost functions (e.g. nonlinear dynamics), they are restarted to avoid local minima. Nonlocal schemes like simulated annealing and "genetic" algorithms then become attractive.
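
The contrast between steepest descent and a Newton step can be sketched on a small quadratic cost (the matrices are illustrative, not an oceanic cost function): steepest descent needs many iterations, while Newton's method, using the Hessian, reaches the minimum of a quadratic in a single solve.

```python
import numpy as np

# Minimize the quadratic cost J(x) = 0.5 x^T A x - b^T x, whose gradient
# is A x - b and whose Hessian is A (illustrative 2-D example).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

# Steepest descent: follow the negative gradient with a fixed small step.
x_sd = np.zeros(2)
for _ in range(100):
    x_sd = x_sd - 0.2 * grad(x_sd)

# Newton: minimize the local quadratic model exactly (Hessian = A here),
# which for a quadratic cost is a single linear solve.
x_newton = np.linalg.solve(A, b)
```

On a truly quadratic cost both converge to the same minimizer; the linear-versus-quadratic convergence rates noted above show up as the 100 small steps against one Newton solve.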

Simulated annealing schemes are based on an analogy to the way slowly cooling solids arrange themselves into a state of perfect crystal, with a minimum global energy. To simulate this relatively random process, an algorithm generates a sequence of states such that new states with lower energy (lower cost) are always accepted, while new states with higher energy (higher cost) are accepted with a certain probability. The cost function plays the role of energy and temperature is replaced by a control parameter. The advantages are the origin in theoretical physics, which leads to convergence criteria, and the relative independence from the specifics of the cost function and initial guess, which allows nonlocal searches. The main disadvantages are the large computer requirements and, in practice, uncertain convergence to the global minimum.
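
A minimal simulated-annealing sketch on an assumed double-well cost function is given below; the cooling schedule, move size and step count are arbitrary illustrative choices, not tuned values.

```python
import math
import random

def simulated_annealing(cost, x0, steps=5000, t0=1.0, seed=1):
    """Simulated-annealing sketch: always accept downhill moves; accept
    uphill moves with probability exp(-dE / T), where the control
    parameter T is slowly lowered by an illustrative cooling schedule."""
    rng = random.Random(seed)
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    for k in range(steps):
        T = t0 / (1 + 0.01 * k)            # cooling schedule (control parameter)
        xn = x + rng.uniform(-0.5, 0.5)    # random candidate move
        en = cost(xn)
        if en < e or rng.random() < math.exp(-(en - e) / T):
            x, e = xn, en                  # accept the move
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Assumed double-well cost whose global minimum lies near x = 2;
# the search starts in the other (shallower) well near x = -2.
cost = lambda x: (x**2 - 4)**2 + 0.5 * (x - 2)**2
x_best, e_best = simulated_annealing(cost, x0=-2.0)
```

While at high temperature the chain can hop the barrier between wells, convergence to the global minimum in finite time is not guaranteed, which is the practical disadvantage noted above.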

Genetic algorithms are based upon searches generated in analogy to the genetic evolution of natural organisms. At each iteration or "generation" of the search, the genetic scheme keeps a population of approximate solutions. The population is evolved by manipulations of past populations that mimic genetic transformations such that the likelihood of producing better data-fitted generations increases for new populations. Genetic algorithms allow nonlocal minimum searches, but convergence to the global minimum is not assured due to the limited theoretical base.

5.4 Stochastic and Hybrid Methods

Stochastic Methods, based on nonlinear stochastic dynamical models and stochastic optimal control, are beginning to be employed for ocean data assimilation. Instead of using brute force like descent algorithms, they try to solve the conditional probability density equation (Fokker-Planck equation) associated with ocean models. Minimum error variance, maximum likelihood or minimax estimates can then be determined from this probability density function. No linearity or Gaussianity assumptions are required, but for large ocean systems, parallel machines are usually employed since the Fokker-Planck integration is often implemented by Monte Carlo ensemble calculations.

Hybrid methods are combinations of previously discussed schemes, for both state and parameter estimation. An example is the class of Error Subspace Statistical Estimation (ESSE) schemes. The main assumption of such schemes is that the error space in most ocean applications can be efficiently reduced to its essential components. Smoothing problems based on Kalman ideas, but with nonlinear stochastic models and Monte Carlo ensemble calculations, can then be solved. Usually, the filtering problem with nonlinear stochastic dynamics is solved first, leading to a suboptimal but data-corrected estimate of the evolution of the state variables. This estimate then initializes the smoothing problem, which is solved by statistical approximation. Combinations of variational and direct minimization methods are other examples of hybrid schemes.
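The core reduction idea of ESSE-type schemes, retaining only the dominant directions of the error covariance, can be sketched with a singular value decomposition of ensemble anomalies; the synthetic ensemble below is constructed so that its spread is dominated by a few modes:

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic ensemble of state vectors (n state variables x m members) whose
# spread is dominated by a few directions, mimicking a reducible error space
n, m, k = 200, 50, 3
modes = rng.standard_normal((n, k))                  # dominant "error modes" (illustrative)
amps = rng.standard_normal((k, m)) * np.array([[5.0], [2.0], [1.0]])
ensemble = modes @ amps + 0.05 * rng.standard_normal((n, m))

# Error subspace reduction: an SVD of the ensemble anomalies keeps the
# essential components of the sampled error covariance
anom = ensemble - ensemble.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(anom, full_matrices=False)
variance = s**2 / (m - 1)
explained = variance[:k].sum() / variance.sum()      # variance fraction in k modes
E = U[:, :k]                                         # reduced basis of the error subspace
```

Working in the reduced basis `E` rather than with the full n x n covariance is what makes Kalman-type filtering and smoothing affordable for realistic ocean state dimensions.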


In this Section, we present a series of recent results that serve as a small but representative sample of the wide range of research carried out as data assimilation was established in physical oceanography. Insofar as possible, the data utilized, the a priori dynamical and error assumptions, the data assimilation schemes and melding criteria (Sect. 5), and the results obtained are discussed for every example.

6.1 Global versus local data assimilation via nudging

Malanotte-Rizzoli and Young in 1995 investigated the effectiveness of varied data sets to correct and control errors via data assimilation. They focused on two data sets of different types and resolutions in time and space, and on simulations of the Gulf Stream region at mesoscale resolutions and for periods of the order of 3 months, over a large-scale domain referred to here as the global scale.

One objective was to assimilate data of high quality but collected at localized mooring arrays, and to investigate the effectiveness of such data in improving the realistic attributes of the simulated ocean fields obtained. If successful, such estimates allow for dynamical and process studies. To assess performance, the authors compared the assimilation of global versus local data sets at both the global and local scales. The global data consisted of biweekly fields of sea surface dynamic height, and of temperature and salinity in three dimensions, over the entire region, as provided by the Optimal Thermal Interpolation Scheme (OTIS) of the U.S. Navy Fleet Numerical Oceanography Center. The local data mainly involved daily velocities from two mooring arrays of the Synoptic Ocean Prediction (SYNOP) program. The dynamical model was the Semispectral Primitive Equation Model (SPEM) of Rutgers University, and the assimilation algorithm a suboptimal nudging scheme (Sect. 5.1).

The global data and the local data were first assimilated alone, and then together. Results show that the nudging coefficients need to be adjusted to the dynamics and data (see Sect. 5.1). The "gentle" assimilation of the spatially dense global OTIS data was necessary for the model to remain on track during the three-month period. The "strong" assimilation of the daily but local SYNOP data was required to achieve local dynamical accuracies, especially for velocities. Locally, the density field could be recovered from the assimilation of the velocity data.
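The role of the nudging coefficient can be illustrated with a toy scalar model (see Sect. 5.1); the dynamics, observation stream and the "gentle" versus "strong" coefficient values below are invented for illustration:

```python
import numpy as np

def nudge_run(obs_series, first_guess, gamma, dt=0.1, damping=0.2):
    """Integrate a toy model dx/dt = -damping*x plus a nudging (relaxation)
    term gamma*(obs - x) pulling the state toward the observations."""
    x = first_guess
    states = []
    for obs in obs_series:               # one observation available per time step
        x = x + dt * (-damping * x + gamma * (obs - x))
        states.append(x)
    return np.array(states)

steps = 200
obs = np.ones(steps)                     # observed "true" state held at 1.0
weak = nudge_run(obs, first_guess=0.0, gamma=0.5)    # "gentle" assimilation
strong = nudge_run(obs, first_guess=0.0, gamma=5.0)  # "strong" assimilation
```

With the stronger coefficient the model state is pulled close to the observations; with the gentler one the model dynamics retain more influence, which is why the coefficients must be tuned to the data type and the dynamics.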

6.2 Small-scale convection and data assimilation

In 1994, Miller, Zaron and Bennett addressed some of the issues involved in using variational or control theory approaches (Sect. 5.2) to assimilate data into dynamical models with convective adjustment. Because of limited data and computing resources, most practical regional and global ocean models cannot resolve motions that result from static instabilities of the water column; these motions and their effects are therefore parameterized. A common parameterization is the so-called convective adjustment. This consists of assigning infinite mixing coefficients (e.g. heat and salt conductivities) to water at a given level that has higher density than the water just below. In practice, the densities of the two parcels are set to the same value in such a way that heat and mass are conserved. In a numerical model, at every time step, all water points are checked and all statically unstable profiles replaced by stable ones.
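A minimal sketch of such a convective adjustment sweep follows, here mixing density directly and assuming equal layer thicknesses (real models adjust temperature and salinity so that heat and salt are conserved):

```python
import numpy as np

def convective_adjustment(density, max_sweeps=100):
    """Replace statically unstable pairs (denser water above lighter water)
    by their mixed value, conserving the vertically integrated density
    (a proxy for heat/salt conservation when layer thicknesses are equal).
    `density` is ordered from the surface downward."""
    rho = np.array(density, dtype=float)
    for _ in range(max_sweeps):
        unstable = False
        for k in range(len(rho) - 1):
            if rho[k] > rho[k + 1]:          # parcel denser than the water below it
                mixed = 0.5 * (rho[k] + rho[k + 1])
                rho[k] = rho[k + 1] = mixed
                unstable = True
        if not unstable:                     # profile is statically stable: done
            break
    return rho

profile = np.array([1026.0, 1025.5, 1027.0, 1027.5])   # unstable near the surface
adjusted = convective_adjustment(profile)
```

The repeated sweeping is needed because mixing one pair can destabilize the pair below it; the scheme is also the source of the non-smoothness discussed next, since the adjustment switches on and off discontinuously.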

The main issues in using such convective adjustment schemes with variational data assimilation approaches (Sect. 5.2) are that: (i) the dynamics is no longer governed by smooth equations, which often prevents the simple definition of adjoint equations; (ii) the optimal ocean fields may evolve through "nonphysical" states of static instability; and (iii) the optimization involves the solution of a nonlinear problem, even if the dynamics is linear. Ideally, the optimal fields should be forced to be statically stable. This introduces a set of inequality constraints that have to be satisfied. Considering a simple "toy" problem, the authors presented these issues and suggested several solutions, so as to provide guidance for realistic situations. They showed that one of the simplest variational formulations has several minima and at times produces evolutions with unphysical behavior. They then presented modifications that lead to more meaningful solutions and suggested algorithms for practical and realistic models. One option is the "weak" static stability constraint (Sect. 5.2): a penalty functional that ensures approximate static stability is added to the cost function with a small error or large weight. In that case, static stability can be violated, but only in a limited fashion. Another option is the "strong constraint" form of static stability, which can be enforced via Lagrange multipliers. Convex programming methods, which explicitly account for inequality constraints, could also be utilized.
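The "weak" static stability constraint can be sketched as a penalized cost function; the quadratic data misfit and the penalty weight below are illustrative:

```python
import numpy as np

def cost(rho, rho_obs, weight=100.0):
    """Data misfit plus a 'weak' static-stability penalty: statically unstable
    stratification (density decreasing with depth index) is allowed, but its
    violation is penalized with a large weight (i.e. a small assumed error)."""
    misfit = np.sum((rho - rho_obs) ** 2)
    # positive wherever denser water lies above lighter water
    instability = np.maximum(rho[:-1] - rho[1:], 0.0)
    return misfit + weight * np.sum(instability ** 2)
```

Minimizing this functional steers the optimal fields toward statically stable profiles without imposing the inequality constraints exactly, which keeps the optimization problem smooth enough for standard descent methods.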

6.3 General circulation from inverse methods

The problem of determining the ocean general circulation by inverse methods was exemplified by Martel and Wunsch in 1993. As reviewed in detail by Wunsch in 1996, the central idea is to combine the equations governing the oceanic motion and relevant oceanic tracers with all available noisy observations, so as to estimate by inversion the large-scale steady-state ocean properties and their errors. Martel and Wunsch considered the three-dimensional circulation of the North Atlantic and its uncertainty (Fig. 4a), for the 1980-85 period and with a resolution of 1° in latitude and longitude. The observations available consisted of: objective analyses of temperature, salinity, oxygen and nutrients data; climatological ocean-atmosphere fluxes of heat, water vapor and momentum (wind-stress); climatological river runoffs; and current meter and float records. These data were obtained with various sensors and platforms, on various resolutions, as illustrated by Fig. 4b. A set of steady-state equations were assumed to hold a priori, up to small unknown noise terms. The tracer dynamics was assumed to be advection and diffusion. The velocities in the tracer equations were assumed in geostrophic thermal-wind balance except in the top layer where an Ekman transport was added. Vertical velocities were subject to hydrostatic balance and mass continuity. The problem was linearized by assuming known derivatives of tracer fields, computed from the local field observations. The problem is inverse because tracers and velocities (i.e. the state variables) are known up to some noise; the unknowns are the fields of reference level velocities, vertical velocity, and tracer mixing coefficients.


Figure 4 - Example of the calculation of general circulation from inverse methods


Numerically, equations were written in a finite-difference form approaching the resolution of circulation dynamical models. The discrete equations were then integrated over a set of nested grids of increasing resolutions, from the full domain to boxes of the size shown on Fig. 4a. The flows and fluxes at the boundaries of these ocean subdivisions were computed from the data, at 1° resolution. The resulting discrete system contained about 29000 unknowns and 9000 equations. It was solved using a tapered (normalized) least-squares method with a sparse conjugate-gradient algorithm (see Sect. 5.3). The estimates of the total flow field and of its standard error are plotted on Figs. 4c and 4d. The Gulf Stream is, for example, visible along the east coast of North America, although it is somewhat weak and broad away from the data sections (e.g. Fig. 4). Several recirculation cells and the Labrador Current are also in accord with previous results. In 1993, such a rigorous large-scale, dense and eclectic inversion was an important achievement. However, as the authors mentioned, the use of steady-state dynamics combined with non-synoptic data may have reached its useful limit in this example. Dynamical models combined with data of higher resolution appear necessary for more detailed investigations.
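The tapered least-squares idea can be sketched on a small underdetermined system; the dimensions and noise level below are illustrative, and a dense solver stands in for the sparse conjugate-gradient algorithm used in the actual inversion:

```python
import numpy as np

rng = np.random.default_rng(2)

# An underdetermined toy inversion: fewer equations (rows) than unknowns
# (columns), as in the ~9000-equation / ~29000-unknown North Atlantic system
m, n = 30, 90
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)       # noisy "data" constraints

# Tapered least squares: minimize |Ax - b|^2 + alpha^2 |x|^2 by augmenting
# the system, which selects a small solution consistent with the noisy equations
alpha = 0.1
A_aug = np.vstack([A, alpha * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

residual = np.linalg.norm(A @ x_hat - b)
```

The tapering (the `alpha` term) regularizes the otherwise non-unique inversion: of all solutions nearly consistent with the equations, the smallest one is selected, and the formal error estimates follow from the same augmented system.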

6.4 Global ocean tides estimated from generalized inverse methods

In 1994, Egbert, Bennett and Foreman estimated global ocean tides using a generalized inverse scheme, with the intent of removing these tides from the altimetric data collected by the TOPEX/POSEIDON mission and so allowing the study of sub-tidal ocean dynamics.

An illustrative inversion was first carried out, combining linear dynamics with a set of 80 harmonically analyzed open-ocean tide gauge data for a single tidal constituent. After cross-validations on this initial computation, a scheme for the inversion of the TOPEX/POSEIDON crossover data for multiple tidal constituents was then derived and applied to 38 cycles of the satellite data, leading to global estimates of the four principal tidal constituents (M2, S2, K1 and O1) at about 1° resolution (on a 512 x 256 grid). The dynamical model was the linearized, barotropic shallow water equations, corrected for the effects of ocean self-attraction and tidal loading. The state variables were the horizontal velocities and sea surface height fields. The data sets were linked to adequate measurement models, including error estimates. Error models for the hydrodynamics were also derived. The penalty functional consisted of an error-weighted least-squares fit of the data and hydrodynamics (Sect. 5.2).

The generalized inverse tidal problem was solved by the representer method which involves a linear combination of the modeled state evolution with representer functions for each datum (Sect. 5.2). Representer functions are related to Green's functions: they link a given datum to all values of the state variables over the period considered. The calculation of these representers was accomplished by solving the Euler-Lagrange equations repeatedly, by parallel computations on a Connection Machine CM5. The size of the problem was reduced in two steps. The full set of 6350 TOPEX/POSEIDON cross-overs was winnowed to an evenly spaced subset of 986 cross-over points (see Fig. 5a). The resulting representer matrix was then reduced using a singular value decomposition approach.
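The second reduction step, truncating the representer matrix by singular value decomposition, can be sketched as follows; the synthetic matrix, its decaying spectrum and the truncation tolerance are all illustrative stand-ins for the actual 986-point representer matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# A synthetic symmetric "representer matrix" (one row/column per datum) with
# a rapidly decaying spectrum, standing in for the reduced cross-over matrix
d = 120
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]    # random orthogonal basis
spectrum = np.exp(-np.arange(d) / 5.0)              # decaying "singular values"
R = (Q * spectrum) @ Q.T                            # R = Q diag(spectrum) Q^T

# Reduce the matrix by SVD: retain only modes above a relative tolerance
U, s, _ = np.linalg.svd(R)
r = int(np.sum(s > 1e-4 * s[0]))

# Representer coefficients for an illustrative data-minus-model misfit,
# computed in the reduced basis (a truncated pseudo-inverse of R)
innovation = rng.standard_normal(d)
coeffs = U[:, :r] @ ((U[:, :r].T @ innovation) / s[:r])
```

Discarding the weakly determined modes both shrinks the linear system to be solved and filters the parts of the data the dynamics cannot constrain.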

The resulting amplitude and phase of the M2 constituent are shown on Fig. 5b. The M2 fields are qualitatively similar to previous results, clearly indicating dominant components of the M2 tide, e.g. the equatorial Pacific 3/2 wave. The positions of amphidromes are also consistent. However, when compared to previous tidal model estimates, the inversion result is noticeably smoother and in better agreement with altimetric and ground truth data.


Figure 5 - Global ocean tides estimated from generalized inverse methods



The melding of data and dynamics is a powerful, novel and versatile methodology for parameter and field estimation. Data assimilation can be expected both to accelerate research progress in complex, modern multiscale interdisciplinary ocean science, and to enable marine technology and maritime operations not otherwise possible.


We are grateful to Dr. Patrick Haley, Mr. San Liang, Ms. Patricia Moreno and Dr. Paul Palmer for their comments on the manuscript. We thank Ms. Gioia Sweetland, Ms. Peggy Zaldivar, Mr. Wayne Leslie and Ms. Maureen Armstrong for manuscript preparation, and the Office of Naval Research for partial support.


Anderson, D. and J. Willebrand (eds.), 1989. Oceanic Circulation Models: Combining Data and Dynamics. Kluwer Academic Publishers, Dordrecht, The Netherlands.

Bennett. A.F., 1992. Inverse Methods in Physical Oceanography. Cambridge Monographs on Mechanics and Applied Mathematics, Cambridge University Press.

Brasseur, P. and J.C.J. Nihoul (eds.), 1994. Data Assimilation: Tools for Modelling the Ocean in a Global Change Perspective. In NATO ASI Series, Series I: Global Environmental Change, Vol. 19, 239 pp.

Egbert, G.D., A.F. Bennett and M.G.G. Foreman, 1994. TOPEX/POSEIDON Tides Estimated Using a Global Inverse Model. J. Geophys. Res., 99, 24,821-24,852.

Haidvogel, D.B. and A.R. Robinson (eds.), 1989. Special issue on data assimilation. Dyn. Atmos. Oceans, 13, 171-517.

Lermusiaux, P.F.J., 1999. Estimation and Study of Mesoscale Variability in the Strait of Sicily. Dyn. of Atmos. Oceans, Special Issue in honor of Prof. A.R. Robinson, 29, 255-303.

Lermusiaux, P.F.J. and A.R. Robinson, 1999. Data Assimilation via Error Subspace Statistical Estimation, Part I: Theory and Schemes. Mon. Weather Rev., 127(7), 1385-1407.

Malanotte-Rizzoli, P. and R.E. Young, 1995. Assimilation of Global Versus Local Data Sets into a Regional Model of the Gulf Stream System: I. Data Effectiveness. J. Geophys. Res., 100, 24,773-24,796.

Malanotte-Rizzoli, P. (ed.), 1996. Modern Approaches to Data Assimilation in Ocean Modeling, Elsevier Oceanography Series, 455 pp.

Martel, F. and C. Wunsch, 1993. The North Atlantic Circulation in the Early 1980s - An Estimate from Inversion of a Finite-Difference Model. J. Phys. Oceanogr., 23, 898-924.

Miller, R.N., E.O. Zaron and A.F. Bennett, 1994. Data Assimilation in Models with Convective Adjustment. Mon. Weather Rev., 122, 2607-2613.

Robinson, A.R., P.F.J. Lermusiaux and N.Q. Sloan, III, 1998. Data Assimilation, in THE SEA: The Global Coastal Ocean I, Processes and Methods (K.H. Brink and A.R. Robinson, editors), Volume 10, John Wiley and Sons, New York, NY.

Robinson, A.R., 1999. Forecasting and Simulating Coastal Ocean Processes and Variabilities with the Harvard Ocean Prediction System, in Coastal Ocean Prediction, C.N.K. Mooers, Ed., AGU Coastal and Estuarine Study Series, 77-100.

Robinson, A.R and J. Sellschopp, 2000. Rapid Assessment of the Coastal Ocean Environment, in Ocean Forecasting: Conceptual Basis and Applications, (N. Pinardi and J.D. Woods, Ed.).

Robinson, A.R and P.F.J. Lermusiaux, 2000. Interdisciplinary Data Assimilation. Harvard Reports in Physical/Interdisciplinary Ocean Science, No. 63.

Robinson, A.R and P.F.J. Lermusiaux, 2000. Models: data assimilation physical/interdisciplinary. Encyclopedia of Ocean Sciences. In press.

Wunsch, C., 1996. The Ocean Circulation Inverse Problem, Cambridge University Press, 456 pp.


Prepared for html by W.G. Leslie 29 November 2000