Interpreting Pressure and Flow Rate Data from Permanent Downhole Gauges Using Data Mining Approaches


Yang Liu








The Permanent Downhole Gauge (PDG) is a promising resource for real-time downhole measurement. However, a bottleneck in utilizing PDG data is that the commonly applied well test methods are limited in practice to short sections of shut-in data only, and thus fail to utilize the long-term PDG record. Recent technology developments have given PDGs the ability to measure both flow rate and pressure, so the limitation to shut-in periods could, theoretically, be avoided. In practice, however, it remains difficult to make use of the combined flow rate and pressure data over a PDG record of long duration, due to the noise in both signals as well as uncertainty about the appropriate reservoir model over such a long period.

The successful application of data mining in computer science shows its great potential for revealing relationships among variables in voluminous data sets. This inspired us to investigate data mining methodologies as a way to reveal the relationship between flow rate and pressure histories recorded by PDGs, and hence to extract the reservoir model.

In this study, nonparametric kernel-based data mining approaches were studied. The data mining process was conducted in two stages, namely learning and prediction. In the learning stage, the reservoir model was obtained implicitly, in a suitable functional form in the high-dimensional reproducing kernel Hilbert space defined by the kernel function, once the learning algorithm converged after being trained on the pressure and flow rate data. In the prediction stage, the data mining algorithm produced a pressure prediction for an arbitrary flow rate history (usually a constant flow rate history, for simplicity). This flow rate history and the corresponding pressure prediction revealed the reservoir model underlying the variable PDG data. In a second mode, recalculating the pressure history from the measured flow rate history effectively removed noise from the pressure signal; recalculating the pressure from a denoised flow rate history removed noise from both signals.
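As a concrete illustration of the two stages, the sketch below uses plain kernel ridge regression with a Gaussian kernel: the learning stage solves for dual weights against the training data, and the prediction stage evaluates the learned function on a new input. The function names, the kernel choice, and the regularization are illustrative assumptions, not the thesis's exact algorithm; the input rows stand in for whatever flow-rate-history features a given method uses.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_kernel_model(X_train, p_train, lam=1e-3, gamma=0.5):
    """Learning stage: solve (K + lam*I) alpha = p for the dual weights.
    The 'reservoir model' lives implicitly in alpha and the kernel."""
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), p_train)
    return alpha

def predict(X_new, X_train, alpha, gamma=0.5):
    """Prediction stage: evaluate the learned function on new inputs,
    e.g. features of a constant flow rate history."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

The denoising mode described above corresponds to calling `predict` on the training inputs themselves: the regularized fit returns a smoothed version of the measured pressure.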

In this work, a series of data mining methods using different kernel functions and input vectors were investigated. Methods A, B, and C utilized simple kernel functions. Methods A and B did not require knowledge of the breakpoints in advance; the difference between the two was that Method A used a low-order kernel function with a high-order input vector, while Method B used a high-order kernel function with a low-order input vector. Method C required knowledge of the breakpoints. Nine synthetic test cases with different well/reservoir models were used to test these methods. The results showed that all three methods reproduced the pressure well over the training flow rate history and predicted the pressure accurately for the constant flow rate history; however, each had limitations in different respects.
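The "kernelization over superposition" structure of these simple kernel methods can be sketched as follows: the superposition sum over prior flow rate changes is folded into the input vector first, and a kernel is applied to that vector afterward. The particular feature terms below (rate-weighted log-time and linear-time sums) are hypothetical stand-ins chosen for illustration, not the thesis's actual input vectors.

```python
import numpy as np

def superposition_features(t, event_times, event_rates):
    """Build input vectors in which the superposition over prior
    flow-rate change events is already summed out, so a simple kernel
    can then be applied to the resulting low-dimensional features."""
    feats = []
    for tk in t:
        active = event_times < tk  # only events before time tk contribute
        dt = tk - event_times[active]
        q = event_rates[active]
        s_log = np.sum(q * np.log(dt))  # rate-weighted log-time sum
        s_lin = np.sum(q * dt)          # rate-weighted linear-time sum
        feats.append([s_log, s_lin])
    return np.array(feats)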

The limitations of the simple kernel methods led us to reconsider the ordering of kernelization and superposition. In the simple kernel methods, kernelization was deployed over superposition, which was reflected as a summation inside the input vector. However, the architecture of superposition over kernelization is better suited to capturing the essence of the transient, and this approach was implemented with a convolution kernel in Method D. The convolution kernel was originally invented and applied in natural language machine learning, where it decomposed words into parts and evaluated the parts using a simple kernel function. This inspired us to apply the convolution kernel to PDG data by decomposing the pressure transient into a series of pressure responses to the previous flow rate change events; the superposition was then reflected as a summation of simple kernels (hence superposition over kernelization). Sixteen synthetic and real field test cases were tested using this approach, and the method recovered the reservoir model successfully in all cases. By comparison, Method D outperformed all of the simple kernel methods in stability and accuracy across all test cases, without requiring the breakpoints in advance.
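The "superposition over kernelization" structure can be sketched generically: each transient is decomposed into per-event parts, a simple kernel compares individual parts, and the convolution kernel sums the simple kernel over all pairs of parts. This is the standard convolution-kernel form from the natural language literature, shown here with a Gaussian base kernel as an illustrative assumption; the thesis's actual part decomposition and base kernel may differ.

```python
import numpy as np

def simple_kernel(u, v, gamma=1.0):
    """Base kernel comparing two single parts (e.g. the responses to
    two individual flow-rate change events)."""
    return np.exp(-gamma * np.sum((u - v) ** 2))

def convolution_kernel(parts_x, parts_y, gamma=1.0):
    """Convolution kernel: sum the simple kernel over all pairs of
    parts, so the superposition sits outside the kernelization."""
    return sum(simple_kernel(u, v, gamma)
               for u in parts_x for v in parts_y)
```

Because the summation is outside the base kernel, each prior rate change contributes its own kernel term, mirroring how pressure responses to individual rate changes superpose in the transient.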

This study also examined the performance of Method D under complicated data situations, including significant outliers and aberrant segments, incomplete production history, unknown initial pressure, different sampling frequencies, and different time spans of the data set. The results suggested that: 1) Method D tolerated a moderate level of outliers and aberrant segments without any preprocessing; 2) Method D could still reveal the reservoir/well model, with effective rate correction and/or optimization of the initial pressure value, when the production history was incomplete and/or the initial pressure was unknown; and 3) an appropriate sampling frequency and time span of the data set were required to ensure the sufficiency of the basis functions in the kernel Hilbert space.

To improve the performance of the convolution kernel method on large data sets, two block algorithms, namely Methods E and F, were also investigated. The two methods partitioned the original kernel matrix into a series of block matrices and used only some of the blocks to complete the training process. A series of synthetic and real cases illustrated their efficiency and accuracy, and the performance of Methods D, E, and F was compared.
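One simple instance of the block idea is to solve each diagonal block of the kernel system independently and concatenate the resulting dual weights, so that no full-size linear solve is ever needed. The sketch below shows that pattern only; it is an illustrative assumption about how blocks might be used, not a reconstruction of Methods E or F.

```python
import numpy as np

def block_diagonal_solve(K, p, n_blocks, lam=1e-3):
    """Partition the kernel matrix K into diagonal blocks, solve each
    regularized block system independently, and concatenate the dual
    weights. Cost drops from one O(n^3) solve to n_blocks smaller ones."""
    n = len(K)
    bounds = np.linspace(0, n, n_blocks + 1, dtype=int)
    alpha = np.zeros(n)
    for a, b in zip(bounds[:-1], bounds[1:]):
        Kb = K[a:b, a:b]  # one diagonal block of the kernel matrix
        alpha[a:b] = np.linalg.solve(Kb + lam * np.eye(b - a), p[a:b])
    return alpha
```

Schemes that also use selected off-diagonal blocks trade some of this speedup for accuracy, which is the kind of trade-off a comparison among Methods D, E, and F would quantify.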


Copyright 2013, Yang Liu: Please note that the reports and theses are copyright to their original authors. Authors have given written permission for their work to be made available here. Readers who download reports from this site should honor the copyright of the original authors and may not copy or distribute the work further without the permission of the author, Yang Liu.
