27 Sep

Rocket science vs. data entry: the neglected problems that matter in oil testing
This post is the first in our series on estimating human error in crude oil quality and quantity measurement. It is easy for an oil company to invest too little in reducing human error, not because it is unimportant, but because it is hard to measure. The examples in this series outline rough calculations that measurement supervisors can use to decide whether a specific source of human error is worth investigating further.
When it comes to measurement error, is your company investing in the improvements that matter most, or in the ones that are easiest to measure?
Testing errors are very costly to oil companies (How costly? Use this calculator to find out). Errors matter because the price paid for a shipment of oil is derived from the quantity and quality (e.g. density, sulphur content, etc.) measured by testing equipment and procedures.
Most companies know this and spend a lot of money trying to improve the accuracy of their testing. But how do you decide what to spend on? Estimating an expected return on investment (ROI) is straightforward if you know how much an improvement will increase accuracy: you maximize ROI by finding the smallest investments that yield the largest accuracy gains. In practice, however, companies tend to focus on the improvements that are easiest to measure, because those are what measurement and product-quality managers can most clearly justify to the executives approving budgets.
Paradoxically, this bias toward measurability can lead companies to over-invest in direct upgrades to testing equipment and under-invest in initiatives that reduce human error. For example, suppose I can buy a benchtop digital density meter with an instrument accuracy of 0.1 kg/m3 for around $20,000, but for an extra $15,000 I can get an accuracy of 0.01 kg/m3. Putting this improvement into the calculator, I find I could recoup the investment within a year at a facility processing ~6,000 bpd or more.
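To see roughly where a payback figure like that can come from, here is a minimal sketch of the arithmetic. The crude density (850 kg/m3), oil price ($80/bbl), and the assumption that a density error translates one-for-one into a proportional misstatement of shipment value are all illustrative assumptions, not numbers from the calculator itself:

```python
# Rough payback sketch for the density-meter upgrade above.
# ASSUMPTIONS (not from the post's calculator): crude density of
# 850 kg/m3, $80/bbl, and density error treated as a proportional
# misstatement of shipment value.

def annual_value_of_accuracy(delta_rho_kg_m3, rho_kg_m3=850.0,
                             price_per_bbl=80.0, throughput_bpd=6000.0):
    """Yearly value of removing delta_rho_kg_m3 of density error."""
    fractional_error = delta_rho_kg_m3 / rho_kg_m3
    annual_shipment_value = throughput_bpd * 365 * price_per_bbl
    return fractional_error * annual_shipment_value

# Upgrading from 0.1 to 0.01 kg/m3 removes 0.09 kg/m3 of instrument error.
benefit = annual_value_of_accuracy(0.1 - 0.01)
print(f"Annual value of upgrade: ~${benefit:,.0f}")
```

Under these assumptions the annual value lands in the same ballpark as the $15,000 upgrade cost, consistent with a roughly one-year payback at ~6,000 bpd.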
“Even if it is not possible to accurately measure human errors, it is worthwhile to make a rough estimate of their magnitude.”
However, this calculation assumes there is no human error and that the instruments actually deliver their rated accuracies. Even when human errors cannot be measured precisely, it is worthwhile to make a rough estimate of their magnitude, because if they are large they will dominate the effectiveness of any program designed to improve testing. For example, if sampling error pushes the total error to 1 kg/m3 (a very realistic number for many field operations, as we will discuss in a future post in this series), then the equipment upgrade yields essentially no benefit at all, and I would be far better off investing in reducing sampling error, even at up to 10x the expense!
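The "no benefit at all" claim follows from how independent error sources combine: by root-sum-square (quadrature), so the largest term dominates. A quick sketch, using the post's illustrative 1 kg/m3 sampling error:

```python
import math

# Independent error sources combine in quadrature (root-sum-square),
# so a large sampling error swamps any instrument-accuracy gain.
# The 1 kg/m3 sampling error is the illustrative figure from the text.

def total_error(instrument_err, sampling_err):
    """Combined standard error of two independent sources, in kg/m3."""
    return math.sqrt(instrument_err**2 + sampling_err**2)

before = total_error(0.1, 1.0)    # cheaper meter
after = total_error(0.01, 1.0)    # upgraded meter
print(f"before: {before:.4f} kg/m3, after: {after:.4f} kg/m3")
# The upgrade shrinks the total error by well under 1%; the money is
# better spent attacking the sampling error itself.
```

With a 1 kg/m3 sampling error, the $15,000 instrument upgrade moves the total error from about 1.005 to about 1.000 kg/m3, an improvement too small to ever pay back.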
The actual error arising from human processes will vary from company to company and from site to site. However, we find that first-order approximations like this often predict large operational errors, which can be much higher than instrument errors. This suggests that human error may be worth investigating further before inefficient capital investments are made to upgrade measurement accuracy. In this series, we will walk you through back-of-the-envelope methods that quality or measurement supervisors can use to estimate the likely magnitude of human errors in their operations. The next two posts will focus on how to estimate data-entry errors and sampling errors.
How do manual processes and operator error affect your company’s bottom line?