FAQs: <41> Balances and <1251> Weighing on an Analytical Balance

1. Is a repeatability test involving 10 measurements required as part of daily performance checks?

There is no requirement for performing a daily repeatability test. The frequency of all performance checks, including repeatability, should be determined by the laboratory based on a thorough risk analysis. The primary purpose of the repeatability test is to identify the random error of the balance and to calculate the balance’s minimum weight. It is important to note that random error significantly affects precision when weighing small samples. Since minimum weight and random error may change over time, periodic monitoring through repeatability performance checks is essential to ensure ongoing compliance with the requirements outlined in <41>.
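As an illustration, the minimum weight calculation that follows from the repeatability test can be sketched in Python. The readings and readability value below are hypothetical examples; the 2000 × s relationship and the 0.41 d floor follow the USP <41> formula:

```python
# Hedged sketch of the USP <41> repeatability assessment and minimum
# weight calculation; the readings below are hypothetical examples.
import statistics

# 10 replicate readings of one test weight, in grams (hypothetical)
readings = [20.00012, 20.00008, 20.00015, 20.00010, 20.00006,
            20.00011, 20.00009, 20.00014, 20.00007, 20.00013]

s = statistics.stdev(readings)   # sample standard deviation, g

# Per USP <41>, if s is smaller than 0.41*d (d = scale interval),
# it is replaced by 0.41*d.
d = 0.0001                       # readability of the balance, g
s = max(s, 0.41 * d)

# Repeatability is satisfactory if 2*s / (smallest net weight) <= 0.10 %,
# which is equivalent to: minimum weight = 2000 * s.
minimum_weight = 2000 * s

print(f"standard deviation: {s * 1000:.4f} mg")
print(f"minimum weight:     {minimum_weight:.4f} g")
```

In this example the measured standard deviation (about 0.03 mg) falls below 0.41 d, so the floor value determines the minimum weight.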

2. Regarding the test load selection, what is the difference between calibration and the accuracy test?

Calibration is an activity described by international standards, e.g., EURAMET cg-18 and ASTM E898, and serves to maintain the metrological traceability of a balance. Calibration includes the calculation and assessment of measurement uncertainty. Balance calibration is a comprehensive procedure comprising a repeatability test, an eccentricity test, and an error of indication test (a combined linearity and sensitivity test; the largest error of indication test point is used to assess sensitivity), ultimately leading to the determination of the balance measurement uncertainty.

Typically, the first point is at zero load (0 g, no load applied), followed by at least four additional test points that cover the measurement range of the balance. For example, for an analytical balance with a measurement range of 220 g, test points of 0 g, 50 g, 100 g, 150 g, and 220 g align with the calibration guidelines. Applying a zero-load calibration point ensures all subsequent measurements on the balance are bracketed by this initial point (0 g) and the subsequent test points, thus covering the entire operational range.

The accuracy test is different from calibration, as measurement uncertainty is not calculated. The accuracy test is a performance check and focuses on the assessment of the systematic error of the balance. While the systematic error of modern balances typically increases across the measurement range, it is usually minimal and hidden within the repeatability of the balance at the lower end of the measurement range. Therefore, a meaningful accuracy test is carried out using a test load between 5% and 100% of the balance's capacity.

3. How should the formula Result = | I - m | / m be interpreted in accuracy test, and how is the right choice of the test weight confirmed?

The accuracy test is carried out using one test load and is done independently from the repeatability test. Typically, an accuracy test involves a test load between 5% and 100% of the balance's maximum capacity, whereas the repeatability test utilizes a test load of a few percent of the capacity. When selecting the mass (m) for the accuracy test, it is generally acceptable to use the nominal value, provided the maximum permissible error (MPE) of the mass is smaller than 1/3 of the acceptance criterion of the accuracy test when using that mass. 

For example, for a balance with a capacity of 220 g, a test load of 200 g is applied for assessing the sensitivity. If the indication of the balance is 200.00034 g, the absolute sensitivity offset at 200 g is 0.00034 g, and thus the relative sensitivity offset is 0.00017% (0.00034 g ÷ 200 g). This conforms with the limit value of 0.05% of 200 g, which is 0.1 g. If a mass of accuracy class OIML F2 is used, the MPE of a 200 g weight is 3.0 mg. Its relative maximum permissible error is therefore 0.0015% (3.0 mg ÷ 200 g), which is smaller than 1/3 of the acceptance criterion of the accuracy test when using that mass, i.e., 33.3 mg (1/3 × 0.05% × 200 g). Thus, it is appropriate to use the nominal mass value (200 g) directly for the accuracy test. The use of weights of a higher accuracy class, e.g., OIML E2 or its ASTM equivalent (ASTM 1 or better), is unnecessary in this example.
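The worked example above can be checked with a short Python sketch; all numbers mirror the example (220 g balance, 200 g OIML F2 weight with an MPE of 3.0 mg), and the variable names are illustrative:

```python
# Sketch of the Q3 worked example; numbers are taken from the text above.
nominal = 200.0          # g, nominal test weight value (m)
indication = 200.00034   # g, balance reading (I)
mpe = 0.0030             # g, MPE of an OIML class F2 200 g weight

# Relative systematic error: |I - m| / m
relative_error = abs(indication - nominal) / nominal

# Sensitivity acceptance criterion: 0.05 % of the test load
criterion = 0.0005 * nominal                   # 0.1 g

# The nominal mass value may be used if the weight's MPE is smaller
# than 1/3 of the acceptance criterion.
assert mpe < criterion / 3                     # 3.0 mg < 33.3 mg
assert abs(indication - nominal) <= criterion  # 0.34 mg <= 100 mg

print(f"relative sensitivity offset: {relative_error * 100:.5f} %")
```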

4. Why is it suggested to use a test load up to 5% of the balance’s capacity in a repeatability test?

The standard deviation remains nearly constant across the lower measurement range of the balance. Thus, utilizing a test load at approximately 5% of the balance’s maximum capacity ensures testing the balance in the lower measurement range, eliminating the need to use a test load as small as the material weighed on the balance during normal usage. Above approximately 5% of the capacity, the standard deviation may no longer be nearly constant, so higher test loads are not recommended for the repeatability test. In addition, using a test load up to 5% of the balance’s capacity (rather than a very small load) mitigates handling errors that could arise when using very small test loads.

5. What tests comprise calibration of the balance?

Balance calibration is a comprehensive procedure comprising a repeatability test, an eccentricity test, and an error of indication test (a combined linearity and sensitivity test; the largest error of indication test point is used to assess sensitivity). Quite frequently, the term “linearity” is taken to be synonymous with “error of indication”, and the error of indication at the largest test point is taken to be synonymous with “sensitivity”. The two most widely accepted standard balance calibration procedures, EURAMET cg-18 and ASTM E898, describe consistent calibration methodologies and essentially identical processes to estimate measurement uncertainty.

6. What does "periodically carried out in between calibrations" mean for sensitivity and repeatability tests?

The frequency of the performance checks shall be defined by the user using a risk-based approach. <41> states that sensitivity is the most important contributor to accuracy, and it shall be periodically assessed along with repeatability. Therefore, the mandatory requirement set forth by <41> is to periodically assess both sensitivity and repeatability. Additional performance tests beyond these are optional and may be conducted at the user's discretion, as outlined in <1251>, Table 1. However, during calibration, all parameters influencing repeatability and accuracy (such as sensitivity, linearity, and eccentricity) should be thoroughly assessed. Note that during the error of indication test of calibration, both linearity and sensitivity are assessed (the highest error of indication test point is used to assess sensitivity).

7. If we do a daily check for sensitivity, is that sufficient? Or does repeatability need to be performed on a daily basis as well (at the same frequency as the sensitivity)?

The frequency of the performance checks shall be determined by the user using a risk-based approach or applicable regulatory requirements. These performance checks are to be carried out between scheduled calibrations. Neither <41> nor <1251> requires daily sensitivity or repeatability tests. However, specific regulatory requirements may prescribe the frequencies of certain performance checks, and such regulatory requirements would supersede frequencies based solely on risk assessment. Note that the performance checks of the balance are part of the performance qualification as described in the “Performance Qualification” section of <1251>.

8. Can I increase the weight (e.g., by adding weighing paper) to meet the minimum weight requirement?

The minimum weight requirement does not include the weight of the tare vessel, regardless of the tare vessel material. The quantity of test material weighed on the balance is determined as the difference between two readings: one taken before adding the material into the tare vessel, and the other taken after. This resulting difference shall meet the minimum weight requirement.
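A minimal sketch of this weighing-by-difference check; the readings, the example minimum weight value, and the function name are all hypothetical:

```python
# Weighing by difference: only the net difference between the two
# readings counts toward the minimum weight requirement, not the tare.

def net_weight(reading_before: float, reading_after: float) -> float:
    """Net sample weight = difference of the two balance readings (g)."""
    return reading_after - reading_before

minimum_weight = 0.082   # g, from the balance's repeatability test (example)

# Tare vessel plus weighing paper reads 5.20000 g; after adding the
# sample, the balance reads 5.35000 g (hypothetical values).
net = net_weight(5.20000, 5.35000)

# Adding weighing paper increases the tare, not the net difference,
# so it cannot be used to satisfy the minimum weight requirement.
assert net >= minimum_weight
print(f"net weight: {net:.5f} g")
```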

9. Do precision balances fall under the scope of USP <41>, and if not, what shall the acceptance criteria be?

The scope of <41> is defined by the USP General Notices, specifically “8.20. About” and “6.50.20. Solutions”. Solutions for quantitative measures shall be prepared using accurately weighed or accurately measured analytes. If the procedure specifies "accurately weighed," compliance with the requirements outlined in USP <41> is mandatory. Thus, applicability of <41> is determined by the weighing application itself, rather than by the type (e.g., analytical or precision) of balance. For applications outside the scope of <41>, balance’s repeatability and accuracy criteria should align with the specific requirements of the intended use, allowing users to establish suitable acceptance criteria based on their particular application.

10. Could you please clarify the definition of the term "test weight value", as referenced in the Accuracy section of <41>?

A test weight is a physical artifact (material measure) that has a defined mass value. This test weight value represents the mass assigned to the object based on a calibration performed by a calibration laboratory. For example, a “20 g test weight” is an artifact with an associated mass value of 20 g. In the context of <41>, the term "test weight value" refers to this assigned mass, which is used during balance accuracy checks.

11. The acceptance criterion for accuracy is 0.10%; however, when executing a sensitivity test, the acceptance criterion is 0.05%. Why is the acceptance criterion for the sensitivity test different from the accuracy acceptance criterion?

Sensitivity most significantly influences accuracy, but linearity and eccentricity can also contribute to systematic errors. Therefore, when assessing an individual balance property (such as sensitivity, linearity, or eccentricity) that influences accuracy, 0.05% is taken as the limit value to ensure that the combination of all systematic errors still meets the 0.10% requirement. <1251> provides more details on this mathematical approach in its section “Performance Qualification”. As sensitivity influences accuracy most, it shall be included in routine performance checks, while other parameters that influence accuracy should be assessed when the balance is calibrated.

12. How should I select the test points for a linearity test? Shall they cover the entire balance range?

When assessing the linearity of a balance, usually four test points are selected across the balance’s entire measurement range. For example, on a balance with a maximum capacity of 220 g, commonly selected test points are 50 g, 100 g, 150 g, and 220 g. As linearity testing specifically assesses the accuracy of the balance over its operational range, test points below 5% of the capacity are not permitted. Covering the entire measurement range ensures the balance’s usage is not unnecessarily limited to only the specific measured points.

13. How large should the smallest net weight be in relation to the minimum weight to continuously confirm compliance with <41>?

The “minimum weight” is a calculated value and describes the performance of the balance at the specific time when the repeatability test is carried out. The “smallest net weight” is a user-defined threshold which is usually constant over time and should be larger than the calculated “minimum weight”, to ensure compliance with the repeatability requirements established in <41>. <1251> describes factors that influence repeatability and emphasizes the importance of maintaining the smallest net weight above the calculated minimum weight.

To safeguard adherence to repeatability requirements, a safety factor, defined as the ratio between “smallest net weight” and “minimum weight”, can be applied. This safety factor accounts for a potential variability in the balance’s performance and ensures continuous compliance, even when the “minimum weight” may vary due to balance’s variability and subsequent difference in repeatability measurements at different times. 

For stable laboratory conditions with trained operators, a safety factor of 2 is typically sufficient to mitigate the influence of repeatability factors while the balance is in use. For automated weighing procedures (e.g., gravimetric dosing), a smaller safety factor, e.g., 1.5, may be appropriate. The safety factor can be monitored over the balance’s life cycle to identify performance changes that could critically affect the reliability of the user-defined “smallest net weight”.
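The safety-factor relationship can be sketched as follows; the minimum weight, factor value, and function name are illustrative:

```python
# Safety factor = smallest net weight / minimum weight. Given a chosen
# factor, the required smallest net weight follows directly.

def required_smallest_net_weight(minimum_weight: float,
                                 safety_factor: float) -> float:
    """Smallest net weight that keeps the chosen margin over the
    balance's calculated minimum weight (all values in grams)."""
    return safety_factor * minimum_weight

minimum_weight = 0.082   # g, from the latest repeatability test (example)
safety_factor = 2.0      # typical for stable, manual weighing conditions

smallest_net_weight = required_smallest_net_weight(minimum_weight,
                                                   safety_factor)

# The user-defined smallest net weight remains valid as long as it
# stays above the current calculated minimum weight.
assert smallest_net_weight >= minimum_weight
print(f"smallest net weight: {smallest_net_weight:.3f} g")
```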

14. How do I calculate the desired smallest net weight?

The “smallest net weight” is not a calculated value; rather, it is defined based on applicable user requirements, e.g., requirements detailed in a monograph or a specific weighing procedure. The “minimum weight” of the balance is a calculated value based on the repeatability test, and it must be smaller than the “smallest net weight”. A balance’s repeatability can vary over time. As a result, the calculated “minimum weight” can also vary. By setting the user-defined “smallest net weight” value larger than the “minimum weight”, the user reduces the risk that variability in the balance’s repeatability or “minimum weight” will negatively impact the defined weighing requirements.

15. Can I move the balance within the laboratory, and does it require requalification?

As per <1251>, in the case of modifications on the balance, appropriate performance qualifications are carried out to assess the performance of the balance after the modification. Depending on how significant these modifications are with regard to potential performance changes in the balance, the following hierarchy of activities can be considered:

  1. Control of the leveling

  2. Adjustment by means of the built-in weights

  3. Execution of the performance check activities

  4. Execution of calibration

For example, when a balance is relocated within the laboratory, it is recommended to assess whether the environmental conditions at the new location differ from those at the original location. Factors to consider include proximity to ventilation systems, vibration or heat sources, and distance from doors or high-traffic areas (e.g., hallways). If any changes in environmental conditions are identified or suspected, the performance checks (such as accuracy and repeatability) and/or calibration should be carried out, to ensure the balance continues to operate within its qualified parameters.

16. What does "partially replaced" mean when using built-in weights for balance checks?

When built-in weights are periodically used for either testing or adjusting the balance’s sensitivity, the frequency of the sensitivity test with an external weight can be reduced. For example, when the built-in weights are used daily, the frequency of the external sensitivity test could be set to weekly or monthly, depending on the risk assessment. However, it is important to note that built-in weights are not metrologically traceable.