Appendix
Appendix I
How good is a measurement?
In science, when we measure something, we report our findings so that other scientists can confirm what we’ve measured. There are three factors that go into reporting a measured value:
• We must report how sensitive our equipment is.
• We must report how confident we are that the measurement was performed correctly. This is a measure of precision.
• We must report how accurate the measurement is by comparing it to a known value.
We use significant figures to report our sensitivity; we use a statistical measure – called the 95% confidence limit – to report how precise, or confident, we are in our measurement; we use percent error to report on how accurate our measurement is.
These three factors are summarized nicely when we report our measurement using the following format:
5.2 (±0.6) × 10⁻⁵ s⁻¹ with a percent error of 1.9% as compared to the literature value of 5.3 × 10⁻⁵ s⁻¹.
This gives us our value (5.2 × 10⁻⁵ s⁻¹), tells us that we are 95% confident the true value lies within (±0.6) × 10⁻⁵ s⁻¹ of the measured value, and tells us that the measured value differs by 1.9% from independent measurements of this value.
It is important to include both the 95% confidence limit and the percent error when reporting your value. The 95% confidence limit tells you how confident you are that you’ve done a good job in making the measurement. The smaller the confidence interval the more precise your measurement is. For example, a 95% confidence limit of ±0.1 isn’t as precise as a 95% confidence limit of ±0.001. Although in both cases we are 95% confident the measured value will lie within the limits, since the second limit is 100 times smaller than the first we can be confident that the second measurement was more precise.
However, what we don’t know from this measurement is whether there was a systematic error made when collecting the data. A systematic error is an error that shifts our value by a set amount. For example, if you were to weigh yourself every day, but someone left a five-pound weight on the scale that you didn’t notice, then you would have a precise measurement of your weight, but it would always be five pounds heavier than your true weight. This is a systematic error. To account for this, we compare the value we measure with a literature, or known, value using percent error. In this example we would measure our weight using several different scales and then compare the results using percent error. We would quickly be able to determine which scale was incorrect by looking at the percent errors between the scales.
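The scale example can be sketched numerically. The readings below are hypothetical, chosen only to illustrate the point: a constant offset leaves the spread (precision) untouched while shifting every value (accuracy):

```python
import math
import statistics

# Hypothetical daily weight readings in pounds (illustration only)
true_readings = [150.2, 150.4, 150.1, 150.3]

# The same scale with a 5 lb weight accidentally left on it:
biased_readings = [w + 5 for w in true_readings]

spread_true = statistics.stdev(true_readings)
spread_biased = statistics.stdev(biased_readings)
shift = statistics.mean(biased_readings) - statistics.mean(true_readings)

# The spread (precision) is unchanged, but every reading carries the
# same 5 lb systematic error, which only a comparison against an
# independent scale (percent error) would reveal.
print(math.isclose(spread_true, spread_biased))  # True
print(round(shift, 6))                           # 5.0
```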
The following pages provide instructions on how to properly use significant figures (Appendix II), how to calculate percent error (Appendix III), and how to calculate the 95% confidence interval (Appendix IV).
Appendix II
Note on significant figures
Significant figures are critical when reporting scientific data because they give the reader an idea of how sensitive your measurement was. Before looking at a few examples, let’s summarize the rules for significant figures.
1. ALL non-zero numbers (1, 2, 3, 4, 5, 6, 7, 8, 9) are ALWAYS significant.
2. ALL zeroes between non-zero numbers are ALWAYS significant.
3. ALL zeroes that are SIMULTANEOUSLY to the right of the decimal point AND at the end of the number are ALWAYS significant.
4. ALL zeroes to the left of a written decimal point in a number greater than or equal to 10 are ALWAYS significant.
A helpful way to check rules 3 and 4 is to write the number in scientific notation. If you can/must get rid of the zeroes, then they are NOT significant.
Examples: How many significant figures are present in the following numbers?
Number                   # of Significant Figures   Rule(s)
48,923                   5                          1
3.967                    4                          1
900.06                   5                          1, 2, 4
0.0004 (= 4 × 10⁻⁴)      1                          1
8.1000                   5                          1, 3
501.040                  6                          1, 2, 3, 4
3,000,000 (= 3 × 10⁶)    1                          1
10.0 (= 1.00 × 10¹)      3                          1, 3, 4
5100.                    4                          1, 4
5100                     2                          1
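The four rules can also be captured in a short sketch (the function below is illustrative, not part of the lab software). Because trailing zeroes mean different things in 5100 and 5100., it must work on the number as written, i.e. as a string:

```python
def count_sig_figs(number: str) -> int:
    """Count significant figures in a number written as a string.

    A sketch of the four rules above; handles positive decimal
    notation only (no scientific notation or negative signs).
    """
    s = number.replace(",", "")
    if "." in s:
        int_part, frac_part = s.split(".")
        # Rules 2-4: once a decimal point is written, every digit
        # after the leading zeroes is significant.
        return len((int_part + frac_part).lstrip("0"))
    # No decimal point: trailing zeroes are NOT significant.
    return len(s.lstrip("0").rstrip("0"))

for n in ["48,923", "3.967", "900.06", "0.0004", "8.1000",
          "501.040", "3,000,000", "10.0", "5100.", "5100"]:
    print(n, count_sig_figs(n))
```

Running this reproduces the counts in the table above (5, 4, 5, 1, 5, 6, 1, 3, 4, 2).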
***Note***
Conversion factors, such as 1 in = 2.54 cm, are exact. Since conversions are exact they have an infinite number of significant figures. Your significant figures will always be limited by your measurement, never by any conversion you apply.
Appendix III
Accuracy – Percent Difference and Percent Error
Scientists want to compare their results with those of others, or with a theoretically derived prediction or literature value. The closer independently measured values are to each other, the more confident we can be that the measured values are correct.
Percent Difference: Applied when comparing two experimental results, E1 and E2, neither of which is an accepted literature value. The percent difference is the absolute value of the difference between E1 and E2, divided by the average of the two values, times 100. This comparison is used when no literature value is available and you want to compare a result measured in two different ways.
% Difference = |E1 − E2| / [(E1 + E2)/2] × 100
Percent Error: Applied when comparing an experimental quantity, E, with a theoretical or true value, T, which is considered the “correct” value, for example, an accepted literature value. The percent error is the absolute value of the difference between T and E, divided by the “correct” value times 100.
% Error = |T − E| / T × 100
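Both formulas translate directly into code. As a sketch, the functions below reproduce the 1.9% percent error quoted for the rate constant in Appendix I:

```python
def percent_difference(e1: float, e2: float) -> float:
    """Compare two experimental values, neither of which is a literature value."""
    return abs(e1 - e2) / ((e1 + e2) / 2) * 100

def percent_error(e: float, t: float) -> float:
    """Compare an experimental value e against a true/literature value t."""
    return abs(t - e) / t * 100

# The rate-constant example from Appendix I: 5.2e-5 vs. literature 5.3e-5
print(round(percent_error(5.2e-5, 5.3e-5), 1))  # 1.9
```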
Appendix IV
Precision – Calculating 95% confidence interval error
Note: If you use a Mac, you can download StatPlus for free to perform the data analysis that Excel does.
A reported data value by itself is meaningless unless it is accompanied by a confidence interval, or the precision to which you are confident you know the value. For example, a grain of sand may be reported to have a diameter of 52 microns, but how sure are you of this value? If you used a ruler with 1 mm gradations (1 mm = 1000 microns), then anything less than 1 mm is a guess. If this value were reported with error, it would be 52 (±1000) μm. In this case the 52-micron measurement is garbage because it is smaller than our ability to measure it precisely. However, if the measurement were made with an instrument sensitive to 0.01 microns (10 nanometers), the value reported with error would be 52.48 (±0.01) μm; a very precise measure of the diameter of the sand grain!
In science, when an experimental value is reported it is customary to report how confident in the value the experimenter is. The level of confidence that is generally accepted is 95%; therefore, error is reported to a 95% confidence level. This means that the experimenter is 95% confident that the true value falls within the error range determined experimentally. For example, if you determined experimentally that the molar mass of sodium chloride at the 95% confidence level was 59.2 (±0.8) g/mol, you would be 95% confident that the true molar mass of sodium chloride was between 58.4 and 60.0 g/mol, assuming that there was only random error in your measurements. When compared to the true molar mass (58.44 g/mol), you would note that it falls within your error range, and you could be 95% confident that your experimental method is valid.
To calculate error at the 95% confidence level the following formula is used:

95% confidence interval = 1.96 × σ / √(N − 1)
In this formula, σ is the standard deviation and N is the number of measurements. The standard deviation can quickly be calculated using spreadsheet software such as Excel. For example, if you had the following data:

        A
1    Mass, g
2    45.11
3    43.97
4    44.46
5    45.45
6    0.66133   (Stdev)
In Excel the standard deviation, σ, would be calculated by typing the following formula into cell A6: =STDEV(A2:A5). After hitting Enter, the result 0.66133 is displayed in cell A6. The 95% confidence interval would then be 1.96*(0.66133)/sqrt(3) = 0.74837, where N = 4 in this example.
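The same calculation can be reproduced outside Excel. This sketch uses Python's statistics module and the sqrt(N − 1) form from the worked example above:

```python
import math
import statistics

masses = [45.11, 43.97, 44.46, 45.45]   # the four measurements above, in g

sigma = statistics.stdev(masses)        # sample standard deviation, as =STDEV() gives
n = len(masses)
ci95 = 1.96 * sigma / math.sqrt(n - 1)  # 95% confidence interval

print(round(sigma, 5))  # 0.66133
print(round(ci95, 5))   # 0.74837
```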
Appendix V
When to use linear regression versus standard deviation
During the Ideal Gas Law experiment, you will need to perform a linear regression on the data to determine the error in your experimental gas constant. A linear regression is required because the experimental conditions change from run to run. Fortunately, when graphed the data lie along a line. In this case, a linear regression analysis is used to determine the error in the slope of the line that has been fit to the data.
In the case where the experimental conditions do NOT change from each run, like the freezing point depression experiment where you repeat the same measurement several times, the standard calculation for 95% confidence interval described in Appendix IV is used.
To perform a linear regression analysis in Excel, use the Data Analysis command within the Analysis group on the Data tab. You may have to load the Data Analysis ToolPak if you do not see the Analysis group on the Data tab. See your software's help for information on how to load the Data Analysis ToolPak.
Once you have access to the ToolPak, select Linear Regression from the list and then choose the appropriate dataset for the dependent (y) and independent (x) variables. Be sure that the confidence interval is set to 95%. Once you click “OK”, Excel should output the calculated results to a new workbook. You should see something similar to the following:

            Coefficients   Standard Error   LCL        UCL       t Stat     p-level   H0 (5%) rejected?
Intercept   -0.08292       0.11036          -0.45429   0.28845   -0.75133   0.48629   No
Slope        0.08268       0.00173           0.07685   0.08851   47.74243   0.00000   Yes
T (5%)       3.36493

LCL – Lower value of a reliable interval
UCL – Upper value of a reliable interval
The slope of the line, found in the Coefficients column, is 0.08268. The LCL and UCL are the lower and upper bounds of your error range, respectively.
To report your error at the 95% confidence level, subtract the LCL from the UCL and divide by 2. For this example, (0.08851 − 0.07685)/2 = 0.00583. When reported to the correct number of significant figures, this gives 0.083 (±0.006) L·atm·mol⁻¹·K⁻¹ (see Appendix VI).
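As a quick check of that arithmetic, a sketch using the slope values from the output above:

```python
lcl, ucl = 0.07685, 0.08851   # slope confidence bounds from the output table
slope = 0.08268

half_width = (ucl - lcl) / 2
print(round(half_width, 5))   # 0.00583

# Rounded for reporting: error to one significant figure,
# slope to the same decimal place (see Appendix VI)
print(f"{slope:.3f} (±{round(half_width, 3)})")  # 0.083 (±0.006)
```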
Appendix VI
How to report experimental values with 95% confidence level error
When you report your data in a lab writeup, you should report experimentally determined values using the following format:
[experimental value] [(± 95% confidence level)] × [magnitude] [units] @ [temperature]
The experimental value is the value you measure in the experiment. This could be the slope of a linear regression fit or the averaged value of a repeated set of measurements.
For example, if the averaged rate constant, k, was determined to be 5.2338 × 10⁻⁵ s⁻¹ with a 95% confidence interval of 6.3085 × 10⁻⁶ s⁻¹ at 23.4 (±0.2) ºC, you should report the result as,
5.2 (±0.6) × 10⁻⁵ s⁻¹ at 23.4 (±0.2) ºC
Here, [experimental value] = 5.2
[(± 95% confidence level)] = (±0.6)
× [magnitude] = × 10⁻⁵
[units] = s⁻¹
@ [temperature] = 23.4 (±0.2) ºC
There are several key things to note about this format.
• The error is only reported at the 95% confidence level and only to one significant figure. The error can be reported to two significant figures ONLY if the second figure is a 5, for example 0.15.
• The experimental value is reported to the same number of decimal places as the error. For example, in 5.2 (±0.6), both have one decimal place.
• The error value is put in parentheses directly after the experimental value, and both the experimental value and the error are reported to the same magnitude (i.e. the same power of 10).
• If the experimental value has units associated with it, they are added after the magnitude multiplier.
Examples:
Experimental Value   95% Confidence Level Error            Correct Reporting Notation
4.2246 × 10⁻⁵        3.9001 × 10⁻⁷ (= 0.039001 × 10⁻⁵)     4.22 (±0.04) × 10⁻⁵
5206                 139                                   5200 (±100) or 5.2 (±0.1) × 10³
45.228               15.26 × 10⁻² (= 0.1526)               45.2 (±0.2) or 45.23 (±0.15)
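These rounding rules can be sketched in code (the function name is my own; the special case where the error's second figure is a 5, such as ±0.15, is deliberately not handled):

```python
import math

def report(value: float, ci: float) -> str:
    """Round the error to one significant figure and the value to the
    same decimal place, per the rules above. Factor out the magnitude
    (power of 10) yourself before calling."""
    place = math.floor(math.log10(abs(ci)))  # decimal place of the error's leading digit
    return f"{round(value, -place)} (±{round(ci, -place)})"

print(report(5.2338, 0.63085))  # 5.2 (±0.6)
print(report(5206, 139))        # 5200 (±100)
print(report(45.228, 0.1526))   # 45.2 (±0.2)
```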
Appendix VII
Note on how to make a graph using Excel
Most of the data that you will plot using Excel is what’s called pairwise data. This is data that is collected in pairs, i.e. x-y data (like time and temperature). To plot this type of data, arrange all the x data in a column and the corresponding y data in the column just to the right of the x data, making sure to keep the ordering of the data pairs correct. Then from the main menu, select “Insert” followed by “Chart…” from the drop-down list. From the options listed, select “Scatter” and from the subset of charts select “Marked Scatter”. A blank graph will appear. Right click inside the graph and choose “Select data…” from the list. Click on the “Add” button under Series and then select the x values and y values that you wish to graph. Click “OK”.
In general, we will always select a scatter plot because our x data will not be in uniform steps of 1, 2, 3, etc. A scatter plot allows the x data to have non-uniform steps. Also, in general, we don’t want Excel to draw a line through our data; we will do that using a trendline. This allows us to better judge how well our data fit a specific model or theory. It is also good form to use hollow markers to represent your data so that you can see whether the trendline goes through the middle of your data points.
To add a trendline, right click on a data point in your graph and choose “Add trendline…” from the list. Choose the type of fit you wish to use (usually linear), and then select “Options” from the left-hand pane. Be sure to select the checkbox next to “Display equation on chart”. In addition, you can extend the line forwards or backwards by using the Forecast feature.
Note that a trendline and a linear regression fit are different. A trendline is a visual method that draws a “best fit line” on top of the data. A trendline does not provide the error in the fit. To calculate and report the 95% confidence error, you must perform a linear regression fit (Appendix V).
Appendix VIII
Note on how to fit data and subsets of data using Excel
If you only wish to fit a portion of your data, rather than the entire data set, you need to create a series that contains the subset of data that you wish to fit. For example, if you graphed the following data and fit a line through all the points, your fit wouldn’t be very good.
x     y
10    100
15    90
20    80
25    70
30    65
35    62
40    60
45    58
50    55
However, we can create a series for each subset of the data we wish to fit. In this example we will create two series: one will contain the first four paired data points, and the second will contain the last five paired data points.
To create a series, right click on a data point in the graph and select “Select data…” from the pop-up menu. Under Series in the dialog box that appears, click “Add”. Now select the x and y values from the data that you wish this series to contain. Repeat this process for additional series if needed. Each series can now have a trendline fitted to it. The graph now looks like this.
Note that the forecast feature was used to extend both trendlines so that they overlap.
If you choose, you can set the markers to be the same so that you can’t tell there are multiple series; however, this is purely aesthetic.
Appendix X
Basic Vernier Probe setup using LoggerPro with troubleshooting
Open LoggerPro on the computer and connect the temperature probe to the Vernier USB interface. The software should automatically detect the temperature probe, and a temperature reading should show up in the lower left-hand corner of LoggerPro. Go into Settings and set the collection parameters to 1 point per second for 900 seconds.
If the probe is not detected, unplug both the temperature probe and the Vernier USB hub. Reconnect the Vernier hub first; it should play some short audible beeps. Then plug in the temperature probe. If the probe is still not detected, alert your instructor.
Calibration of pH meter
In order for the pH electrode to correctly display pH it must be calibrated using a set of solutions with known pH. To calibrate the electrode, perform the following steps.
1. Open LoggerPro on the computer.
2. Once the program is open, plug the pH electrode into the LabPro controller unit. Be sure that you plug into the controller unit that is connected to the computer you are using.
3. LoggerPro should auto-detect the electrode and pH should be displayed in the lower left-hand corner. If the electrode is not auto-detected, perform the following steps; otherwise, skip to step 4:
   • Under the Experiment menu, select Set Up Sensors and then select Show All Interfaces.
   • A diagram of the LabPro controller unit will appear. Click on the drop-down box corresponding to the channel that the sensor is plugged into and select Add Sensor.
   • Choose the pH sensor.
4. To start calibration, under the Experiment menu select Calibrate and then select LabPro: pH from the flip-out window. A menu will appear.
5. To prepare the electrode for calibration, perform the following washing procedure EACH TIME THE ELECTRODE IS PLACED INTO A NEW SOLUTION:
   • Remove the electrode from the solution it is currently in and rinse the glass bulb at the tip using deionized water from a rinse bottle.
   • Gently blot the tip of the electrode with a Kimwipe to remove most of the water. You do not need to remove all the water.
   • Place the electrode into the new solution. Be sure that the glass bulb at the tip of the electrode is completely submerged in the solution.
6. You will be using pH 4.0 and 7.0 standard calibration solutions for this experiment. Place the cleaned electrode into the calibration buffer you have chosen.
7. Click Calibrate Now from the displayed menu.
8. Gently stir the electrode in the first standard solution until the voltage displayed in the calibration window stabilizes. This should take about 30 seconds. Be sure not to warm the solution with your hand, as this will throw off the calibration.
9. Once the voltage has stabilized, enter the pH value for the standard solution you are using into the box labeled Reading 1 and press Keep.
10. Remove the electrode from the standard solution, rinse and dry the tip of the electrode, and repeat steps 6 – 9 for the second standard solution.
After you have completed the calibration procedure you must check the accuracy of the calibration you just performed. Choose any of the standard solutions and insert the electrode into this solution. The reading on LoggerPro should match the labeled value of the buffer within 0.1 pH units. If the readings do not match you must repeat the calibration procedure. You can use these calibration buffers to check your electrode calibration throughout your experiment to ensure accurate pH measurements. The pH electrode must be stored upright in its storage solution. Place the electrode and storage container in a beaker to keep it from falling over when not in use.
Appendix XI
How to condition a volumetric pipette
In order to obtain the most precise results for an experiment that uses volumetric glassware, such as a volumetric burette or a volumetric pipette, the glassware must be conditioned before use. Conditioning removes any contaminants from the glassware, removes any residual water that might dilute your stock solution and change its known concentration, and ensures accurate delivery of the correct volume.
To condition a volumetric pipette, perform the following steps:
1. Fill an appropriately sized beaker with just enough stock solution to cover the bottom of the beaker to a height of 1 mm.
2. Swirl the stock solution around in the beaker and then dispose of this solution into the appropriate waste container.
3. Fill the beaker with about 1.5 – 2 times the volume of the volumetric pipette you will be using. Note: Do not use too large a beaker or you will aspirate air into the pipette. For example, a 25.00 mL pipette should use a 100 mL beaker.
4. Fill the pipette about half full with the stock solution you will be dispensing.
5. Roll the pipette around in your hands in order to coat the inside of the glassware.
6. Drain the pipette completely, placing the solution into an appropriate waste container, and use a pipette bulb to blow out the rinse solution from the tip of the pipette.
The pipette is now fully conditioned.
Use the following steps to fill and accurately deliver an amount of solution from the pipette.
1. Refill the dispensing beaker with 1.5 – 2 times the volume of the volumetric pipette you are using.
2. Squeeze the pipette bulb to void it of air and then place it onto the top of the empty pipette. DO NOT jam the bulb onto the top of the pipette. You will need to remove the bulb later, and if jammed on it could be difficult to remove.
3. Slowly draw up your stock solution into the pipette using the pipette bulb. You must KEEP THE TIP OF THE PIPETTE SUBMERGED in the stock solution. DO NOT allow the tip of the pipette to pull in air, or solution will aspirate into the pipette bulb and contaminate both the bulb and your stock solution.
   Note: It is helpful to keep the tip of the pipette touching the bottom of the beaker and to slightly tip the pipette to allow liquid to flow in. This helps to reduce the chance of aspirating air into the pipette.
4. Fill the pipette with solution until the solution is above the etched line in the neck of the pipette. GO SLOW when the solution starts to enter the neck of the pipette, as it will rapidly ascend the narrow tubing. DO NOT allow any solution into the pipette bulb.
5. Remove the thumb of your dominant hand from your glove. For example, if you are right-handed, remove your right thumb only from your glove. Use your thumb to slide the pipette bulb off the pipette and then to quickly cover the top of the pipette so no solution leaks out.
6. With the tip of the pipette pressed down against the bottom of the beaker, SLOWLY twist your thumb to allow the solution to drain down until the bottom of the meniscus is just touching the etched line in the neck of the pipette. Remember to keep your eye at the level of the etched line to remove parallax and increase your precision.
7. Transfer your pipette to where you wish to dispense the solution and allow it to drain. DO NOT blow out the last drop of solution from the pipette. The pipette is designed to accurately deliver its specified volume with solution remaining in the tip of the pipette.
Appendix XII
How to condition a volumetric burette
To condition and prepare a volumetric burette for use, perform the following steps:
1. Fill a 250 mL beaker with just enough stock solution to cover the bottom of the beaker to a height of 1 mm.
2. Swirl the stock solution around in the beaker and then dispose of this solution into the appropriate waste container.
3. Fill the beaker with about 150 mL of your stock solution. Close the stopcock on the burette and use a funnel to fill the burette about one third full.
4. Slowly tip the burette on its side and drain the solution out of the top end of the burette into an appropriate waste container. DO NOT drain the solution through the tip of the burette.
5. Place the burette into a burette stand and clamp. Using a funnel, fill the burette to near the top. Check for leaks around the tip of the burette. If the burette is leaking, report it to your instructor, get a new burette, and go back to step 3.
6. Open the stopcock and allow some solution to drain into an appropriate waste container. WHILE THE SOLUTION IS DRAINING, tap the stopcock of the burette firmly with your finger to dislodge any bubbles from the tip of the burette. You should see bubbles flow down through the burette tip. Failure to remove bubbles from the tip will cause error in your measurement, as the burette will not dispense the correct volume of solution.
7. Once all bubbles have been removed from the tip of the burette, close the stopcock. Refill the burette until your solution is ABOVE THE ZERO LINE.
8. Remove the funnel from the top of the burette.
9. Open the stopcock and drain the burette until the solution is below the zero line. BE SURE TO LOOK AWAY FROM THE BURETTE when you close the stopcock so that you do not set your burette to any specific volume. Setting your burette to a specific volume will bias your results.
10. Read the bottom of the meniscus to the hundredths place. For example, your initial burette volume may be 0.17 mL. You must estimate the last digit. You can place a piece of white paper behind the burette to help visualize the meniscus and the gradations on the burette. It is important to have your eye at the same level as the burette gradations to minimize parallax in your measurement.