Supplemental Material - Stanford University (2019. 11. 12.)


Supplemental Material

Single-blind Inter-comparison of Methane Detection Technologies – Results from the Stanford/EDF Mobile Monitoring Challenge

Arvind P. Ravikumar*1, Sindhu Sreedhara2, Jingfan Wang2, Jacob Englander2β, Daniel Roda-Stuart2†, Clay Bell3, Daniel Zimmerle3, David Lyon4, Isabel Mogstad4, Ben Ratner4, Adam R. Brandt2

1 Harrisburg University of Science and Technology, Harrisburg, Pennsylvania, US
2 Stanford University, Stanford, California, US
3 Colorado State University Energy Institute, Fort Collins, Colorado, US
4 Environmental Defense Fund, Washington DC, US

*Corresponding author: aravikumar@harrisburgu.edu

August 16, 2019

β Current affiliation: California Air Resources Board, Sacramento, California, US
† Current affiliation: Alphataraxia Management, Los Angeles, California, US

List of Contents:

SM_Figure 1. Test site configuration at METEC. Site configuration at the Methane Emissions Technology Evaluation Center (METEC) in Fort Collins, CO, indicating typical oil and gas equipment arranged in 5 'pad' configurations.

SM_Figure 2. Test site configuration in California. Site configuration at the Northern California Gas Yard in Knights Landing, CA, showing the approximate locations of the three leak sources. The red circle indicates the location of the anomalous methane emissions from the facility during the test week.

SM_Figure 3. Performance results of University of Calgary (drone) in the Stanford/EDF Mobile Monitoring Challenge. Quantification parity chart between actual and measured leak rates. The drone technology was tested on only one of the days due to airspace restrictions, resulting in a small sample size.

SM_Figure 4. Wind speed distribution at METEC during the Stanford/EDF Mobile Monitoring Challenge. Histogram of the 5-minute wind speeds collected at METEC during the two weeks of testing. Winds greater than 15 mph were observed 23% of the time during week 1, and less than 2% of the time during week 2.



SM_Figure 5. An example of a weak-interference scenario during Week 1 of testing at METEC. The 10-minute binary yes/no detection test scenario had a 2 m/s average wind from 202.6° and leaks on pads 1 and 4. Because pad 4 could experience interference from pad 1, the results from pad 4 were discarded from the statistical analysis. Here, the 40° cone contained 80% of all wind vectors in the 10-minute test period.

SM_Figure 6. An example of a strong-interference scenario during Week 2 of testing at METEC. The 20-minute detection and quantification test scenario had a 1.4 m/s average wind from 58.6° and leaks on pads 3, 4, and 5. Because pads 2 and 4 could experience interference from pad 5, the results from pads 2 and 4 were discarded from the statistical analysis. Here, the 40° cone did not contain at least 50% of the wind vectors in the 20-minute test interval and was therefore expanded to 49°.

SM_Figure 7. Interference analysis of Heath Consultants' performance. The fraction of correctly identified tests for true positives (red) and true negatives (blue) across the three scenarios considered in this analysis. The error bars correspond to 95% confidence intervals associated with finite sample sizes.

SM_Figure 8. Interference analysis of Picarro Inc.'s performance. The fraction of correctly identified tests for true positives (red) and true negatives (blue) across the three scenarios considered in this analysis. The error bars correspond to 95% confidence intervals associated with finite sample sizes.

SM_Figure 9. Quantification parity chart for Picarro under the weak- and strong-interference scenarios. While there was marginal improvement in R², the average error increased from -0.9 scfh to -1.3 scfh. The parameter values in parentheses represent the base-case scenario.

SM_Figure 10. Interference analysis of SeekOps' performance. The fraction of correctly identified tests for true positives (red) and true negatives (blue) across the three scenarios considered in this analysis. The error bars correspond to 95% confidence intervals associated with finite sample sizes.

SM_Figure 11. Interference analysis of Aeris Technologies' performance. The fraction of correctly identified tests for true positives (red) and true negatives (blue) across the three scenarios considered in this analysis. The error bars correspond to 95% confidence intervals associated with finite sample sizes.

SM_Figure 12. Interference analysis of Advisian's performance. The fraction of correctly identified tests for true positives (red) and true negatives (blue) across the three scenarios considered in this analysis. The error bars correspond to 95% confidence intervals associated with finite sample sizes.

SM_Figure 13. Interference analysis of ABB/ULC Robotics' performance. The fraction of correctly identified tests for true positives (red) and true negatives (blue) across the three scenarios considered in this analysis. The error bars correspond to 95% confidence intervals associated with finite sample sizes.


SM_Figure 14. Interference analysis of Baker Hughes' (GE) performance. The fraction of correctly identified tests for true positives (red) and true negatives (blue) across the three scenarios considered in this analysis. The error bars correspond to 95% confidence intervals associated with finite sample sizes.

SM_Table 1: Specification of the participating technologies. Self-reported performance parameters of the participating technologies. These values correspond to specifications as available during the application process for the Stanford/EDF Mobile Monitoring Challenge (October 2017) and may be substantially different now. Actual field performance may not adhere to these specifications (see main text for results).

SM_Table 2. Stanford/EDF Mobile Monitoring Challenge overview. Test dates, participating teams, test locations, and self-reported detection limits from all the teams participating in the Stanford/EDF Mobile Monitoring Challenge.

SM_Table 3. Test Protocols in the Stanford/EDF Mobile Monitoring Challenge. General test protocols in the Stanford/EDF Mobile Monitoring Challenge, increasing in complexity from simple yes/no detection tests to complex multi-leak detection and quantification tests.

SM_Table 4. Interference results from Heath Consultants. Performance of Heath Consultants in the weak-interference and strong-interference scenarios, compared to the base-case scenario. The results are presented as percentages, with the sample size in parentheses.

SM_Table 5. Interference results from Picarro Inc. Performance of Picarro Inc. in the weak-interference and strong-interference scenarios, compared to the base-case scenario. The results are presented as percentages, with the sample size in parentheses.

SM_Table 6. Interference results from SeekOps. Performance of SeekOps in the weak-interference and strong-interference scenarios, compared to the base-case scenario. The results are presented as percentages, with the sample size in parentheses.

SM_Table 7. Interference results from Aeris Technologies. Performance of Aeris Technologies in the weak-interference and strong-interference scenarios, compared to the base-case scenario. The results are presented as percentages, with the sample size in parentheses.

SM_Table 8. Interference results from Advisian. Performance of Advisian in the weak-interference and strong-interference scenarios, compared to the base-case scenario. The results are presented as percentages, with the sample size in parentheses.

SM_Table 9. Interference results from ABB/ULC Robotics. Performance of ABB/ULC Robotics in the weak-interference and strong-interference scenarios, compared to the base-case scenario. The results are presented as percentages, with the sample size in parentheses.

SM_Table 10. Interference results from Baker Hughes (GE). Performance of Baker Hughes (GE) in the weak-interference and strong-interference scenarios, compared to the base-case scenario. The results are presented as percentages, with the sample size in parentheses.


1. Selection process

The selection process for the MMC was undertaken in collaboration with scientists, project managers, and an industrial advisory board set up to advise on the commercial promise of new methane leak detection systems. The 10 technologies that participated in the MMC were chosen in three st