From Halpern et al. 2012 supplemental info:
The ‘Lasting Special Places’ sub-goal focuses instead on those geographic locations that hold particular value for aesthetic, spiritual, cultural, recreational or existence reasons. This sub-goal is particularly hard to quantify. Ideally one would survey every community around the world to determine the top list of special places, and then assess how those locations are faring relative to a desired state (e.g., protected or well managed). The reality is that such lists do not exist. Instead, we assume areas that are protected represent these special places (i.e. the effort to protect them suggests they are important places).
Clearly this is an imperfect assumption, but in many cases it will be true. Using lists of protected areas as the catalogue of special places then creates the problem of determining a reference condition. We do not know how many special places have yet to be protected, and so all identified special places end up also being protected. To solve this problem we make two important assumptions. First, we assume that all countries have roughly the same percentage of their coastal waters and coastline that qualify as lasting special places. In other words, they all have the same reference target (as a percentage of the total area). Second, we assume that the target reference level is 30% of area protected.
The model for this goal considers the inland coastal zone (up to 1 km inland) independently from, and equally weighted with, the offshore coastal zone (up to 3 nm offshore). The status for this goal is calculated as:
\[X_{LSP} = \frac{\left(\frac{Area_{P}}{Area_{P_{ref}}} + \frac{Area_{MPA}}{Area_{MPA_{ref}}}\right)}{2}\]
where:

- \(Area_{P}\) = the protected area within the inland coastal zone (up to 1 km inland)
- \(Area_{P_{ref}}\) = the reference protected area for the inland coastal zone (30% of the zone's total area)
- \(Area_{MPA}\) = the marine protected area within the offshore coastal zone (up to 3 nm offshore)
- \(Area_{MPA_{ref}}\) = the reference marine protected area for the offshore coastal zone (30% of the zone's total area)
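As a hypothetical worked example (illustrative numbers only): a region with 12% of its inland coastal zone and 24% of its offshore coastal zone protected, measured against the 30% reference level, would score

\[X_{LSP} = \frac{\frac{0.12}{0.30} + \frac{0.24}{0.30}}{2} = \frac{0.4 + 0.8}{2} = 0.6\]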
v2022: Using updated June 2022 data. Switched raster functions over to their terra equivalents.

v2021: Using updated February 2021 data.
Reference: IUCN and UNEP-WCMC (2022), The World Database on Protected Areas (WDPA) [On-line], June 2022. Cambridge, UK: UNEP-WCMC. Available at: www.protectedplanet.net.
Downloaded: June 6, 2022
Description: Shapefile of World Database on Protected Areas
Time range: 1800 - 2021; some protected areas do not have an associated “status year” and are reported as year 0.
Format: Shapefile
File location:
Mazu:git-annex/globalprep/_raw_data/wdpa_mpa/d2022/WDPA_Jun2022_Public_shp/
The WDPA-MPA dataset comes as a shapefile or geodatabase in the WGS84 coordinate reference system. Once the polygons have been prepped, we rasterize the results to 500 m resolution. This process is all done in the script `1_prep_wdpa_rast.Rmd`. After that is complete, move on to computing zonal statistics.
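That prep script is not reproduced here. As a minimal sketch of the rasterization step only, assuming a prepped polygon layer carrying the WDPA `STATUS_YR` attribute and a 500 m Mollweide template raster (both file names below are hypothetical), it could look roughly like:

library(terra)

### Hypothetical inputs: a prepped WDPA polygon layer and a 500 m Mollweide template grid.
wdpa_poly <- terra::vect('int/wdpa_prepped.shp')
template  <- terra::rast('int/template_moll_500m.tif')

### Reproject the polygons to match the template's coordinate reference system.
wdpa_moll <- terra::project(wdpa_poly, terra::crs(template))

### Burn the STATUS_YR attribute into the 500 m grid; each cell gets a single STATUS_YR
### value where polygons overlap. The actual prep script may resolve overlaps differently
### (e.g., keeping the earliest status year). Recall that protected areas with no status
### year carry a value of 0 in the raw data.
wdpa_rast <- terra::rasterize(wdpa_moll, template, field = 'STATUS_YR')

terra::writeRaster(wdpa_rast, 'rast/wdpa_sketch_moll_500m.tif', overwrite = TRUE)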
Comparing the global WDPA raster to the 3 nautical mile offshore and 1 km inland region rasters, we can tally the protected area within each region and compare it to the total area within each region. Note that each cell is 500 m x 500 m, so its area is 0.25 km², but since we are simply calculating a ratio, this factor cancels out.
zonal_files <- c('zonal_3nm' = file.path(dir_goal, 'int', 'zonal_stats_3nm.csv'),
                 'zonal_1km' = file.path(dir_goal, 'int', 'zonal_stats_1km.csv'),
                 'zonal_eez' = file.path(dir_goal, 'int', 'zonal_stats_eez.csv'))

rast_wdpa <- terra::rast(file.path(dir_goal_anx, 'rast', 'wdpa_2022_moll_500m.tif'))

### point to 500 m rasters for 3 nautical mile coastal regions, and 1 km inland coastal regions.
dir_zones <- file.path(dir_anx, 'spatial/d2014/data/rgn_mol_raster_500m')

rgn_rast_list <- c(
  'zonal_3nm' = file.path(dir_zones, 'rgn_offshore3nm_mol_500mcell.tif'),
  'zonal_1km' = file.path(dir_zones, 'rgn_inland1km_mol_500mcell.tif'),
  'zonal_eez' = file.path(dir_zones, 'rgn_eez_mol_500mcell.tif'))

### Remove all files in `int` if it's the first time working through this data prep for this assessment
### Filters out finished zonal files: if zonal files don't exist yet, they will be created (comment out to recalculate)
zonal_files_to_run <- zonal_files[!file.exists(zonal_files)]
rgn_rast_list <- rgn_rast_list[!file.exists(zonal_files)]
### NOTE: The crosstab function returns this warning - does it affect the
### outcomes, or does the function coerce the correct outcome?
# Warning message:
# In FUN(X[[i]], ...) : integer overflow - use sum(as.numeric(.))
### zonal() wouldn't work since we want to track the frequency of each
### year value within each rgn_id value.
lsp_crosstab <- function(rgn_rast_file, rast_values) {
  rgn_rast <- terra::rast(rgn_rast_file)
  message('Cross tabulating ', rgn_rast_file)
  rast_df <- terra::crosstab(c(rast_values, rgn_rast), useNA = TRUE) %>%
    as.data.frame() %>%
    setNames(c('year', 'rgn_id', 'n_cells')) %>%
    mutate(year   = as.integer(as.character(year)),
           rgn_id = as.integer(as.character(rgn_id))) %>%
    arrange(rgn_id, year)
  return(rast_df)
}
### Processing & saving zonal statistics for a single raster
x <- lsp_crosstab(rgn_rast_list[1], rast_values = rast_wdpa) ### ~25 minutes to run (3 nm)
y <- lsp_crosstab(rgn_rast_list[2], rast_values = rast_wdpa) ### ~19 minutes to run (1 km)
z <- lsp_crosstab(rgn_rast_list[3], rast_values = rast_wdpa) ### ~50 minutes to run (EEZ)
## Save these files to the int folder
write_csv(x, zonal_files_to_run[1])
write_csv(y, zonal_files_to_run[2])
write_csv(z, zonal_files_to_run[3])
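The integer-overflow warning flagged in the comments above is left as an open question in this script. As an optional, purely hypothetical sanity check (not part of the original workflow; it assumes the 3 nm crosstab result `x` is still in memory and that `rgn_rast_list[1]` still points at the raster used to build it), the per-region cell totals could be compared against independent counts from `terra::freq()`:

### Hypothetical sanity check: do crosstab cell totals per region match freq() counts?
chk_rgn_rast <- terra::rast(rgn_rast_list[1])            ### 3 nm region raster
chk_freq <- terra::freq(chk_rgn_rast) %>%                ### cell count per raster value (NA cells excluded)
  as.data.frame() %>%
  dplyr::select(rgn_id = value, n_cells_freq = count)
chk_ct <- x %>%                                          ### crosstab result for the same raster
  filter(!is.na(rgn_id)) %>%
  group_by(rgn_id) %>%
  summarize(n_cells_ct = sum(as.numeric(n_cells)), .groups = 'drop')
chk <- full_join(chk_freq, chk_ct, by = 'rgn_id') %>%
  mutate(match = n_cells_freq == n_cells_ct)
table(chk$match, useNA = 'ifany')                        ### any FALSE or NA would warrant a closer look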
Once the WDPA raster is cross-tabulated against the OHI region rasters (3 nm offshore and 1 km inland for the goal, plus the full EEZ used later for resilience), we have the number of protected cells, identified by year of protection, within each region. NA values are unprotected cells.
stats_3nm <- read_csv(zonal_files['zonal_3nm'])
print(summary(stats_3nm))

stats_1km <- read_csv(zonal_files['zonal_1km'])
print(summary(stats_1km))

stats_eez <- read_csv(zonal_files['zonal_eez'])
print(summary(stats_eez))
Grouping by rgn_id, the total number of cells per region is determined by summing cell counts across ALL years, including cells with year == NA (unprotected cells). We can then determine the protected area for each year by taking the cumulative sum of cells up to that year.
Since the cells are 500 m on a side, we can calculate area by multiplying the cell count by 0.25 km² per cell (equivalently, dividing the cell count by 4, as the code below does).
Finally, we can calculate the status of a region for any given year as the ratio of protected to total area, normalized by the goal's target of 30% protected area.
stats_3nm <- read_csv(zonal_files['zonal_3nm'])
stats_1km <- read_csv(zonal_files['zonal_1km'])
stats_eez <- read_csv(zonal_files['zonal_eez'])

rgn_eez  <- region_data()
max_year <- max(c(stats_1km$year, stats_3nm$year, stats_eez$year), na.rm = TRUE)
### Determine total cells per region (n_cells_tot) and then a cumulative
### total of cells per region
region_data() #use this function to call rgns_eez, which is called below in the function "calc_areas()"
calc_areas <- function(stats_df) {
  area_df <- stats_df %>%
    group_by(rgn_id) %>%
    mutate(n_cells_tot = sum(n_cells),
           a_tot_km2   = n_cells_tot / 4) %>%
    filter(!is.na(year) & !is.na(rgn_id)) %>%
    mutate(n_cells_cum = cumsum(n_cells),
           a_prot_km2  = n_cells_cum / 4) %>%
    complete(year = 2000:max_year) %>%
    ungroup() %>%
    fill(-year, .direction = 'down') %>%
    dplyr::select(-contains('cell')) %>%
    distinct() %>%
    left_join(rgns_eez, by = 'rgn_id') %>%
    dplyr::select(rgn_id:rgn_name)

  return(area_df)
}
prot_1km <- stats_1km %>%
  calc_areas()
prot_3nm <- stats_3nm %>%
  calc_areas()
prot_eez <- stats_eez %>%
  calc_areas()
write_csv(prot_3nm, file.path(dir_goal, 'int', 'area_protected_3nm.csv'))
write_csv(prot_1km, file.path(dir_goal, 'int', 'area_protected_1km.csv'))
write_csv(prot_eez, file.path(dir_goal, 'int', 'area_protected_eez.csv'))
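These area files feed the output layers below; the status itself is calculated downstream by the OHI Toolbox from those layers. For reference only, a minimal sketch of that calculation (protected area relative to total area, normalized by the 30% target and capped at 1, then averaged across the two zones) could be run on the intermediate files just written. The object `lsp_status_sketch` is hypothetical and not used elsewhere in this prep:

### Hypothetical sketch of the status calculation described above (not part of this prep).
area_ref_pct <- 0.30   ### 30% protected area reference target

lsp_status_sketch <- read_csv(file.path(dir_goal, 'int', 'area_protected_3nm.csv')) %>%
  dplyr::select(rgn_id, year, a_tot_3nm = a_tot_km2, a_prot_3nm = a_prot_km2) %>%
  left_join(read_csv(file.path(dir_goal, 'int', 'area_protected_1km.csv')) %>%
              dplyr::select(rgn_id, year, a_tot_1km = a_tot_km2, a_prot_1km = a_prot_km2),
            by = c('rgn_id', 'year')) %>%
  mutate(status_3nm = pmin(1, (a_prot_3nm / a_tot_3nm) / area_ref_pct),  ### offshore component, capped at 1
         status_1km = pmin(1, (a_prot_1km / a_tot_1km) / area_ref_pct),  ### inland component, capped at 1
         status     = (status_3nm + status_1km) / 2)                     ### equal weighting of the two zones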
From the protected area files, write out the individual layers ready for the Toolbox™.
prot_3nm <- read_csv(file.path(dir_goal, 'int', 'area_protected_3nm.csv')) %>%
  rename(area = a_tot_km2,
         a_prot_3nm = a_prot_km2)
prot_1km <- read_csv(file.path(dir_goal, 'int', 'area_protected_1km.csv')) %>%
  rename(area = a_tot_km2,
         a_prot_1km = a_prot_km2)

write_lsp_layer <- function(df, layers, layername) {
  df1 <- df[ , c('rgn_id', layers)] %>%
    filter(rgn_id <= 250) %>%
    distinct()
  write_csv(df1, file.path(dir_goal, 'output', paste0(layername, '.csv')))
}

a_tot_3nm <- write_lsp_layer(prot_3nm, 'area', 'rgn_area_offshore3nm')
a_tot_1km <- write_lsp_layer(prot_1km, 'area', 'rgn_area_inland1km')

a_prot_3nm <- write_lsp_layer(prot_3nm, c('year', 'a_prot_3nm'), 'lsp_prot_area_offshore3nm')
a_prot_1km <- write_lsp_layer(prot_1km, c('year', 'a_prot_1km'), 'lsp_prot_area_inland1km')
Some goals require calculation of resilience within the nearshore area (3 nm) or the entire EEZ.
area_ref <- 0.30 ### 30% of area protected = reference point

resil_3nm <- read_csv(file.path(dir_goal, 'int', 'area_protected_3nm.csv')) %>%
  mutate(resilience.score = (a_prot_km2 / a_tot_km2) / area_ref,
         resilience.score = ifelse(resilience.score > 1, 1, resilience.score))

resil_eez <- read_csv(file.path(dir_goal, 'int', 'area_protected_eez.csv')) %>%
  mutate(resilience.score = (a_prot_km2 / a_tot_km2) / area_ref,
         resilience.score = ifelse(resilience.score > 1, 1, resilience.score))

## Save resilience scores for 3 nm and EEZ data
tmp_3nm <- resil_3nm %>%
  dplyr::select(rgn_id, year, resilience.score)
write_csv(tmp_3nm, file.path(dir_goal, 'output', "mpa_3nm_resilience.csv"))

tmp_eez <- resil_eez %>%
  dplyr::select(rgn_id, year, resilience.score)
write_csv(tmp_eez, file.path(dir_goal, 'output', "mpa_eez_resilience.csv"))
There was no gapfilling for these data. Gapfill files are created with values of 0.
library(dplyr)
<- read.csv("output/mpa_eez_resilience.csv")%>%
res_eez mutate(resilience.score = 0) %>%
rename(gapfilled = resilience.score)
write.csv(res_eez, "output/mpa_eez_resilience_gf.csv", row.names=FALSE)
<- read.csv("output/mpa_3nm_resilience.csv")%>%
res_3nm mutate(resilience.score = 0) %>%
rename(gapfilled = resilience.score)
write.csv(res_3nm, "output/mpa_3nm_resilience_gf.csv", row.names=FALSE)
<- read.csv("output/lsp_prot_area_inland1km.csv") %>%
inland mutate(a_prot_1km = 0) %>%
rename(gapfilled = a_prot_1km)
write.csv(inland, "output/lsp_prot_area_inland1km_gf.csv", row.names=FALSE)
<- read.csv("output/lsp_prot_area_offshore3nm.csv") %>%
offshore mutate(a_prot_3nm = 0)%>%
rename(gapfilled = a_prot_3nm)
write.csv(offshore, "output/lsp_prot_area_offshore3nm_gf.csv", row.names=FALSE)
Plot scores for the 2022 vs. 2021 assessment years
library(ggplot2)
#library(plotly)
## Calculates this year and last year's coastal marine protected area ratio (CMPA/Ref-CMPA) for plotting
status_3nm_new <- read_csv(file.path(dir_goal, 'output', 'lsp_prot_area_offshore3nm.csv')) %>%
  full_join(read_csv(file.path(dir_goal, 'output', 'rgn_area_offshore3nm.csv')),
            by = 'rgn_id') %>%
  mutate(pct_prot_3nm_new = a_prot_3nm / area,
         status_3nm_new   = pct_prot_3nm_new / 0.3,
         status_3nm_new   = ifelse(status_3nm_new > 1, 1, status_3nm_new)) %>%
  filter(year == max(year)) %>%
  dplyr::select(rgn_id, pct_prot_3nm_new, status_3nm_new)

status_3nm_old <- read_csv(file.path(dir_goal, '../v2021/output', 'lsp_prot_area_offshore3nm.csv')) %>%
  full_join(read_csv(file.path(dir_goal, 'output', 'rgn_area_offshore3nm.csv')),
            by = 'rgn_id') %>%
  mutate(pct_prot_3nm_old = a_prot_3nm / area,
         status_3nm_old   = pct_prot_3nm_old / 0.3,
         status_3nm_old   = ifelse(status_3nm_old > 1, 1, status_3nm_old)) %>%
  filter(year == max(year)) %>%
  dplyr::select(rgn_id, pct_prot_3nm_old, status_3nm_old)
## Calculates this year and last year's coastline protected ratio (CP/Ref-CP) for plotting
status_1km_new <- read_csv(file.path(dir_goal, 'output', 'lsp_prot_area_inland1km.csv')) %>%
  full_join(read_csv(file.path(dir_goal, 'output', 'rgn_area_inland1km.csv')),
            by = 'rgn_id') %>%
  mutate(pct_prot_1km_new = a_prot_1km / area,
         status_1km_new   = pct_prot_1km_new / 0.3,
         status_1km_new   = ifelse(status_1km_new > 1, 1, status_1km_new)) %>%
  filter(year == max(year)) %>%
  dplyr::select(rgn_id, pct_prot_1km_new, status_1km_new)

status_1km_old <- read_csv(file.path(dir_goal, '../v2021/output', 'lsp_prot_area_inland1km.csv')) %>%
  full_join(read_csv(file.path(dir_goal, 'output', 'rgn_area_inland1km.csv')),
            by = 'rgn_id') %>%
  mutate(pct_prot_1km_old = a_prot_1km / area,
         status_1km_old   = pct_prot_1km_old / 0.3,
         status_1km_old   = ifelse(status_1km_old > 1, 1, status_1km_old)) %>%
  filter(year == max(year)) %>%
  dplyr::select(rgn_id, pct_prot_1km_old, status_1km_old)
# Updated sequence to use pivot_longer() instead of gather()
lsp_new_old <- status_3nm_new %>%
  full_join(status_3nm_old, by = c('rgn_id')) %>%
  full_join(status_1km_new, by = c('rgn_id')) %>%
  full_join(status_1km_old, by = c('rgn_id')) %>%
  mutate(status_old = (status_3nm_old + status_1km_old) / 2,
         status_new = (status_3nm_new + status_1km_new) / 2) %>%
  pivot_longer(cols = contains('new'), names_to = 'rgn', values_to = 'score_new') %>%
  pivot_longer(cols = contains('old'), names_to = 'rgn_old', values_to = 'score_old') %>%
  mutate(rgn = str_replace(rgn, '_new', ''),
         rgn_old = str_replace(rgn_old, '_old', ''),
         score_new = round(score_new, 3),
         score_old = round(score_old, 3)) %>%
  filter(rgn_id <= 250) %>%
  filter(rgn == rgn_old) %>%
  dplyr::select(-rgn_old) %>%
  left_join(rgns_eez, by = 'rgn_id') %>%
  dplyr::select(rgn_id:rgn_name)
lsp_status_plot <- ggplot(lsp_new_old,
                          aes(x = score_old, y = score_new, key = rgn_name)) +
  geom_point(alpha = .6) +
  theme(legend.position = 'none') +
  geom_abline(slope = 1, intercept = 0, color = 'red') +
  labs(x = 'LSP status v2021 (data through March 2021)',
       y = 'LSP status v2022 (data through Jan 2022)',
       title = 'Comparing LSP status: 2022 vs 2021') +
  facet_wrap( ~ rgn)
# got rid of ggplotly
lsp_status_plot

ggsave(file.path(dir_goal, 'Figs/plot_v2021_v2022.png'),
       plot = lsp_status_plot, height = 4.5, width = 6)

x <- lsp_new_old %>%
  mutate(diff = score_new - score_old) %>%
  filter(rgn == 'status' & abs(diff) > 0.05) %>%
  mutate(abs_diff = abs(diff))
write.csv(x, "output/major_changes_2022.csv") # added this so I could come back to it without rerunning everything
The purpose of the chunk below is unclear; it appears to spot-check the new status values for a single region (rgn_id 146).
## Calculates this year and last year's coastline protected ratio (CP/Ref-CP) for plotting
status_1km_new <- read_csv(file.path(dir_goal, 'output', 'lsp_prot_area_inland1km.csv')) %>%
  full_join(read_csv(file.path(dir_goal, 'output', 'rgn_area_inland1km.csv')),
            by = 'rgn_id') %>%
  mutate(pct_prot_1km_new = a_prot_1km / area,
         status_1km_new   = pct_prot_1km_new / 0.3,
         status_1km_new   = ifelse(status_1km_new > 1, 1, status_1km_new)) %>%
  filter(rgn_id == 146)

status_3nm_new <- read_csv(file.path(dir_goal, 'output', 'lsp_prot_area_offshore3nm.csv')) %>%
  full_join(read_csv(file.path(dir_goal, 'output', 'rgn_area_offshore3nm.csv')),
            by = 'rgn_id') %>%
  mutate(pct_prot_3nm_new = a_prot_3nm / area,
         status_3nm_new   = pct_prot_3nm_new / 0.3,
         status_3nm_new   = ifelse(status_3nm_new > 1, 1, status_3nm_new)) %>%
  filter(rgn_id == 146)

lsp_new_old <- status_3nm_new %>%
  full_join(status_1km_new, by = c('rgn_id', 'year')) %>%
  mutate(status_new = (status_3nm_new + status_1km_new) / 2)
# prov_wrapup()