This vignette will walk you through cleaning up quadrat data from CoralNet to produce easy-to-analyze data frames.

A little about the data: the data being cleaned are the softcoral_SQuads and softcoral_annotations data sets, and this vignette builds off of the cropping vignette as well. Data were collected by the Baum Lab and Kiritimati Field Teams and are the uncleaned version of the data found in Maucieri and Baum 2021, Biological Conservation. The softcoral_SQuads data are from photo quadrats (0.9 m by 0.6 m) that were annotated with 54 random points each, while the softcoral_annotations data are from photo quadrats (1 m by 1 m) that were annotated with 100 random points each. At each of these annotated points, the substrate was identified. Photo quadrats were collected on Kiritimati Island in the Republic of Kiribati and document coral cover over time and space. The annotations and the output data frames were produced using CoralNet, and all annotations were done manually by trained researchers.
This vignette will follow these steps:

1. Take the small quadrats (softcoral_SQuads) and show how to clean the data.
2. Crop the large quadrat data (softcoral_annotations) and clean it too.
3. Join the two together to produce a large, easy-to-use data frame for future analyses, with some further cleaning of this joint data set.
First, let's load the quadcleanR package, the dplyr and tidyr packages, and the data, plus a few extras used to create this vignette.
library(quadcleanR)
library(dplyr)
library(tidyr)
library(shiny)
library(knitr)
library(kableExtra)
data("softcoral_SQuads")
Now let me point out some unique aspects of this data:
tail(softcoral_SQuads)
Image.ID | Image.name | Annotation.status | Points | AcCor | AcDig | Acr_arb | Acrop | AcroTab | Astreo | B_Acr_arb | B_Acro | B_Astre | BAT | B_Cosc | B_Echin | B_FavHal | B_Favia | B_FaviaM | B_FaviaS | B_FaviaSt | B_Favites | B_FavPent | B_Fung | BGard | B_GonEd | B_Herpo | B_HYDNO | B_HyExe | BlAcro.Cor | B_Lepta | B_Lepto | Blisop | B_Lobo | BlTurbFol | B_MOEN | B_MOFO | B_Monta | B_Monti | B_Oxyp | B_Paly | B_PaveDUER | B_Pavona | B_PEYDO | B_Plat | B_PMEAN | B_Pocillo | B_Porit | B_Psam | B_PVAR | B_Sando | B_UnkCoral | Cirr | COSC | ECHIN | Fav | FavHal | Favia | FaviaM | FaviaS | FaviaSt | FavPent | Fung | Gardin | GonEd | Herpo | HYDNO | HyExe | Isopora | Lepta | Lepto | Lobo | X.MOEN | X.MOFO | Monta | Monti | Oxyp | Paly | PaveDUER | Pavon | PEYDO | Plat | Plero | PMEAN | Pocill | Porit | Psam | PVAR | Sando | Tuba | TURB | UnkCoral | ANEM | B_Clad | B_Sinu | Clad | EncBry | EUR | HYDCO | Hydra | Mille | MOBI | Sarco | SECO | Sinu | Sponge | Stylas | UnkTUN | XmasW | ZOAN | B_Sarco | Sand | Sediment | SCRO | B_Loph | CYAN | Loph | Rubble | SHAD | Trans | Unc | AVRA | Caul | CCA | Dict | DICTY | Hali | Lobph | Macro | Mdict | Pad | Peysson | Turf | TURFH | Unidentified | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2517 | 1403339 | KI2011_site9_Q5.jpg | Confirmed | 54 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5.556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 94.444 |
2518 | 1403340 | KI2011_site9_Q6.jpg | Confirmed | 54 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5.556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 94.444 |
2519 | 1403341 | KI2011_site9_Q7.jpg | Confirmed | 54 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 98.148 |
2520 | 1403342 | KI2011_site9_Q8.jpg | Confirmed | 54 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100.000 |
2521 | 1403343 | KI2011_site9_Q9.jpg | Confirmed | 54 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100.000 |
2522 | ALL IMAGES | NA | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100.000 |
This data has an Image.ID column, which was arbitrarily added to this data set, so we are going to remove it as it holds no scientific value. There is also a final row which sums all quadrats, but since we will be removing quadrats and points to clean these data up, we will remove that final row as well. The Image.name column is the unique ID for each photo quadrat, but it is messy and not easy to use, so we will split it into new columns and add more information. Annotation.status is a column from CoralNet which tells whether the annotations in each photo quadrat have been confirmed by human researchers or are only based on AI. The Points column tells us how many randomly annotated points there are in each quadrat, and since they are all 54, we know these data are from the smaller quadrats. The rest of the columns are the different coral and substrate IDs and how many points were annotated for each tag in each photo.

First, let's remove the unneeded columns and make sure we are only working with "Confirmed" annotations.
SQuad_confirmed <- softcoral_SQuads %>% filter(Annotation.status == "Confirmed") %>% select(-c(Image.ID, Points, Annotation.status))
Now we will separate the Image.name column into more descriptive columns.
SQuad_separated <- separate(SQuad_confirmed, Image.name, sep = "_", into = c("Field.Season", "Site", "Quadrat"))
## Warning: Expected 3 pieces. Additional pieces discarded in 3 rows [275, 446,
## 477].
If you look closely, there are still .jpg and .jpeg extensions in the quadrat names, so let's remove those and change the naming of siteT19 to site40.
SQuad_nojpg <- rm_chr(SQuad_separated, c(".jpg", ".jpeg"))
SQuad_site40 <- change_values(SQuad_nojpg, "Site", "siteT19", "site40")
Now let's look at the levels of some of the columns.
unique(SQuad_site40$Field.Season)
## [1] "KI2007" "KI2009" "KI2010" "KI2011"
unique(SQuad_site40$Site)
## [1] "site10" "site11" "site12" "site13" "site14" "site15" "site16" "site17"
## [9] "site18" "site19" "site1" "site20" "site21" "site22" "site23" "site24"
## [17] "site25" "site26" "site27" "site28" "site29" "site2" "site30" "site31"
## [25] "site32" "site33" "site34" "site35" "site36" "site37" "site3" "site4"
## [33] "site5" "site6" "site7" "site8" "site9" "site38" "site39" "site40"
Sometimes you may not want every year or every site in the data, so you may want to subset some of them out. For this example I don't want to subset anything out, but I will demonstrate how to subset based on values later, when we get to the large quadrats; a hypothetical example is sketched below.
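For instance, if you did want to drop a particular field season, a hypothetical call (not run as part of this workflow, mirroring the keep_rm() calls used later in this vignette) might look like:

# illustrative only: drop all KI2007 quadrats (the result is not used downstream)
SQuad_noKI2007 <- keep_rm(SQuad_site40, c("KI2007"), select = "row", exact = FALSE, colname = "Field.Season", keep = FALSE)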
Now to prep for the addition of the large quadrats, I want to update the column names for this data frame. The column names are currently set as the tag shorthand used during the annotation process, but now I want them to better reflect the actual substrate names.
data("coral_labelset")
head(coral_labelset)
short_name | full_name | taxonomic_name | functional_group | life_history |
---|---|---|---|---|
Rubble | Broken_coral_rubble | Broken_coral_rubble | Abiotic_Substrate | Not_Coral |
SCRO | Consolidated_hard_rock | Consolidated_hard_rock | Abiotic_Substrate | Not_Coral |
Sand | Sand | Sand | Abiotic_Substrate | Not_Coral |
Sediment | Sediment | Sediment | Abiotic_Substrate | Not_Coral |
CCA | Crustose_coralline_algae | Crustose_coralline_algae | Crustose_Algae | Not_Coral |
Peysson | Peyssonnelia | Peyssonnelia | Crustose_Algae | Not_Coral |
This is what my label set document looks like, but you could also make this in R by joining a series of vectors into a data frame, as sketched below.
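Here is a minimal sketch (only a few of the rows shown, under the hypothetical name my_labelset) of building an equivalent label set directly in R:

# an illustrative label set built from vectors (only three rows shown)
my_labelset <- data.frame(
  short_name       = c("Rubble", "SCRO", "Sand"),
  full_name        = c("Broken_coral_rubble", "Consolidated_hard_rock", "Sand"),
  taxonomic_name   = c("Broken_coral_rubble", "Consolidated_hard_rock", "Sand"),
  functional_group = c("Abiotic_Substrate", "Abiotic_Substrate", "Abiotic_Substrate"),
  life_history     = c("Not_Coral", "Not_Coral", "Not_Coral")
)

Now let's fix the column names.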
SQuad_colnames <- change_names(SQuad_site40, coral_labelset, "short_name", "full_name")
names(SQuad_colnames)[1:16]
## [1] "Field.Season" "Site"
## [3] "Quadrat" "Acropora_corymbose"
## [5] "Acropora_digitate" "Acropora_arborescent"
## [7] "Acropora" "Acropora_tabulate"
## [9] "Astreopora" "Bleached_Acropora_arborescent"
## [11] "Bleached_Acropora" "Bleached_Astreopora"
## [13] "Bleached_Acropora_tabulate" "Bleached_Coscinarea"
## [15] "Bleached_Echinophyllia" "Bleached_Favites_halicora"
Much better.
Because these are smaller quadrats with fewer annotated points, before we add in the large quadrat data, let's deal with the substrate that could not be accurately identified. For these photo quadrats, the Shadow, Transect_hardware and Unclear tags need to be removed and not used when we calculate percent cover. If I were going to use these data for a diversity analysis with hard corals, I would also add unknown_hard_coral and Bleached_unknown_hard_coral to this list, but as these data will be used for soft coral analyses, we will leave them in.
SQuad_colnames <- mutate_at(SQuad_colnames, c(4:134), as.numeric)

SQuad_usable <- usable_obs(SQuad_colnames, c("Shadow", "Transect_hardware", "Unclear"),
                           max = TRUE, cutoff = 10)
SQuad_removed <- usable_obs(SQuad_colnames, c("Shadow", "Transect_hardware", "Unclear"),
                            max = TRUE, cutoff = 10, above_cutoff = TRUE)
By identifying how many usable points there are in each quadrat and removing any quadrats that had over 10% of the identified points unusable, we have removed 0 quadrats from analysis. You could view these with the SQuad_removed data frame, but as we have no quadrats to remove, it will be an empty data frame.
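As an optional sanity check (not part of the original workflow), you could confirm that nothing was dropped:

# should return 0, since no quadrat exceeded the cutoff
nrow(SQuad_removed)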
Now we can start working on the large quadrat data.
data("softcoral_annotations")
data("softcoral_LQuads")
I will be cropping this data as outlined in the cropping vignette, so check that out for the details, but before I crop it, I want to remove any unconfirmed quadrats just as we did for the small quads.
LQuad_confirmed <- softcoral_LQuads %>% filter(Annotation.status == "Confirmed")

LQuad_sub <- softcoral_annotations %>% filter(Name %in% unique(LQuad_confirmed$Image.name))
Then we will crop the annotations data.
LQuad_cropped <- crop_area(data = LQuad_sub, row = "Row",
                           column = "Column", id = "Name", dim = c(0.9, 0.6),
                           obs_range = c(36, 64))
Now I will just format this to look like the small quad data so we can join the two.
# pivot to wide format: one row per quadrat, one column per tag
LQuad_wide <- LQuad_cropped %>%
  select(-c(obs, Row, Column)) %>%
  group_by(Name) %>%
  pivot_wider(names_from = Label, values_from = Label, values_fn = length, values_fill = 0)
LQuad_wide <- as.data.frame(LQuad_wide)

# split the image name into descriptive columns and tidy it, as for the small quadrats
LQuad_separated <- separate(LQuad_wide, Name, sep = "_", into = c("Field.Season", "Site", "Quadrat"))

LQuad_nojpg <- rm_chr(LQuad_separated, c(".jpg", ".jpeg"))

LQuad_colnames <- change_names(LQuad_nojpg, coral_labelset, "short_name", "full_name")

LQuad_colnames <- mutate_at(LQuad_colnames, c(4:11), as.numeric)

# keep quadrats with an acceptable number of unusable points, and collect the rest separately
LQuad_usable <- usable_obs(LQuad_colnames, c("Shadow", "Transect_hardware", "Unclear"),
                           max = TRUE, cutoff = 0.1*54)
LQuad_removed <- usable_obs(LQuad_colnames, c("Shadow", "Transect_hardware", "Unclear"),
                            max = TRUE, cutoff = 0.1*54, above_cutoff = TRUE)
With the large quadrats, only 4 photos needed to be removed because they had too many unusable points.
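If you want to see which quadrats those were, an optional check (using the Field.Season, Site and Quadrat columns created above) could be:

# illustrative only: list the quadrats that were dropped
LQuad_removed[, c("Field.Season", "Site", "Quadrat")]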
Alright, now we can join these data frames together and continue with one large data set. As both data sets used the same label set of names, this is easy with the bind_rows() function from the dplyr package.
AllQuads <- bind_rows(SQuad_usable, LQuad_usable) %>% select(-c(unusable))

# tags that did not appear in one of the two data sets are NA after binding, so fill them with 0
AllQuads[, 4:131][is.na(AllQuads[, 4:131])] <- 0
Now let's continue with the cleaning of these data.
There are a few more things that need to be removed, including MPQs (mega photo quadrats), site8.5 and the DEEP photo quadrats.
AllQuad_noDEEP_site8.5 <- keep_rm(AllQuads, c("DEEP", "site8.5"), select = "row", exact = FALSE, colname = "Site", keep = FALSE)

AllQuad_noMPQ <- keep_rm(AllQuad_noDEEP_site8.5, c("MPQ"), select = "row", exact = FALSE, colname = "Quadrat", keep = FALSE)
Now that we have removed the unusable annotations, let's rescale and recalculate cover as proportion cover.
AllQuad_cover <- cover_calc(AllQuad_noMPQ, names(AllQuad_noMPQ[, 4:131]), prop = TRUE)
This data frame is now nicely formatted and could be used for many community-based analyses. This might be a great stopping point for some analyses, but to clean it up further I am going to convert it into long format.
AllQuad_long <- AllQuad_cover %>% pivot_longer(cols = names(AllQuad_cover[, 4:131]), names_to = "Tag_Name", values_to = "prop_cover")
One thing you may notice by looking at the Tag_Name column is that these names are not all unique species; there are duplicates of the same species, categorized into bleaching and non-bleaching forms. For any kind of diversity analysis this would inflate the number of different species, so it is important to combine the different forms of the same species if diversity analyses are being done.
For this clean up, we will walk through three ways of dealing with this, based on what you want to accomplish.
Option A. Categorizing rows.
If you want to use your data in this long format and simply want to categorize everything, using the various categories for your different research questions, you could just add a set of category columns like so:
A_AllQuad_Bleach <- categorize(AllQuad_long, "Tag_Name", values = c("Bleach"), name = "Bleached", binary = TRUE, exact = FALSE)
This categorizes each Tag_Name according to whether it is a bleaching or non-bleaching tag.
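As an illustrative check (not part of the original workflow), you could peek at the new column:

head(A_AllQuad_Bleach[, c("Tag_Name", "Bleached")])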
And you could also add other information in if you have it, like taxonomy.
A_AllQuad_Taxa <- categorize(A_AllQuad_Bleach, "Tag_Name", values = coral_labelset$full_name, name = "Taxonomic_Name", binary = FALSE, categories = coral_labelset$taxonomic_name)
Option B. Categorizing rows and then combining.
Now, after you categorize your rows, perhaps you want to have all the cover values summed at a different level, like the taxonomy level. To do this, the summarise() function from dplyr will work great.
B_AllQuad_taxonomy <- A_AllQuad_Taxa %>% group_by(Field.Season, Site, Quadrat, Taxonomic_Name) %>% summarise(prop_cover = sum(prop_cover))
## `summarise()` has grouped output by 'Field.Season', 'Site', 'Quadrat'. You can
## override using the `.groups` argument.
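This message just notes that the result is still grouped by Field.Season, Site and Quadrat. If you would rather get an ungrouped result (and silence the message), you could set the .groups argument explicitly; here is a sketch, assigned to a separate, hypothetical name so it does not change the workflow:

B_AllQuad_taxonomy_ungrouped <- A_AllQuad_Taxa %>%
  group_by(Field.Season, Site, Quadrat, Taxonomic_Name) %>%
  summarise(prop_cover = sum(prop_cover), .groups = "drop")  # .groups = "drop" returns an ungrouped data frame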
Option C. Wide format summing columns
If you wanted to keep the data in a wide format and sum columns based on taxonomy, to allow for community-level analyses, you could also use the sum_cols() function. To do this, we first need a vector of names to change the current names to, which can be done with a simple match(), unless you already have a vector with the new names in the right order.
current_names <- colnames(AllQuad_cover[, 4:131])
new_names <- coral_labelset[match(current_names, coral_labelset$full_name), ]$taxonomic_name

AllQuad_wide_summed <- sum_cols(AllQuad_cover, from = current_names, to = new_names)
Whichever of the options you choose, you will be able to customize the data to your analysis needs. After that, your data are nearly cleaned. Some other things you may want to add are environmental data or more taxonomic data. The add_data() function can help with adding multiple columns from another data set at a time.
B_AllQuad_LH_FG <- add_data(B_AllQuad_taxonomy, coral_labelset, cols = c("functional_group", "life_history"), data_id = "Taxonomic_Name", add_id = "taxonomic_name", number = 5)

data("environmental_data")

B_AllQuad_enviro <- add_data(B_AllQuad_LH_FG, environmental_data, cols = c("HD_Cat", "HD_Cont", "NPP", "WE", "Region", "WaveEnergy"), data_id = "Site", add_id = "Site", number = 4)
The final things I will do to get these data in shape for analysis are a final categorization of the study years based on the timing of the 2015/2016 El Niño, subsetting the species to only soft corals, and refactoring the levels of some of the variables.
# categorize field seasons relative to the 2015/2016 El Niño
B_AllQuad_timeblock <- categorize(B_AllQuad_enviro, column = "Field.Season",
                                  values = unique(B_AllQuad_enviro$Field.Season), name = "TimeBlock",
                                  binary = FALSE, exact = TRUE,
                                  categories = c(rep("Before", times = 8), rep("During", times = 3), rep("After", times = 4)))

# keep only the soft coral rows
AllQuads_cropclean <- keep_rm(B_AllQuad_timeblock, values = "Soft_coral", select = "row", colname = "functional_group")

# reorder factor levels
AllQuads_cropclean$TimeBlock <- factor(AllQuads_cropclean$TimeBlock, levels = c("Before", "During", "After"))
AllQuads_cropclean$Site <- factor(AllQuads_cropclean$Site, levels = paste("site", seq(1:40), sep = ""))
AllQuads_cropclean$HD_Cat <- factor(AllQuads_cropclean$HD_Cat, levels = c("Very Low", "Low", "Medium", "High", "Very High"))
These data have now been sufficiently cleaned and can be used for many different analyses. Often, once data have been cleaned, the first step is to start exploring them. One thing we can look at is the sample sizes, to see how many quadrats there are across the different sites and years.
sample_size(AllQuads_cropclean, dim_1 = "Site", dim_2 = "Field.Season", count = "Quadrat")
KI2007 | KI2009 | KI2010 | KI2011 | KI2013 | KI2014 | KI2015a | KI2015b | KI2015c | KI2015d | KI2016a | KI2016b | KI2017 | KI2018 | KI2019 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
site1 | 22 | 30 | 0 | 0 | 28 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 29 | 0 | 0 |
site10 | 20 | 21 | 32 | 0 | 0 | 0 | 0 | 0 | 30 | 0 | 0 | 0 | 25 | 0 | 0 |
site11 | 29 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site12 | 30 | 25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 22 | 30 | 31 | 0 | 30 |
site13 | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 23 | 0 | 0 |
site14 | 29 | 22 | 22 | 25 | 13 | 21 | 0 | 0 | 30 | 0 | 16 | 0 | 30 | 0 | 0 |
site15 | 30 | 23 | 26 | 26 | 24 | 0 | 29 | 15 | 29 | 0 | 29 | 30 | 30 | 30 | 0 |
site16 | 28 | 26 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site17 | 29 | 20 | 0 | 19 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site18 | 29 | 28 | 0 | 24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 20 | 0 | 0 |
site19 | 30 | 28 | 26 | 21 | 21 | 0 | 0 | 0 | 28 | 0 | 0 | 0 | 26 | 0 | 0 |
site2 | 25 | 29 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site20 | 28 | 0 | 0 | 0 | 31 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 19 | 0 | 0 |
site21 | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site22 | 30 | 20 | 0 | 0 | 19 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site23 | 30 | 28 | 0 | 0 | 25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 29 | 0 | 0 |
site24 | 30 | 35 | 0 | 22 | 31 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site25 | 30 | 22 | 21 | 24 | 26 | 25 | 0 | 0 | 31 | 0 | 0 | 0 | 17 | 27 | 0 |
site26 | 30 | 27 | 22 | 25 | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 30 | 0 | 0 |
site27 | 30 | 26 | 18 | 24 | 28 | 30 | 30 | 30 | 29 | 30 | 30 | 30 | 29 | 29 | 30 |
site28 | 21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site29 | 21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site3 | 28 | 20 | 30 | 25 | 28 | 32 | 0 | 0 | 30 | 0 | 0 | 0 | 29 | 28 | 30 |
site30 | 30 | 28 | 23 | 0 | 29 | 25 | 30 | 30 | 30 | 0 | 30 | 30 | 30 | 21 | 30 |
site31 | 20 | 27 | 21 | 26 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 20 | 0 | 30 |
site32 | 10 | 18 | 18 | 24 | 30 | 0 | 0 | 30 | 30 | 34 | 30 | 30 | 30 | 30 | 30 |
site33 | 21 | 27 | 0 | 27 | 0 | 0 | 0 | 0 | 0 | 0 | 30 | 0 | 31 | 0 | 0 |
site34 | 22 | 27 | 23 | 19 | 28 | 31 | 0 | 30 | 29 | 0 | 30 | 30 | 29 | 27 | 30 |
site35 | 21 | 20 | 44 | 25 | 26 | 31 | 30 | 30 | 29 | 9 | 26 | 30 | 29 | 30 | 30 |
site36 | 21 | 22 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 27 | 0 | 0 |
site37 | 22 | 27 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 30 | 30 | 30 | 30 | 30 |
site4 | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site5 | 29 | 0 | 0 | 0 | 0 | 0 | 0 | 29 | 30 | 0 | 30 | 30 | 26 | 29 | 30 |
site6 | 30 | 19 | 0 | 27 | 29 | 0 | 0 | 0 | 0 | 0 | 0 | 30 | 30 | 0 | 30 |
site7 | 30 | 20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 25 | 0 | 0 |
site8 | 30 | 0 | 0 | 0 | 29 | 27 | 30 | 30 | 30 | 27 | 30 | 30 | 30 | 30 | 29 |
site9 | 35 | 28 | 13 | 27 | 25 | 0 | 0 | 0 | 0 | 0 | 30 | 0 | 30 | 0 | 0 |
site38 | 0 | 25 | 0 | 21 | 0 | 31 | 0 | 0 | 30 | 0 | 29 | 0 | 0 | 28 | 30 |
site39 | 0 | 22 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
site40 | 0 | 21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 23 | 30 | 31 | 0 | 30 |
Visualizing the data can be easy with a built-in shiny app function. To see an example shiny app you can go here, but to visualize the data we have cleaned up in this vignette, we can run the following code.
A good combination to examine in this shiny app is:
- y-axis: prop_cover
- x-axis: Field.Season
- color: TimeBlock (treat as discrete)
- facet: HD_Cat
- group by: Field.Season, TimeBlock, Site and HD_Cat
- view as a box plot
visualize_app(data = AllQuads_cropclean, xaxis = colnames(AllQuads_cropclean[,1:13]), yaxis = "prop_cover")