In the previous week we learned how to clean and prepare data with several {dplyr} functions – select(), filter(), summarise(), mutate(), group_by(), case_when(), and so on. In this module we will tackle other challenging problems and lean on another package, {tidyr}, to do so. This package has just four core functions – separate(), unite(), pivot_longer(), and pivot_wider() – but each of them packs a wallop.
You will, at times, end up with columns that contain multiple pieces of information, all mashed up into some alphanumeric string or sequence of numbers. separate() allows us to split this mashed-up column into specific pieces. For example, here are some data from the Census Bureau:
library(tidyverse)

read_csv(
  "https://www2.census.gov/programs-surveys/popest/datasets/2010-2018/metro/totals/cbsa-est2018-alldata.csv"
) -> cbsa
cbsa %>%
  select(NAME) %>%
  glimpse()
## Observations: 2,789
## Variables: 1
## $ NAME <chr> "Abilene, TX", "Callahan County, TX", "Jones County, TX", "Taylo…
This data-set contains population estimates for CBSAs – core-based statistical areas. What are these?
Metropolitan and Micropolitan Statistical Areas are collectively referred to as Core-Based Statistical Areas. Metropolitan statistical areas have at least one urbanized area of 50,000 or more population, plus adjacent territory that has a high degree of social and economic integration with the core as measured by commuting ties. Micropolitan statistical areas are a new set of statistical areas that have at least one urban cluster of at least 10,000 but less than 50,000 population, plus adjacent territory that has a high degree of social and economic integration with the core as measured by commuting ties. Metropolitan and micropolitan statistical areas are defined in terms of whole counties or county equivalents, including the six New England states. As of June 6, 2003, there are 362 metropolitan statistical areas and 560 micropolitan statistical areas in the United States. Source
Look at the column called NAME and note how it combines the name of the area with the state's abbreviated name – "Abilene, TX", "Callahan County, TX", "Jones County, TX", and so on. This is a common issue with many census data-sets, so it would be nice to be able to split up this NAME column into two pieces – placename ("Abilene", "Callahan County", etc.) and stateabb ("TX", "TX", etc.). We do this below, with the separation occurring where a "," is seen in NAME.
cbsa %>%
  separate(
    col = NAME,
    into = c("placename", "stateabb"),
    sep = ",",
    remove = FALSE
  ) -> cbsa

cbsa %>%
  select(NAME, placename, stateabb) %>%
  head()
## NAME placename stateabb
## 1 Abilene, TX Abilene TX
## 2 Callahan County, TX Callahan County TX
## 3 Jones County, TX Jones County TX
## 4 Taylor County, TX Taylor County TX
## 5 Akron, OH Akron OH
## 6 Portage County, OH Portage County OH
Here is what each piece of code is doing:
code | what it does … |
---|---|
col = | identifies the column to be separated |
into = | creates the names for the new columns that will result |
sep = | indicates where the separation should occur |
remove = | indicates whether the column to be separated should be removed from the data-set or retained once the new columns have been created. Setting it equal to FALSE will keep the original column, TRUE will remove it. |
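To see these arguments in isolation, here is a minimal, self-contained sketch on a made-up two-row data frame (the values are invented, not read from the census file). Note that using sep = ", " – comma plus space – keeps a stray leading blank out of the second piece:

```r
library(tidyr)

# Made-up data frame with a combined "place, state" column
toy <- data.frame(NAME = c("Abilene, TX", "Akron, OH"))

separate(
  toy,
  col = NAME,                          # column to split
  into = c("placename", "stateabb"),   # names of the new columns
  sep = ", ",                          # split at the comma-plus-space
  remove = FALSE                       # keep the original NAME column
)
```

The result has three columns: the original NAME plus the new placename and stateabb.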
What if the column to be separated were made up of numbers rather than text? Take the STCOU column, for instance. These look like numbers, right? Except these numbers are really identifiers – FIPS codes, to be exact – with the first two digits flagging what state this area is in and the next three digits identifying the area itself. Ohio's FIPS code, for instance, is 39, and Portage County's FIPS code is 133. So here it would be nice to create two new columns, one with the state FIPS code (stfips) and the second with the county FIPS code (coufips). We do this below, but this time setting sep = 2 because we want the separation to happen after the second digit.
cbsa %>%
  separate(
    col = STCOU,
    into = c("stfips", "coufips"),
    sep = 2,
    remove = FALSE
  ) -> cbsa

cbsa %>%
  select(STCOU, stfips, coufips) %>%
  head()
## STCOU stfips coufips
## 1 NA <NA> <NA>
## 2 48059 48 059
## 3 48253 48 253
## 4 48441 48 441
## 5 NA <NA> <NA>
## 6 39133 39 133
unite() is the very opposite of separate() – two or more columns are united into ONE column. For example, take the file I am reading in as coudf. This file has similar content to what we read in for the CBSAs, but this one has data for counties and states.
read_csv(
  "https://www2.census.gov/programs-surveys/popest/datasets/2010-2018/counties/totals/co-est2018-alldata.csv"
) -> coudf
Let us filter this file so that we retain rows only for counties, saving the result as coudf2. I am doing this with filter(COUNTY != "000") because the state rows are the ones with COUNTY == "000".

coudf %>%
  filter(COUNTY != "000") -> coudf2
## STNAME CTYNAME
## 1 Alabama Alabama
## 2 Alabama Autauga County
## 3 Alabama Baldwin County
## 4 Alabama Barbour County
## 5 Alabama Bibb County
## 6 Alabama Blount County
Now I want to combine the county name (CTYNAME) and the state name (STNAME) into a single column, with the two names separated by a comma and a single white-space, i.e., by ", ".
coudf2 %>%
  unite(
    col = "countystate",
    c("CTYNAME", "STNAME"),
    sep = ", ",
    remove = FALSE
  ) -> coudf2

coudf2 %>%
  select(CTYNAME, STNAME, countystate) %>%
  head()
## CTYNAME STNAME countystate
## 1 Autauga County Alabama Autauga County, Alabama
## 2 Baldwin County Alabama Baldwin County, Alabama
## 3 Barbour County Alabama Barbour County, Alabama
## 4 Bibb County Alabama Bibb County, Alabama
## 5 Blount County Alabama Blount County, Alabama
## 6 Bullock County Alabama Bullock County, Alabama
The key elements here are:
code | what it does … |
---|---|
col = | identifies the new column to be created |
c(“..”) | identifies the columns to be combined, as in c(“column1”, “column2”, “column3”) |
sep = | indicates if we want the merged elements to be separated in some manner. Here we are using ", " to separate with a comma followed by a single white-space, but we could have used any separator or no separator at all |
remove = | indicates if we want the original columns deleted (TRUE) or not (FALSE) |
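The same arguments can be tried on a tiny made-up data frame (the values are invented, not the census rows) to see unite() in isolation:

```r
library(tidyr)

# Made-up county/state columns to combine
toy <- data.frame(
  CTYNAME = c("Autauga County", "Baldwin County"),
  STNAME  = c("Alabama", "Alabama")
)

unite(
  toy,
  col = "countystate",      # name of the new, combined column
  c("CTYNAME", "STNAME"),   # columns to combine, in order
  sep = ", ",               # glue them with a comma and a space
  remove = FALSE            # keep the original columns
)
```

The new countystate column holds, e.g., "Autauga County, Alabama".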
If I look at the original CBSA file cbsa, I see that it has been set up very oddly. In particular, starting with column 6 we have a jumble of information. Let us keep only a few columns to see what the current layout looks like.
read_csv(
  "https://www2.census.gov/programs-surveys/popest/datasets/2010-2018/metro/totals/cbsa-est2018-alldata.csv"
) -> cbsa

cbsa %>%
  select(c(4, 8:16)) -> cbsa01

cbsa01 %>%
  head()
## # A tibble: 6 x 10
## NAME POPESTIMATE2010 POPESTIMATE2011 POPESTIMATE2012 POPESTIMATE2013
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Abil… 165583 166616 167447 167472
## 2 Call… 13513 13511 13488 13501
## 3 Jone… 20237 20266 19870 20034
## 4 Tayl… 131833 132839 134089 133937
## 5 Akro… 703035 703123 702080 703625
## 6 Port… 161389 161857 161375 161691
## # … with 5 more variables: POPESTIMATE2014 <dbl>, POPESTIMATE2015 <dbl>,
## # POPESTIMATE2016 <dbl>, POPESTIMATE2017 <dbl>, POPESTIMATE2018 <dbl>
Why did they not set up the data in such a way that it had the following structure? This would make a lot more sense than having each year be a column all its own.
NAME | YEAR | POPULATION |
---|---|---|
Abilene, TX | 2010 | 165583 |
Abilene, TX | 2011 | 166616 |
Abilene, TX | 2012 | 167447 |
…. | …. | …. |
Callahan County, TX | 2010 | 13513 |
Callahan County, TX | 2011 | 13511 |
Callahan County, TX | 2012 | 13488 |
…. | …. | …. |
Well, we can easily create the proper structure for the data-set, as shown below.
cbsa01 %>%
  group_by(NAME) %>%
  pivot_longer(
    cols = 2:10,
    names_to = "variable",
    values_to = "POPULATION"
  ) -> cbsa01.long

cbsa01.long %>%
  head()
## # A tibble: 6 x 3
## # Groups: NAME [2,787]
## NAME variable POPULATION
## <fct> <chr> <int>
## 1 Abilene, TX POPESTIMATE2010 165583
## 2 Abilene, TX POPESTIMATE2011 166616
## 3 Abilene, TX POPESTIMATE2012 167447
## 4 Abilene, TX POPESTIMATE2013 167472
## 5 Abilene, TX POPESTIMATE2014 168355
## 6 Abilene, TX POPESTIMATE2015 169704
code | what it does … |
---|---|
names_to = | identifies the name of the new column that will be created |
values_to = | identifies the name of the new column in which values will be stored |
2:10 | identifies the columns that will be pivoted from wide to long |
group_by() | holds unique combinations of whatever column names you put in group_by() fixed while it pivots the other columns |
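As a self-contained sketch (a made-up two-row data frame, not the full cbsa01 file), the same pivot also works with a tidyselect helper such as starts_with() in place of the positional 2:10, which is a bit safer if the column order ever changes:

```r
library(tidyr)

# Made-up wide data: one column per year
wide <- data.frame(
  NAME            = c("Abilene, TX", "Akron, OH"),
  POPESTIMATE2010 = c(165583, 703035),
  POPESTIMATE2011 = c(166616, 703123)
)

pivot_longer(
  wide,
  cols      = starts_with("POPESTIMATE"),  # which columns to stack
  names_to  = "variable",                  # old column names land here
  values_to = "POPULATION"                 # old cell values land here
)
```

Two rows by two year-columns pivot into four rows of NAME, variable, POPULATION.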
I still need to clean up the variable column so that it only shows the four-digit year rather than POPESTIMATE2010, and so on. Let us do this next.
cbsa01.long %>%
  separate(
    col = variable,
    into = c("todiscard", "toyear"),
    sep = 11,
    remove = TRUE
  ) -> cbsa01.long2

cbsa01.long2 %>%
  mutate(YEAR = as.numeric(toyear)) %>%
  select(c(NAME, YEAR, POPULATION)) -> cbsa01.long3

cbsa01.long3 %>%
  head()
## # A tibble: 6 x 3
## # Groups: NAME [2,787]
## NAME YEAR POPULATION
## <fct> <dbl> <int>
## 1 Abilene, TX 2010 165583
## 2 Abilene, TX 2011 166616
## 3 Abilene, TX 2012 167447
## 4 Abilene, TX 2013 167472
## 5 Abilene, TX 2014 168355
## 6 Abilene, TX 2015 169704
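As an aside, recent versions of tidyr let pivot_longer() do this separate()/mutate() cleanup in one step, via its names_prefix and names_transform arguments. A sketch on made-up data (the column names are invented to mirror the census file):

```r
library(tidyr)

wide <- data.frame(
  NAME            = "Abilene, TX",
  POPESTIMATE2010 = 165583,
  POPESTIMATE2011 = 166616
)

pivot_longer(
  wide,
  cols            = starts_with("POPESTIMATE"),
  names_to        = "YEAR",
  names_prefix    = "POPESTIMATE",           # strip the text, keep the year
  names_transform = list(YEAR = as.numeric), # store YEAR as a number
  values_to       = "POPULATION"
)
```

This lands a numeric YEAR column directly, with no intermediate todiscard/toyear step.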
Now, let us say the data-set was a different one, perhaps the one shown below. This data-set comes from the 2017 American Community Survey and, along with state FIPS codes (GEOID) and state names (NAME), it contains data on income = median yearly income, rent = median monthly rent, and moe = the margin of error at the 90% confidence level.
## # A tibble: 6 x 5
## GEOID NAME variable estimate moe
## <chr> <chr> <chr> <dbl> <dbl>
## 1 01 Alabama income 24476 136
## 2 01 Alabama rent 747 3
## 3 02 Alaska income 32940 508
## 4 02 Alaska rent 1200 13
## 5 04 Arizona income 27517 148
## 6 04 Arizona rent 972 4
Notice here that the setup looks weird because two different variables have been combined in a single column. Instead, the data-set should have been set up as follows:
GEOID | NAME | income | rent | moe_income | moe_rent |
---|---|---|---|---|---|
01 | Alabama | 24476 | 747 | 136 | 3 |
02 | Alaska | 32940 | 1200 | 508 | 13 |
04 | Arizona | 27517 | 972 | 148 | 4 |
… | … | … | … | … | … |
Well, this can be done as well, with the pivot_wider() function that takes us from the "long" format to the "wide" format.
us_rent_income %>%
  group_by(GEOID, NAME) %>%
  pivot_wider(
    names_from = variable,
    values_from = c(estimate, moe)
  ) -> usri.wide

usri.wide %>%
  head()
## # A tibble: 6 x 6
## # Groups: GEOID, NAME [6]
## GEOID NAME estimate_income estimate_rent moe_income moe_rent
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 01 Alabama 24476 747 136 3
## 2 02 Alaska 32940 1200 508 13
## 3 04 Arizona 27517 972 148 4
## 4 05 Arkansas 23789 709 165 5
## 5 06 California 29454 1358 109 3
## 6 08 Colorado 32401 1125 109 5
code | what it does … |
---|---|
names_from = | identifies the column from which unique values will be taken to create the names of the new columns that will result |
values_from = | identifies the column(s) from which the values should be assigned to the new columns that will result |
group_by() | holds unique value combinations of whatever column names you put in group_by() fixed while it pivots the rows to new columns |
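Here is the same reshaping on a made-up one-state slice of such data, as a self-contained sketch. Note that in a plain call pivot_wider() treats every column not named in names_from/values_from (here GEOID and NAME) as identifying columns, so the group_by() step is not strictly required:

```r
library(tidyr)

# Made-up long data patterned on us_rent_income
long <- data.frame(
  GEOID    = c("01", "01"),
  NAME     = c("Alabama", "Alabama"),
  variable = c("income", "rent"),
  estimate = c(24476, 747),
  moe      = c(136, 3)
)

pivot_wider(
  long,
  names_from  = variable,          # "income"/"rent" become column names
  values_from = c(estimate, moe)   # both value columns get spread
)
```

The two long rows collapse into one wide row with estimate_income, estimate_rent, moe_income, and moe_rent.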
Here is another example, this time with the cbsa data, but showing you how we might use a combination of pivot_longer() and pivot_wider().
cbsa %>%
  select(3:5, 8:88) %>%
  group_by(NAME) %>%
  pivot_longer(
    cols = 4:84,
    names_to = "variable",
    values_to = "estimate"
  ) -> cbsa.01

cbsa.01 %>%
  head()
## # A tibble: 6 x 5
## # Groups: NAME [2,787]
## STCOU NAME LSAD variable estimate
## <int> <fct> <fct> <chr> <int>
## 1 NA Abilene, TX Metropolitan Statistical Area POPESTIMATE2010 165583
## 2 NA Abilene, TX Metropolitan Statistical Area POPESTIMATE2011 166616
## 3 NA Abilene, TX Metropolitan Statistical Area POPESTIMATE2012 167447
## 4 NA Abilene, TX Metropolitan Statistical Area POPESTIMATE2013 167472
## 5 NA Abilene, TX Metropolitan Statistical Area POPESTIMATE2014 168355
## 6 NA Abilene, TX Metropolitan Statistical Area POPESTIMATE2015 169704
Now I will clean up the contents of cbsa.01 so that year is a separate column.
cbsa.01 %>%
  separate(
    col = "variable",
    into = c("vartype", "year"),
    sep = "(?=[[:digit:]])",
    extra = "merge",
    remove = FALSE
  ) -> cbsa.02

cbsa.02 %>%
  head()
## # A tibble: 6 x 7
## # Groups: NAME [2,787]
## STCOU NAME LSAD variable vartype year estimate
## <int> <fct> <fct> <chr> <chr> <chr> <int>
## 1 NA Abilene, … Metropolitan Statisti… POPESTIMATE2… POPESTIM… 2010 165583
## 2 NA Abilene, … Metropolitan Statisti… POPESTIMATE2… POPESTIM… 2011 166616
## 3 NA Abilene, … Metropolitan Statisti… POPESTIMATE2… POPESTIM… 2012 167447
## 4 NA Abilene, … Metropolitan Statisti… POPESTIMATE2… POPESTIM… 2013 167472
## 5 NA Abilene, … Metropolitan Statisti… POPESTIMATE2… POPESTIM… 2014 168355
## 6 NA Abilene, … Metropolitan Statisti… POPESTIMATE2… POPESTIM… 2015 169704
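The sep = "(?=[[:digit:]])" used above is worth a closer look: it is a zero-width regular-expression lookahead that matches the position just before a digit, so the split consumes no characters, and extra = "merge" folds everything from the first split point onward into the second piece. A minimal sketch on made-up strings:

```r
library(tidyr)

# Made-up variable names that mix letters and a trailing year
toy <- data.frame(variable = c("POPESTIMATE2010", "NPOPCHG2011"))

separate(
  toy,
  col    = variable,
  into   = c("vartype", "year"),
  sep    = "(?=[[:digit:]])",  # split just before the first digit
  extra  = "merge",            # keep all remaining digits together in 'year'
  remove = FALSE
)
```

Without extra = "merge", the lookahead would match before every digit and the later pieces would be dropped with a warning.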
Now the final flip to wide format …
cbsa.02 %>%
  select(c(2, 5:7)) %>%
  group_by(NAME, year) %>%
  pivot_wider(
    names_from = "vartype",
    values_from = "estimate"
  ) -> cbsa.03

cbsa.03 %>%
  glimpse()
## Observations: 25,083
## Variables: 11
## Groups: NAME, year [25,083]
## $ NAME <fct> "Abilene, TX", "Abilene, TX", "Abilene, TX", "Abilen…
## $ year <chr> "2010", "2011", "2012", "2013", "2014", "2015", "201…
## $ POPESTIMATE <list> [165583, 166616, 167447, 167472, 168355, 169704, 17…
## $ NPOPCHG <list> [337, 1033, 831, 25, 883, 1349, 314, 498, 935, -33,…
## $ BIRTHS <list> [540, 2295, 2358, 2390, 2382, 2417, 2379, 2427, 238…
## $ DEATHS <list> [406, 1506, 1587, 1694, 1598, 1698, 1726, 1705, 173…
## $ NATURALINC <list> [134, 789, 771, 696, 784, 719, 653, 722, 642, -29, …
## $ INTERNATIONALMIG <list> [84, 205, 516, 361, 419, 484, 388, 325, 282, 0, 4, …
## $ DOMESTICMIG <list> [124, 54, -448, -1051, -301, 162, -723, -544, 19, -…
## $ NETMIG <list> [208, 259, 68, -690, 118, 646, -335, -219, 301, -3,…
## $ RESIDUAL <list> [-5, -15, -8, 19, -19, -16, -4, -5, -8, -1, -1, -2,…
Using the cmhflights data from last week, create a column that unites the three columns Year, Month, and DayofMonth into a single column that we will name date_of_flight. This column should separate the three fields by "-".
Hint: You will have to use load(...) with here(...).
Sticking with cmhflights, separate OriginCityName into two new columns, origin_city and origin_state. Do the same for DestCityname, calling the new columns destination_city and destination_state, respectively. Both city columns should only display the name of the city, while both state columns should only display the abbreviated state name (for example, "CA", "OH", etc.).
Tidy the weather data such that the resulting data-set, called wdf, has the days (the d1-d31 columns) as rows and TMIN and TMAX as columns. The end result should be as shown below:
read.delim(
  file = "http://stat405.had.co.nz/data/weather.txt",
  stringsAsFactors = FALSE
) -> weather
id | year | month | days | TMAX | TMIN |
---|---|---|---|---|---|
MX000017004 | 2010 | 1 | d1 | NA | NA |
MX000017004 | 2010 | 1 | d2 | NA | NA |
MX000017004 | 2010 | 1 | d3 | NA | NA |
MX000017004 | 2010 | 1 | d4 | NA | NA |
MX000017004 | 2010 | 1 | d5 | NA | NA |
MX000017004 | 2010 | 1 | d6 | NA | NA |
MX000017004 | 2010 | 1 | d7 | NA | NA |
MX000017004 | 2010 | 1 | d8 | NA | NA |
MX000017004 | 2010 | 1 | d9 | NA | NA |
MX000017004 | 2010 | 1 | d10 | NA | NA |