Working with dates and times is not as simple as it looks, and the reasons are as many as they are diverse. Let us spell out a few of the major ones. For one, dates come in all shapes and sizes, from text that looks like “Monday, Jan 10, 2010” to strings like “2020-28-01 12:49”, but no matter what software you use, you have to be able to convert all dates into a standard format.
Second, if you need to calculate how much time has passed between events (for example, how many days go by before a patient returns to the Emergency Room (ER), how many days pass between Covid-19 deaths in an infected population, how many hours are needed to fly from Columbus to San Francisco, and so on), you need to be able to move between months, days, hours, etc. with ease, AND calculate the length of time in a way that automatically adjusts for leap years.
There are many more reasons I could advance, but we might as well start working with dates. First up, some mangled date entries, and we’ll see how to parse them into correct date formats!
"20171217" -> today1
"2017-12-17" -> today2
"2017 December 17" -> today3
"20171217143241" -> today4
"2017 December 17 14:32:41" -> today5
"December 17 2017 14:32:41" -> today6
"17-Dec, 2017 14:32:41" -> today7
Now we fix them up!
library(tidyverse)
library(lubridate)
ymd(today1) -> date1
ymd(today2) -> date2
ymd(today3) -> date3
date1; date2; date3
## [1] "2017-12-17"
## [1] "2017-12-17"
## [1] "2017-12-17"
today1, today2, and today3 all had the same structure of year-month-day, and so ymd() works to get the format right. today4 has year-month-day-hours-minutes-seconds, so we’ll have to do this one slightly differently. The same thing works for today5 as well.
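Presumably via ymd_hms(), something like this (date4 and date5 are names I am assuming):
ymd_hms(today4) -> date4 # ymd_hms() parses year-month-day hours:minutes:seconds
ymd_hms(today5) -> date5
date4; date5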
## [1] "2017-12-17 14:32:41 UTC"
## [1] "2017-12-17 14:32:41 UTC"
today6 has a slightly different format, month-day-year-hours-minutes-seconds, that is read in thus:
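A minimal sketch, assuming mdy_hms():
mdy_hms(today6) -> date6 # date6 is an assumed name
date6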
## [1] "2017-12-17 14:32:41 UTC"
today7 has a slightly different format, day-month-year-hours-minutes-seconds, that is read in thus:
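A minimal sketch, assuming dmy_hms():
dmy_hms(today7) -> date7 # date7 is an assumed name
date7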
## [1] "2017-12-17 14:32:41 UTC"
Now we should be able to start working with some date variables, and the ideal candidate would be the flight date column in our cmhflights data. So the first thing we will do is load that dataset so that we can work with it.
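A minimal sketch of that load step, assuming the data sit in an .RData file (the file name below is a placeholder; point it at wherever your copy of cmhflights lives):
load("cmhflights.RData") # hypothetical file name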
I dislike this uppercase-lowercase mixture they have in the column names and so will get rid of it as shown below, making everything nice and lowercase. This is done with the janitor package’s clean_names() command. I am also going to use select() to keep only a handful of columns, since keeping 100+ is of no value.
library(janitor)
cmhflights %>%
clean_names() %>%
select(
year, month, dayof_month, day_of_week, flight_date, carrier,
tail_num, flight_num, origin_city_name, dest_city_name,
dep_time, dep_delay, arr_time, arr_delay, cancelled, diverted
) -> cmh.df
The first thing I want to do now is to label the days of the week and the months, and then also create that flag for weekend versus weekday. Here goes:
cmh.df %>%
mutate(
dayofweek = wday(
day_of_week,
abbr = FALSE,
label = TRUE
),
monthname = month(
month,
abbr = FALSE,
label = TRUE
),
weekend = case_when(
dayofweek %in% c("Saturday", "Sunday") ~ "Weekend",
TRUE ~ "Weekday"
)
) -> cmh.df
Now let us ask some questions: (a) What month had the most flights? (b) What day of the week had the most flights? (c) What about weekends; did weekends have more flights than weekdays? (d) With respect to (c), does whatever pattern we see vary by month or does month not matter?
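One way to get these counts is with count(), sorted in descending order; this is a sketch, not necessarily the original code:
cmh.df %>% count(monthname, sort = TRUE) # (a) flights by month
cmh.df %>% count(dayofweek, sort = TRUE) # (b) flights by day of the week
cmh.df %>% count(weekend, sort = TRUE) # (c) weekends versus weekdays
cmh.df %>% count(monthname, weekend, sort = TRUE) # (d) weekend/weekday by month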
## # A tibble: 9 x 2
## monthname n
## <ord> <int>
## 1 July 4295
## 2 August 4279
## 3 June 4138
## 4 April 4123
## 5 March 4101
## 6 May 4098
## 7 September 3789
## 8 January 3757
## 9 February 3413
## # A tibble: 7 x 2
## dayofweek n
## <ord> <int>
## 1 Wednesday 5435
## 2 Thursday 5417
## 3 Sunday 5395
## 4 Tuesday 5368
## 5 Monday 5284
## 6 Saturday 4892
## 7 Friday 4202
## # A tibble: 2 x 2
## weekend n
## <chr> <int>
## 1 Weekday 25706
## 2 Weekend 10287
## # A tibble: 18 x 3
## monthname weekend n
## <ord> <chr> <int>
## 1 August Weekday 3165
## 2 March Weekday 3047
## 3 June Weekday 3023
## 4 July Weekday 2918
## 5 May Weekday 2908
## 6 April Weekday 2876
## 7 September Weekday 2760
## 8 January Weekday 2557
## 9 February Weekday 2452
## 10 July Weekend 1377
## 11 April Weekend 1247
## 12 January Weekend 1200
## 13 May Weekend 1190
## 14 June Weekend 1115
## 15 August Weekend 1114
## 16 March Weekend 1054
## 17 September Weekend 1029
## 18 February Weekend 961
So most flights are on weekdays, but weekend flights lead in July while weekday flights lead in August.
But wait a minute: if I can calculate these frequencies, why not do it by the hour? That may allow us to answer such questions as: What hour of the day has the most flights? The most delays? What about by airline? What if we push this to the minute of the hour?
Well, first we will have to create a new variable that marks just the hour of the day in the 24-hour cycle. But to do this we will first need to create a single flight_date_time column that will be in the ymd_hms format. How? With unite().
cmh.df %>%
unite(
col = "flight_date_time",
c(flight_date, dep_time),
sep = ":",
remove = TRUE
) -> cmh.df
Okay, now we create flt_date_time; note that the seconds here are automatically coerced to be 00.
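A sketch of that step, assuming ymd_hm() does the parsing (it pads the missing seconds with 00):
cmh.df %>%
  mutate(flt_date_time = ymd_hm(flight_date_time)) -> cmh.df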
Now we extract just the hour of the day the flight was scheduled to depart.
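Again a sketch, using hour():
cmh.df %>%
  mutate(flt_hour = hour(flt_date_time)) -> cmh.df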
All righty then, now we start digging in. What hour has the most flights, and does this vary by the day of the week? By the month?
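As before, count() gets us there (a sketch matching the output shown below):
cmh.df %>% count(flt_hour, sort = TRUE)
cmh.df %>% count(monthname, flt_hour, sort = TRUE)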
## # A tibble: 24 x 2
## flt_hour n
## <int> <int>
## 1 10 2626
## 2 17 2454
## 3 7 2448
## 4 16 2395
## 5 15 2392
## 6 14 2390
## 7 8 2331
## 8 9 2283
## 9 18 2268
## 10 6 2106
## # … with 14 more rows
## # A tibble: 199 x 3
## monthname flt_hour n
## <ord> <int> <int>
## 1 May 10 376
## 2 August 8 328
## 3 June 10 325
## 4 March 17 323
## 5 July 8 319
## 6 May 15 317
## 7 April 17 314
## 8 May 18 314
## 9 March 7 313
## 10 January 16 312
## # … with 189 more rows
It looks like 10:00 and then 17:00 would be your best bets if you were looking to catch a flight and wanted as many options as possible. On the flip side, these might also be the times when flights get delayed more often, precisely because so many flights are scheduled at these hours!
Now I want to ask the question about delays: Are median delays higher at certain hours?
cmh.df %>%
group_by(flt_hour) %>%
summarise(md.delay = median(dep_delay, na.rm = TRUE)) %>%
arrange(-md.delay)
## # A tibble: 24 x 2
## flt_hour md.delay
## <int> <dbl>
## 1 3 290
## 2 2 233
## 3 1 174
## 4 0 137
## 5 23 49
## 6 21 6
## 7 18 2
## 8 19 1
## 9 15 0
## 10 16 0
## # … with 14 more rows
cmh.df %>%
group_by(flt_hour) %>%
summarise(md.delay = median(dep_delay, na.rm = TRUE)) %>%
arrange(md.delay)
## # A tibble: 24 x 2
## flt_hour md.delay
## <int> <dbl>
## 1 5 -4
## 2 6 -4
## 3 7 -4
## 4 8 -3
## 5 9 -3
## 6 10 -2
## 7 11 -2
## 8 12 -2
## 9 13 -2
## 10 14 -2
## # … with 14 more rows
The expected result: the shortest median delay is at 5 AM, and delays increase by the hour. Bottom line: fly as early as you can. Might this vary by destination?
cmh.df %>%
group_by(dest_city_name, flt_hour) %>%
summarise(md.delay = median(dep_delay, na.rm = TRUE)) %>%
arrange(-md.delay)
## # A tibble: 418 x 3
## # Groups: dest_city_name [26]
## dest_city_name flt_hour md.delay
## <chr> <int> <dbl>
## 1 Newark, NJ 6 1046
## 2 Newark, NJ 7 688
## 3 Denver, CO 14 489
## 4 Houston, TX 7 420.
## 5 Minneapolis, MN 0 381
## 6 Atlanta, GA 1 348
## 7 New York, NY 0 337
## 8 Tampa, FL 1 324
## 9 Nashville, TN 23 323
## 10 Fort Myers, FL 0 297
## # … with 408 more rows
Avoid flying to Newark, NJ, even at 6 or 7 AM. Might these delays vary by airline?
cmh.df %>%
group_by(carrier, dest_city_name, flt_hour) %>%
summarise(md.delay = median(dep_delay, na.rm = TRUE)) %>%
arrange(-md.delay)
## # A tibble: 656 x 4
## # Groups: carrier, dest_city_name [52]
## carrier dest_city_name flt_hour md.delay
## <chr> <chr> <int> <dbl>
## 1 EV Newark, NJ 6 1046
## 2 EV Chicago, IL 6 1024
## 3 EV Newark, NJ 7 688
## 4 DL Columbus, OH 5 526
## 5 F9 Denver, CO 14 489
## 6 DL Los Angeles, CA 15 481
## 7 AA Phoenix, AZ 15 463
## 8 EV Houston, TX 7 420.
## 9 UA Chicago, IL 0 394
## 10 DL Minneapolis, MN 0 381
## # … with 646 more rows
The worst early-morning delays are on EV flights to Newark and to Chicago.
Let us assume we are interested in seeing how much time elapses between successive flights of each aircraft seen in the data. We know we can identify each unique aircraft by its tail_num. So let us first see how many times each aircraft is seen and create a new column called n_flew. Some rows of data are missing flt_date_time and tail_num, so I will filter these out as well.
cmh.df %>%
filter( # eliminates all rows where either of these columns is missing
!is.na(tail_num),
!is.na(flt_date_time)
) %>%
group_by(tail_num) %>%
arrange(flt_date_time) %>% # each aircraft is now stacked by when it flew
mutate(n_flew = row_number()) %>% # each time an aircraft is seen it gets a number, 1, 2, 3, and so on ...
select(tail_num, flt_date_time, n_flew) %>%
arrange(-n_flew) -> cmh.df2 # N396SW is seen the most often in this data-set
cmh.df2 %>%
head()
## # A tibble: 6 x 3
## # Groups: tail_num [1]
## tail_num flt_date_time n_flew
## <chr> <dttm> <int>
## 1 N396SW 2017-08-23 10:07:00 73
## 2 N396SW 2017-08-23 08:07:00 72
## 3 N396SW 2017-08-19 08:20:00 71
## 4 N396SW 2017-08-18 15:24:00 70
## 5 N396SW 2017-08-06 21:43:00 69
## 6 N396SW 2017-08-06 18:53:00 68
So far so good; N396SW is the winner and has well earned its retirement.
Now we need to see how much time elapsed between flights, and this is just the difference between the preceding flt_date_time recorded and the most recent flt_date_time. As we do this, note that by default the time span (tspan) is calculated in seconds.
cmh.df2 %>%
group_by(tail_num) %>%
arrange(flt_date_time) %>%
mutate(
tspan = interval(
lag(flt_date_time, order_by = tail_num), flt_date_time
), # calculate the time span between successive flights recorded
tspan.minutes = as.duration(tspan)/dminutes(1), # convert tspan into minutes
tspan.hours = as.duration(tspan)/dhours(1), # convert tspan into hours
tspan.days = as.duration(tspan)/ddays(1), # convert tspan into days
tspan.weeks = as.duration(tspan)/dweeks(1) # convert tspan into weeks
) -> cmh.df2
cmh.df2 %>%
filter(tail_num == "N396SW")
## # A tibble: 73 x 8
## # Groups: tail_num [1]
## tail_num flt_date_time n_flew tspan tspan.minutes tspan.hours
## <chr> <dttm> <int> <dbl> <dbl> <dbl>
## 1 N396SW 2017-01-05 09:30:00 1 NA NA NA
## 2 N396SW 2017-01-05 12:19:00 2 10140 169 2.82
## 3 N396SW 2017-01-11 08:34:00 3 504900 8415 140.
## 4 N396SW 2017-01-11 10:44:00 4 7800 130 2.17
## 5 N396SW 2017-01-19 10:31:00 5 690420 11507 192.
## 6 N396SW 2017-01-19 14:28:00 6 14220 237 3.95
## 7 N396SW 2017-02-10 08:23:00 7 1878900 31315 522.
## 8 N396SW 2017-02-10 10:32:00 8 7740 129 2.15
## 9 N396SW 2017-02-15 15:20:00 9 449280 7488 125.
## 10 N396SW 2017-02-15 18:15:00 10 10500 175 2.92
## # … with 63 more rows, and 2 more variables: tspan.days <dbl>,
## # tspan.weeks <dbl>
Here, tspan is being converted into, say, minutes by dividing it by 60, into hours by dividing it by 60 x 60 = 3600, and so on. Note that dminutes(1) is a duration of one minute, so dividing by it expresses the time span in one-minute increments. Similarly for hours, days, and weeks. Thus if you divided by dhours(2) you would get the time span in 2-hour increments.
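To see this arithmetic in action, here is the first gap for N396SW recomputed by hand, using the two timestamps from the output above:
interval(ymd_hm("2017-01-05 09:30"), ymd_hm("2017-01-05 12:19")) -> gap
as.duration(gap) # 10140 seconds
as.duration(gap) / dminutes(1) # 169 minutes
as.duration(gap) / dhours(1) # 2.82 hours
as.duration(gap) / dhours(2) # 1.41 two-hour increments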
There is a lot more we could do with dates and times, but the things we have covered so far are the more common tasks we usually encounter.
The data below come from tidytuesday and provide information on accidents at theme parks. More of these data are available here. The data give you some details of where and when the accident occurred, and something about the injured party as well.
safer_parks.csv
variable | class | description |
---|---|---|
acc_id | double | Unique ID |
acc_date | character | Accident Date |
acc_state | character | Accident State |
acc_city | character | Accident City |
fix_port | character | . |
source | character | Source of injury report |
bus_type | character | Business type |
industry_sector | character | Industry sector |
device_category | character | Device category |
device_type | character | Device type |
tradename_or_generic | character | Common name of the device |
manufacturer | character | Manufacturer of device |
num_injured | double | Num injured |
age_youngest | double | Youngest individual injured |
gender | character | Gender of individual injured |
acc_desc | character | Description of accident |
injury_desc | character | Injury description |
report | character | Report URL |
category | character | Category of accident |
mechanical | double | Mechanical failure (binary NA/1) |
op_error | double | Operator error (binary NA/1) |
employee | double | Employee error (binary NA/1) |
notes | character | Additional notes |
Working with the safer_parks data, complete the following tasks.
Using acc_date, create a new date variable called idate that is a proper date column generated via {lubridate}.
Now create new columns for (i) the month of the accident, and (ii) the day of the week. These should not be abbreviated (i.e., we should see the values as ‘Monday’ instead of ‘Mon’, “July” instead of “Jul”). What month had the highest number of accidents? What day of the week had the highest number of accidents?
What if you look at days of the week by month? Does the same day of the week show up with the most accidents regardless of month or do we see some variation?
What were the five dates with the most accidents?
Using the Texas injury data, answer the following question: What ride was the safest? [Hint: For each ride (ride_name) you will need to calculate the number of days between accidents. The ride with the highest number of days is the safest.]
read_csv(
"https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-09-10/tx_injuries.csv"
) -> tx_injuries
tx_injuries.csv
variable | class | description |
---|---|---|
injury_report_rec | double | Unique Record ID |
name_of_operation | character | Company name |
city | character | City |
st | character | State (all TX) |
injury_date | character | Injury date - note there are some different formats |
ride_name | character | Ride Name |
serial_no | character | Serial number of ride |
gender | character | Gender of the injured individual |
age | character | Age of the injured individual |
body_part | character | Body part injured |
alleged_injury | character | Alleged injury - type of injury |
cause_of_injury | character | Approximate cause of the injury (free text) |
other | character | Anecdotal information in addition to cause of injury |
You should note that this assumes each ride was in operation for the same amount of time. If this is not true then our estimates will be unreliable.
These data (see below) come from this story: The next generation: The space race is dominated by new contenders. You have data on space missions over time, with dates of the launch, the launching agency/country, type of launch vehicle, and so on.
launches
variable | definition |
---|---|
tag | Harvard or COSPAR id of launch |
JD | Julian Date of launch |
launch_date | date of launch |
launch_year | year of launch |
type | type of launch vehicle |
variant | variant of launch vehicle |
mission | space mission |
agency | launching agency |
state_code | launching agency’s state |
category | success (O) or failure (F) |
agency_type | type of agency |
Create a new column called date that stores launch_date as a proper date field in ymd format from {lubridate}.
Creating columns as needed, calculate and show the number of launches first by year, then by month, and then by day of the week. The result should be arranged in descending order of the number of launches.
How many launches were successful (O) versus failed (F) by country and year? The countries of interest will be state_code values of “CN”, “F”, “J”, “RU”, “SU”, “US”. You do not need to arrange your results in any order.