Take-Home Exercise 3

Author

Tan Wen Yang

Published

March 11, 2023

Modified

March 24, 2023

Background:

Singapore's HDB flats are a form of public housing designed to be affordable for the majority of the population. Over the years, however, HDB resale prices have steadily risen, influenced by factors such as government policy, locale, economic growth, and each flat's amenities and proximity to areas such as the CBD. While the government has policies in place to stabilize pricing, prices are still largely influenced by the characteristics of the areas surrounding each flat.

Packages used:

  • sf: Used for importing and processing geospatial data

  • tidyverse: A collection of packages to manipulate data

  • tmap: Used for creating thematic maps

  • onemapsgapi: Used to query Singapore-specific spatial point data.

  • httr: Used for calling GET/POST requests for APIs

  • jsonlite: Used for parsing JSON files

  • olsrr: Used for building the Ordinary least squares model

  • GWmodel: A collection of spatial statistical methods, mainly used for GWR and its summary methods.

  • dotenv: Reading the API key from environment file

  • matrixStats: A convenient way to perform functions over rows and columns of matrices

  • spdep: Used to create spatial weights matrix objects.

  • SpatialML: Implements spatial machine learning methods; mainly used for bandwidth calculation and fitting the geographical random forest.

  • Metrics: Used for calculating RMSE in our case.

  • corrplot: Used for plotting the correlation matrix; not loaded via pacman as there are some package issues

  • ggpubr: Used for data visualization and analysis

Loading in all the packages and data

pacman::p_load(sf, tidyverse, tmap, onemapsgapi, httr, jsonlite, olsrr, GWmodel, dotenv, matrixStats, spdep, SpatialML, Metrics, ggpubr)
#rsample

Geospatial Data:

Singapore National Boundary and Master Plan 2019 Subzone

  • The Singapore National Boundary is polygon feature data showing the national boundary.

  • The Master Plan 2019 subzone layer contains URA's 2019 subzone boundary information.

mpsz <- st_read(dsn="data/geospatial", "MPSZ-2019")
national_boundary <- st_read(dsn="data/geospatial", "CostalOutline")

Aspatial Data

  • Resale Flat prices

    • Training dataset: 1st January 2021 to 31st December 2022

    • Test dataset: 1st January 2023 to 28th February 2023

For our case, we will be using four-room flats for our prediction.

resale_flat <- read_csv("data/aspatial/resale-flat-prices-from-jan-2017-onwards.csv")

Locational Factors:

For our assignment, we will need to look at location for

  • Proximity to CBD

  • Proximity to Eldercare

  • Proximity to Food court/Hawker

  • Proximity to MRT

  • Proximity to Park

  • Proximity to ‘Good’ primary schools

  • Proximity to Shopping Malls

  • Proximity to supermarket

  • Number of Kindergartens within 350m

  • Number of childcare centers within 350m

  • Number of primary schools within 1km

We start by sourcing some of this data.

Geospatial Locational Source:

MRT/LRT and Bus Stop Locations

mrt_lrt <- st_read(dsn="data/geospatial", layer="RapidTransitSystemStation")
bus_stop <- st_read(dsn="data/geospatial", layer="BusStop")

Something is wrong: the MRT/LRT dataset gives us polygon geometry rather than points, despite the metadata saying it is point data.
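As a quick sanity check (a small snippet, not part of the original pipeline), we can list the geometry types present in the layer:

unique(st_geometry_type(mrt_lrt))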

OneMapSG Service API

We can extract the following according to their theme on OneMapSG API

  • Childcare (Queryname: childcare)

  • Eldercare (Queryname: eldercare)

  • Hawker Center (Queryname: hawkercentre)

  • Kindergarten (Queryname: Kindergartens)

  • Parks (Queryname: nationalparks)

Extra if we have time to think about:

  • Libraries (Queryname: libraries) [NUMBER WITHIN 350M]

  • Integrated Screening Programmes (queryname: moh_isp_clinics)

  • Tourist Attractions (Queryname: tourism) [NUMBER WITHIN 1KM]

Process used to create the shp files:

Courtesy of Megan's work. (The following code chunk is repeated to create a shp file for each of the themes above.)

#load_dot_env(file=".env")
#token <- Sys.getenv("TOKEN")
#themetibble <- get_theme(token, "themename")
#themetibble
#themesf <- st_as_sf(themetibble, coords=c("Lng", "Lat"), crs=4326)
#st_write(themesf, "themename.shp")

Load in all the shp data

childcare <- st_read(dsn="data/geospatial", layer="childcare")
eldercare <- st_read(dsn="data/geospatial", layer="eldercare")
hawker_centre <- st_read(dsn="data/geospatial", layer="hawkercentre")
kindergarten <- st_read(dsn="data/geospatial", layer="kindergartens")
parks <- st_read(dsn="data/geospatial", layer="nationalparks")
libraries <- st_read(dsn="data/geospatial", layer="libraries")
isp_clinics <- st_read(dsn="data/geospatial", layer="moh_isp_clinics")
tourism <- st_read(dsn="data/geospatial", layer="tourism")

External Sources

  • Supermarkets (Source: Dataportal)

  • Primary School (Wikipedia, Reverse Geocoded using OneMap API)

  • Top Primary Schools (We will pick the top 10 based on SchLah's dataset)

  • Malls (Taken from this dataset) [NOTE: SVY21]

supermarkets <- st_read("data/geospatial/supermarkets.kml")

Load Primary School Data with Lat/Long

The lat/long coordinates are taken from the OneMapSG API.

The top 10 schools are likewise derived from SchLah.

primary_sch <- read_csv("data/aspatial/primary_school_geo.csv")
top10_pri_sch <- read_csv("data/aspatial/top_10_primary_school_geo.csv")

Load Shopping Malls

shopping_mall <- read_csv("data/aspatial/shopping_mall.csv")

Data Wrangling (Geospatial)

List of tasks to do:

  • Convert some remaining data to Geospatial

  • Convert multipoint to point data (Removing the ‘z’) for supermarket

  • Convert all if not already in SVY21 into SVY21 (3414)

  • Remove all unnecessary columns

  • Check for null values

Convert all of our datasets from CSV to Geospatial (shp)

  • Primary Schools

  • Shopping Malls

For one primary school, the OneMap API cannot return the correct coordinates because the school's name is ‘CATHOLIC HIGH SCHOOL (PRIMARY)’. For the API to return the correct coordinates, the name has to be truncated to remove ‘(PRIMARY)’.
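A minimal sketch of that workaround (sub() is base R; the exact cleaning applied to the CSV may differ):

school_name <- "CATHOLIC HIGH SCHOOL (PRIMARY)"
# Strip the "(PRIMARY)" suffix, then trim the leftover trailing space
query_name <- trimws(sub("\\(PRIMARY\\)", "", school_name))
query_name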

primary_sch_sf <- st_as_sf(primary_sch, coords=c("LONG", "LAT"), crs=4326)
top_primary_sch_sf <- st_as_sf(top10_pri_sch, coords=c("LONG", "LAT"), crs=4326)
shopping_mall_sf <- st_as_sf(shopping_mall, coords=c("longitude", "latitude"), crs=4326)

Remove ‘Z-Dimension’ for Supermarket Data

supermarkets <- st_zm(supermarkets)

Check and change all EPSG codes for the geospatial data to EPSG 3414

childcare3414 <- st_transform(childcare, crs=3414)
eldercare3414 <- st_transform(eldercare, crs=3414)
hawker_centre3414 <- st_transform(hawker_centre, crs=3414)
kindergarten3414 <- st_transform(kindergarten, crs=3414)
parks3414 <- st_transform(parks, crs=3414)
libraries3414 <- st_transform(libraries, crs=3414)
isp_clinics3414 <- st_transform(isp_clinics, crs=3414)
tourism3414 <- st_transform(tourism, crs=3414)
primary_sch_sf_3414 <- st_transform(primary_sch_sf, crs=3414)
top_primary_sch_sf_3414 <- st_transform(top_primary_sch_sf, crs=3414)
shopping_mall_sf_3414 <- st_transform(shopping_mall_sf, crs=3414)

Check others:

st_crs(primary_sch_sf_3414)
st_crs(top_primary_sch_sf_3414)
st_crs(shopping_mall_sf_3414)
st_crs(supermarkets)

Looks like the supermarket data is still in WGS84; let's fix that.

supermarkets3414 <- st_transform(supermarkets, crs=3414)

Check to make sure…

st_crs(supermarkets3414)

Remove all unnecessary columns

After glimpsing at the sf dataframe, we are going to remove all useless columns.

  • Information such as addresses/descriptions are not necessary, as all we need are the point data

  • We will keep the name for identification purposes

We will keep name and geometry

childcare3414 <- childcare3414 %>%
  select(c('NAME', 'geometry'))

Check result

childcare3414

Same thing, name and geometry

eldercare3414 <- eldercare3414 %>%
  select(c('NAME', 'geometry'))

Check result

eldercare3414
hawker_centre3414 <- hawker_centre3414 %>%
  select(c('NAME', 'geometry'))
hawker_centre3414
isp_clinics3414 <- isp_clinics3414 %>%
  select(c('NAME', 'geometry'))
isp_clinics3414
kindergarten3414 <- kindergarten3414 %>%
  select(c('NAME', 'geometry'))
kindergarten3414
libraries3414 <- libraries3414 %>%
  select(c('NAME', 'geometry'))
libraries3414

We only keep STN_NAM_DE and geometry

Note that the file is in polygon format rather than point; we might have to deal with that.

https://stackoverflow.com/questions/52522872/r-sf-package-centroid-within-polygon
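Following the approach suggested in that thread, a sketch of the polygon-to-point conversion would look like the code below, assuming the polygons can be made valid (we will see later that this fails for this particular dataset):

# Convert station polygons to representative points (a sketch)
mrt_lrt_pts <- mrt_lrt %>%
  st_make_valid() %>%
  st_point_on_surface()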

mrt_lrt <- mrt_lrt %>%
  select(c('STN_NAM_DE', 'geometry'))

Check

mrt_lrt
parks3414 <- parks3414 %>%
  select(c('NAME', 'geometry'))
parks3414
primary_sch_sf_3414 <- primary_sch_sf_3414 %>%
  select(c('Primary school', 'geometry'))
primary_sch_sf_3414
supermarkets3414 <- supermarkets3414 %>%
  select(c('Name', 'geometry'))
supermarkets3414
tourism3414 <- tourism3414 %>%
  select(c('NAME', 'geometry'))
tourism3414

Dealing with the anomaly - MRT/LRT

Despite the metadata saying it is point data, the layer gives us polygons, so we need to convert the polygons into point data. Unfortunately, st_make_valid does not work here because of missing data within the polygons (there is an NA).

Thus, we'll take this chance to do some fine-tuning and make use of OneMapSG to return the correct coordinates. We will also remove all ‘depots’, as they are not considered MRT/LRT stations for our dataset.

mrt_lrt_new_set <- read_csv("data/aspatial/mrt_lrt_lat_long.csv")

Convert into an SF Object

mrt_lrt_ns_sf <- st_as_sf(mrt_lrt_new_set, coords=c("LONG", "LAT"), crs=4326)
mrt_lrt_ns_sf <- mrt_lrt_ns_sf %>%
  select(c('STN_NAM_DE', 'geometry'))
mrt_lrt_ns_sf_3414 <- st_transform(mrt_lrt_ns_sf, crs=3414)

View the map to make sure the mapping is correct

tmap_mode("view")+
  tm_shape(bus_stop)+
  tm_dots(col="purple", size=0.05)

Looks like we have some bus stops in Malaysia, which makes sense, as some Singapore bus services travel into Malaysia.

These stops are:

  • LARKIN TERMINAL (Malaysia) [46239]

  • KOTARAYA II TER [46609]

  • JB SENTRAL [47701]

  • JOHOR BAHRU CHECKPOINT (46211, 46219)

We don’t want them in our calculation, so we’ll remove them.

bus_stop_to_remove <- c(46239, 46609, 47701, 46211, 46219)
bus_stop_cleaned <- filter(bus_stop, !(BUS_STOP_N %in% bus_stop_to_remove))

Check the map again:

tm_shape(bus_stop_cleaned)+
  tm_dots(col="purple", size=0.05)

Convert Bus Stop from WGS84 to SVY21

bus_stop_cleaned_3414 <- st_transform(bus_stop_cleaned, crs = 3414)
st_crs(bus_stop_cleaned_3414)

Data Wrangling (Aspatial)

We will strictly be looking at Four Room Flats

  • Training Set: 1st January 2021 - 31st December 2022

  • Test Set: January - February 2023

resale_flat_sub <-  filter(resale_flat,flat_type == "4 ROOM") %>% 
              filter(month >= "2021-01" & month <= "2023-02")

Transforming the Data into usable data

Looking at the data, we now need to combine the block and street name to form the address, so that we can look up the postal code and retrieve the LAT/LONG.

We also need to calculate the remaining lease. Given that the remaining lease is recorded in ‘years + months’ format, we need to convert it into a single continuous number of months; for example, ‘61 years 04 months’ becomes 61 × 12 + 4 = 736 months.

rs_modified <- resale_flat_sub %>%
  mutate(address = paste(block, street_name),
         remaining_lease_yr = as.integer(str_sub(remaining_lease, 0, 2)),
         remaining_lease_mth = as.integer(str_sub(remaining_lease, 9, 11)))

Calculate the LEASE in Months and remove all intermediary columns

  • Convert all NA into 0

  • Sum up YEAR and MONTH and place them into a new column

  • Select only the columns we need.

rs_modified$remaining_lease_mth[is.na(rs_modified$remaining_lease_mth)] <- 0

rs_modified$remaining_lease_yr <- rs_modified$remaining_lease_yr * 12
rs_modified <- rs_modified %>% 
  mutate(remaining_lease_mths = rowSums(rs_modified[, c("remaining_lease_yr", "remaining_lease_mth")])) %>%
  select(month, town, address, block, street_name, flat_type, storey_range, floor_area_sqm, flat_model, 
         lease_commence_date, remaining_lease_mths, resale_price)

Glimpse at our data:

glimpse(rs_modified)

Retrieving the LAT/LONG Coordinates from OneMap.sg API

The following solution is courtesy of: Nor Aisyah

We first create a unique list of addresses

add_list <- sort(unique(rs_modified$address))

Now we will focus on getting the data needed.

In the following code chunk, the goal is to retrieve coordinates for a given search value using OneMap SG’s REST APIs. The code creates a dataframe to store the final retrieved coordinates and uses the httr package’s GET() function to make a GET request to https://developers.onemap.sg/commonapi/search.

The required variables to be included in the GET request are searchVal, returnGeom {Y/N}, and getAddrDetails {Y/N}.

The JSON response contains multiple fields, but we are only interested in the postal code and the coordinates (latitude and longitude). A new data frame row is created for each address during the loop, and the number of results returned is checked so that the right record is appended to the main data frame. Invalid addresses are also handled, and each row is appended to the main data frame using the rbind() function from base R.

retrieve_coords <- function(add_list){
  
  # Create a data frame to store all retrieved coordinates
  postal_coords <- data.frame()
    
  for (i in add_list){
    #print(i)

    r <- GET('https://developers.onemap.sg/commonapi/search?',
           query=list(searchVal=i,
                     returnGeom='Y',
                     getAddrDetails='Y'))
    data <- fromJSON(rawToChar(r$content))
    found <- data$found
    res <- data$results
    
    # Create a new data frame for each address
    new_row <- data.frame()
    
    # If single result, append 
    if (found == 1){
      postal <- res$POSTAL 
      lat <- res$LATITUDE
      lng <- res$LONGITUDE
      new_row <- data.frame(address= i, postal = postal, latitude = lat, longitude = lng)
    }
    
    # If multiple results, drop NIL and append top 1
    else if (found > 1){
      # Remove those with NIL as postal
      res_sub <- res[res$POSTAL != "NIL", ]
      
      # Set as NA first if no Postal
      if (nrow(res_sub) == 0) {
          new_row <- data.frame(address= i, postal = NA, latitude = NA, longitude = NA)
      }
      
      else{
        top1 <- head(res_sub, n = 1)
        postal <- top1$POSTAL 
        lat <- top1$LATITUDE
        lng <- top1$LONGITUDE
        new_row <- data.frame(address= i, postal = postal, latitude = lat, longitude = lng)
      }
    }

    else {
      new_row <- data.frame(address= i, postal = NA, latitude = NA, longitude = NA)
    }
    
    # Add the row
    postal_coords <- rbind(postal_coords, new_row)
  }
  return(postal_coords)
}

We call the function to get the coordinates

coords <- retrieve_coords(add_list)

Inspect the results

coords[(is.na(coords$postal) | is.na(coords$latitude) | is.na(coords$longitude) | coords$postal=="NIL"), ]

From the results, we can see that two addresses have missing postal codes. However, the postal code is not as relevant to our analysis as latitude and longitude.

The missing postal codes are:

  • 680215

  • 680216

rs_coords <- left_join(rs_modified, coords, by = c('address' = 'address'))

Write file to RDS

write_rds(rs_coords, "data/aspatial/rs_coords.rds")

Read the RDS file

resale_sub_flat <- read_rds("data/aspatial/rs_coords.rds")

Assign and transform CRS

Data from OneMapSG is in WGS84, as evident from the decimal-degree values. We will first create an sf object in WGS84 (EPSG: 4326) before transforming it to SVY21 (EPSG: 3414).

resale_coords_sf <- st_as_sf(resale_sub_flat,
                    coords = c("longitude", 
                               "latitude"),
                    crs=4326) %>%
  st_transform(crs = 3414)
st_crs(resale_coords_sf)

Check if there are any NA outside of what we confirmed earlier

rows_with_na <- which(is.na(resale_coords_sf), arr.ind=TRUE)
if (nrow(rows_with_na) > 0) {
  message("The following rows have NA values:")
  print(rows_with_na)
} else {
  message("The dataframe does not contain any rows with NA values.")
}

Remove all unnecessary columns

We only need to keep the identifier, time, resale price, spatial point data, and the structural factors listed below:

  • Area of Unit

  • Floor Level

  • Remaining Lease

  • Age of Unit

Note that we have not handled the age of unit yet. We’ll handle that later.

trim_resale_flat <- resale_coords_sf %>%
  select(1:3, 6:8, 11:12)

Determining CBD

We also need to consider the distance to the CBD. We will take ‘Downtown Core’ as our reference point, which is located in the south of Singapore.

cbd_lat <- 1.287953
cbd_long <- 103.851784

cbd_sf <- data.frame(cbd_lat, cbd_long) %>%
  st_as_sf(coords = c("cbd_long", "cbd_lat"), crs=4326) %>%
  st_transform(crs=3414)

Proximity Distance Calculation

For our next step, we will need to integrate all of our geospatial data by calculating the proximity to the amenities.

The following function calculates proximity

  • rowMins from the matrixStats package finds the shortest distance in each row of the distance matrix.

proximity <- function(df1, df2, varname) {
  dist_matrix <- st_distance(df1, df2) %>%
    units::drop_units()
  df1[,varname] <- rowMins(dist_matrix)
  return(df1)
}

Implement the Proximity Calculations:

trim_resale_flat <- proximity(trim_resale_flat, cbd_sf, "PROX_CBD") %>%
  proximity(., eldercare3414, "PROX_ELDERCARE") %>%
  proximity(., hawker_centre3414, "PROX_HAWKER") %>%
  proximity(., isp_clinics3414, "PROX_ISP") %>%
  proximity(., mrt_lrt_ns_sf_3414, "PROX_MRT") %>%
  proximity(., parks3414, "PROX_PARKS") %>%
  proximity(., top_primary_sch_sf_3414, "PROX_TOP_PRI") %>%
  proximity(., shopping_mall_sf_3414, "PROX_SHOPPING") %>%
  proximity(., supermarkets3414, "PROX_SUPERMARKETS")

Facility Count within Radius

In addition to calculating the shortest distance between points, we’re also interested in finding out how many facilities are located within a specific radius. To accomplish this, we’ll use the st_distance() function to calculate the distance between the flats and the desired facilities. We’ll then use rowSums() to add up the observations and obtain the count of facilities within the desired radius. The resulting values will be added to the data frame as a new column.

num_radius <- function(df1, df2, varname, radius) {
  dist_matrix <- st_distance(df1, df2) %>%
    units::drop_units() %>%
    as.data.frame()
  df1[,varname] <- rowSums(dist_matrix <= radius)
  return(df1)
}

Implement the function:

trim_resale_flat <- 
  num_radius(trim_resale_flat, kindergarten3414, "NUM_KINDERGARTEN", 350) %>%
  num_radius(., childcare3414, "NUM_CHILDCARE", 350) %>%
  num_radius(., bus_stop_cleaned_3414, "NUM_BUS_STOPS", 350) %>%
  num_radius(., primary_sch_sf_3414, "NUM_PRI_SCHS", 1000) %>%
  num_radius(., tourism3414, "NUM_TOURIST_SITES", 1000)

Save the Data as a Shapefile

Before we do that, we should shorten some of the column names: shapefile field names are limited to 10 characters, so longer names will be truncated on write (we will rename the truncated ones back after reimporting).
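As a quick check (an illustrative snippet, not from the original workflow), we can list any column names longer than the shapefile's 10-character limit:

names(trim_resale_flat)[nchar(names(trim_resale_flat)) > 10]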

trim_resale_flat <- trim_resale_flat %>%
  rename("AREA_SQM" = "floor_area_sqm", 
         "LEASE_MTHS" = "remaining_lease_mths", 
         "PRICE"= "resale_price",
         "FLAT" = "flat_type", 
         "STOREY" = "storey_range")
st_write(trim_resale_flat, "data/geospatial/resale_flat_final.shp")

Reimport by reloading our dataset

resale_flat_sf <- st_read(dsn="data/geospatial", layer="resale_flat_final") 

We're done with gathering and creating our data! Now we can move on to exploratory data analysis and building our prediction models. Note that some data still need to be modified: the ordinal storey ranges and the flat's age.

Calculate age of unit

To calculate the age of a unit, we take 2023 (the current year) minus lease_commence_date. Then we append the result to our new sf object.

resale_flat_sub_age <- resale_flat_sub %>%
  mutate("age_of_unit" = 2023 - lease_commence_date)
resale_flat_sf <- resale_flat_sf %>%
  mutate("age_of_unit" = resale_flat_sub_age$age_of_unit)

With the age of unit handled, we move on to the storey data.

Storey Data:

Storey range is categorical data with a meaningful order: higher or lower floors could have an impact on the price of an HDB flat. Therefore, we should convert this column into numbers that the regression can learn from.

storeys <- sort(unique(resale_flat_sf$STOREY))
storey_order <- 1:length(storeys)
storey_range_order <- data.frame(storeys, storey_order)
storey_range_order

From the above, we can see that the storey ranges have been sorted and assigned an order from 1 to 10.

We will combine this data with the base resale sf object. Unfortunately, we have to drop the geometry column before we can perform the join, so we'll do that, join the two tables, then reattach the geometry.

geom_temp_store <- st_geometry(resale_flat_sf)
temp_no_geo <- st_drop_geometry(resale_flat_sf)
temp_join <- dplyr::left_join(temp_no_geo, storey_range_order, by=c("STOREY"= "storeys"))
final_resale_output <- st_sf(temp_join, geom_temp_store)
final_resale_output

Write and read file

write_rds(final_resale_output, "data/model/final_resale_output.rds")
final_resale_output <- read_rds("data/model/final_resale_output.rds")

Rename the columns so they are more readable

Due to abbreviation from the shapefile's 10-character limit, we will rename the columns back so they are readable.

final_resale_output <- final_resale_output %>%
  rename("PROX_CBD" = "PROX_CB", 
         "PROX_ELDERCARE" = "PROX_EL", 
         "PROX_HAWKER"= "PROX_HA",
         "PROX_ISP" = "PROX_IS", 
         "PROX_MRT" = "PROX_MR",
         "PROX_PARK" = "PROX_PA",
         "PROX_TOP_PRI" = "PROX_TO",
         "PROX_SHOPPING" = "PROX_SH",
         "PROX_SUPERMARKET" = "PROX_SU",
         "NUM_KINDERGARTEN" = "NUM_KIN",
         "NUM_CHILDCARE" = "NUM_CHI",
         "NUM_BUS_STOPS" = "NUM_BUS",
         "NUM_PRI_SCHS" = "NUM_PRI",
         "NUM_TOURIST_SITES" = "NUM_TOU",
         "geometry" = "geom_temp_store")

Drop ‘STOREY’

final_resale_output <- final_resale_output %>%
  select(c(1:4, 6:24))

Write and read final output

write_rds(final_resale_output, "data/model/final_resale_output.rds")
final_resale_output <- read_rds("data/model/final_resale_output.rds")

Exploratory Analysis:

Now that we have finished our data wrangling, we can move into exploratory analysis and prediction.

We will start by visualizing the resale prices

ggplot(data=final_resale_output, aes(x=`PRICE`)) +
  geom_histogram(bins=20, color="black", fill="light blue") +
    labs(title = "Distribution of Resale Prices",
         x = "Resale Prices",
         y = 'Frequency')

The distribution is right-skewed, meaning the majority of transactions occur at the lower end of the price range, with some extreme values.
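One way to see past the skew (an optional sketch, not part of the original analysis) is to re-plot the same histogram on a log10 scale, which spreads out the lower-end prices:

ggplot(data=final_resale_output, aes(x=`PRICE`)) +
  geom_histogram(bins=20, color="black", fill="light blue") +
  scale_x_log10() +
  labs(title = "Distribution of Resale Prices (log10 scale)",
       x = "Resale Prices (log10)",
       y = 'Frequency')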

Histogram Plotting of the Structural Factors

Extract the column names to plot

s_factors <- c("AREA_SQ", "LEASE_M", "age_of_unit", "storey_order")

Create a list to store histograms on structural factors

  • Create a list the same length as our structural factors (4 in this case)

  • Plot the histogram for each structural factor

  • Append each histogram to the created list

s_factor_hist_list <- vector(mode = "list", length = length(s_factors))
for (i in 1:length(s_factors)) {
  hist_plot <- ggplot(final_resale_output, aes_string(x = s_factors[[i]])) +
    geom_histogram(color="black", fill = "light blue") +
    labs(title = s_factors[[i]]) +
    theme(plot.title = element_text(size = 10),
          axis.title = element_blank())
  
  s_factor_hist_list[[i]] <- hist_plot
}

Plot the histograms

We'll use the ggarrange() function to organize them into a 2x2 grid.

ggarrange(plotlist = s_factor_hist_list,
          ncol = 2,
          nrow = 2)

AREA_SQ:

Most four-room flats cluster around 90+ square metres, though there are anomalies exceeding 110 square metres.

LEASE_M and Age of Unit:

They appear to be inversely related: there is a large number of four-room flats that are relatively new and have a long remaining lease ahead of them.

Storey Order:

There appears to be a familiar pattern where most four-room flats sit on lower floors. This may simply reflect that most HDB blocks are not very tall, with a few anomalous blocks built extremely high.

Locational Factors

l_factor <- c("PROX_CBD", "PROX_ELDERCARE", "PROX_HAWKER", "PROX_ISP", "PROX_MRT", "PROX_PARK", "PROX_TOP_PRI", "PROX_SHOPPING", "PROX_SUPERMARKET",
              "NUM_KINDERGARTEN", "NUM_CHILDCARE", "NUM_BUS_STOPS", "NUM_PRI_SCHS", "NUM_TOURIST_SITES")
l_factor_hist_list <- vector(mode = "list", length = length(l_factor))
for (i in 1:length(l_factor)) {
  hist_plot <- ggplot(final_resale_output, aes_string(x = l_factor[[i]])) +
    geom_histogram(color="midnight blue", fill = "light sky blue") +
    labs(title = l_factor[[i]]) +
    theme(plot.title = element_text(size = 10),
          axis.title = element_blank())
  
  l_factor_hist_list[[i]] <- hist_plot
}
ggarrange(plotlist = l_factor_hist_list,
          ncol= 4,
          nrow = 4)

From observation, the proximity variables (eldercare, hawker centres, ISP clinics, MRT, parks, supermarkets, and shopping malls) are right-skewed: most flats are a short distance from these amenities, with a minority located far away. This is a good thing, as it means most HDB flats are close to ISP clinics, MRT stations, parks, supermarkets, and shopping malls.

View the prices on the map:

tmap_mode("view")
tmap_options(check.and.fix = TRUE)
tm_shape(final_resale_output) +  
  tm_dots(col = "PRICE",
          alpha = 0.6,
          style="quantile") +
  # sets minimum zoom level to 11, sets maximum zoom level to 14
  tm_view(set.zoom.limits = c(11,14))

Change back to plot.

tmap_mode("plot")

Correlation Plot

Now, we'll create a correlation plot to look at the correlations between the different columns. We will drop columns with high multicollinearity (pairwise correlation > 0.8).

To compute the correlations, we first remove the geometry from the table.

final_resale_output_nogeom <- final_resale_output %>%
  st_drop_geometry()

View the chart

corrplot::corrplot(cor(final_resale_output_nogeom[, 5:ncol(final_resale_output_nogeom)]), diag = FALSE, order = "AOE",
         tl.pos = "td", tl.cex = 0.5, method = "number", type = "upper")

From the corrplot chart, we can see that the remaining lease is inversely correlated with the age of the unit. Therefore, age of unit will be excluded from our linear regression.

Split our Data

For our OLS and GRF models, we will use 1st January 2021 to 31st December 2022 as the training set and January to February 2023 as the test set.

Set seed.

set.seed(1234)
resale_flat_train <-filter(final_resale_output, month >= "2021-01" & month <= "2022-12")
resale_flat_test <-filter(final_resale_output, month >= "2023-01" & month <= "2023-02")

Non-Spatial OLS

price_mlr <- lm(PRICE ~ AREA_SQ +
                  storey_order + LEASE_M +
                  PROX_CBD + PROX_ELDERCARE + PROX_HAWKER +
                  PROX_ISP + PROX_MRT + PROX_PARK + PROX_TOP_PRI + 
                  PROX_SHOPPING + PROX_SUPERMARKET + NUM_KINDERGARTEN +
                  NUM_CHILDCARE + NUM_BUS_STOPS + NUM_PRI_SCHS + 
                  NUM_TOURIST_SITES,
                data=resale_flat_train)

Save Results:

write_rds(price_mlr, "data/model/price_mlr.rds" )
price_mlr <- read_rds("data/model/price_mlr.rds")
summary(price_mlr)

Call:
lm(formula = PRICE ~ AREA_SQ + storey_order + LEASE_M + PROX_CBD + 
    PROX_ELDERCARE + PROX_HAWKER + PROX_ISP + PROX_MRT + PROX_PARK + 
    PROX_TOP_PRI + PROX_SHOPPING + PROX_SUPERMARKET + NUM_KINDERGARTEN + 
    NUM_CHILDCARE + NUM_BUS_STOPS + NUM_PRI_SCHS + NUM_TOURIST_SITES, 
    data = resale_flat_train)

Residuals:
    Min      1Q  Median      3Q     Max 
-364678  -44410   -1224   43893  361399 

Coefficients:
                    Estimate Std. Error  t value Pr(>|t|)    
(Intercept)       56418.2024  8071.0816    6.990 2.82e-12 ***
AREA_SQ            3470.3497    69.4165   49.993  < 2e-16 ***
storey_order      15899.2024   228.3414   69.629  < 2e-16 ***
LEASE_M              30.1038     0.2686  112.070  < 2e-16 ***
PROX_CBD            -16.6569     0.1621 -102.771  < 2e-16 ***
PROX_ELDERCARE       -8.8123     0.8041  -10.959  < 2e-16 ***
PROX_HAWKER         -20.6838     0.9481  -21.817  < 2e-16 ***
PROX_ISP            -17.2855     2.3869   -7.242 4.56e-13 ***
PROX_MRT            -16.5187     1.2924  -12.781  < 2e-16 ***
PROX_PARK             5.1464     1.1690    4.402 1.08e-05 ***
PROX_TOP_PRI          1.8706     0.2472    7.568 3.94e-14 ***
PROX_SHOPPING       -13.1984     1.3355   -9.883  < 2e-16 ***
PROX_SUPERMARKET     10.1508     3.1209    3.253  0.00115 ** 
NUM_KINDERGARTEN  10134.4235   515.2692   19.668  < 2e-16 ***
NUM_CHILDCARE     -3486.3416   245.9476  -14.175  < 2e-16 ***
NUM_BUS_STOPS      1051.2177   168.3720    6.243 4.35e-10 ***
NUM_PRI_SCHS      -6515.3464   333.8226  -19.517  < 2e-16 ***
NUM_TOURIST_SITES  4902.9376   316.1419   15.509  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 68670 on 23638 degrees of freedom
Multiple R-squared:  0.7191,    Adjusted R-squared:  0.7189 
F-statistic:  3559 on 17 and 23638 DF,  p-value: < 2.2e-16

Predict Results (OLS)

lm_ols_pred <- predict(price_mlr, resale_flat_test)
lm_ols_pred_df <- as.data.frame(lm_ols_pred)
test_data_ols_lm <- cbind(resale_flat_test, lm_ols_pred_df)

Calculate Root Mean Square Error (RMSE)

rmse(test_data_ols_lm$PRICE, 
     test_data_ols_lm$lm_ols_pred)
[1] 82456.97
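As a sanity check (equivalent arithmetic, since RMSE is just the square root of the mean squared error), we can compute the same value by hand:

sqrt(mean((test_data_ols_lm$PRICE - test_data_ols_lm$lm_ols_pred)^2))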

Visualize the Predicted Values (non-spatially weighted)

ggplot(data = test_data_ols_lm,
       aes(x = lm_ols_pred,
           y = PRICE)) +
  geom_point()

From a look at the results, the model appears to perform quite well. However, we will now introduce a geographically weighted model, the geographical random forest, to see if we can improve the predictions.

Geographical Random Forest Model

For our proof of concept (the full training set takes too long to run), we will use only the data from June to December 2022.

resale_flat_train_6mth <-  filter(resale_flat_train,month >= "2022-06" & month <= "2022-12")

Extracting coordinates data

coords_train <- st_coordinates(resale_flat_train_6mth)
coords_test <- st_coordinates(resale_flat_test)

Dropping the Geometry column for training

resale_flat_train_6mth_no_geom <- resale_flat_train_6mth %>% 
  st_drop_geometry()

Getting the bandwidth

We will use the grf.bw() function to obtain the bandwidth.

grf_bw <- grf.bw(formula = PRICE ~ AREA_SQ +
                  storey_order + LEASE_M +
                  PROX_CBD + PROX_ELDERCARE + PROX_HAWKER +
                  PROX_ISP + PROX_MRT + PROX_PARK + PROX_TOP_PRI + 
                  PROX_SHOPPING + PROX_SUPERMARKET + NUM_KINDERGARTEN +
                  NUM_CHILDCARE + NUM_BUS_STOPS + NUM_PRI_SCHS + 
                  NUM_TOURIST_SITES,
                  dataset=resale_flat_train_6mth_no_geom,
                  coords=coords_train,
                  kernel="adaptive")

Results:

grf.bw ran for over 24 hours without converging, so I picked the bandwidth with the highest reported R2 score: 380, with a local-model R2 of 0.930317378559901.
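The manual selection can be expressed as a small sketch: collect the printed (bandwidth, R2) pairs into a data frame and take the maximum (shown here with just a few of the pairs from the log below):

bw_r2 <- data.frame(
  bandwidth = c(378, 379, 380, 381),
  r2 = c(0.927580452294152, 0.928566237496647, 0.930317378559901, 0.928619424775118)
)
# Row with the highest local-model R2
bw_r2[which.max(bw_r2$r2), ]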

#Bandwidth: 331
# R2 of Local Model: 0.928487917777661
# Bandwidth: 332
# R2 of Local Model: 0.927607441811455
# Bandwidth: 333
# R2 of Local Model: 0.928584376750302
# Bandwidth: 334
# R2 of Local Model: 0.927188503449855
# Bandwidth: 335
# R2 of Local Model: 0.929046952369366
# Bandwidth: 336
# R2 of Local Model: 0.92744619458086
# Bandwidth: 337
# R2 of Local Model: 0.928462439167593
# Bandwidth: 338
# R2 of Local Model: 0.927239387395655
# Bandwidth: 339
# R2 of Local Model: 0.928236245740123
# Bandwidth: 340
# R2 of Local Model: 0.928565253996034
# Bandwidth: 341
# R2 of Local Model: 0.928213147592213
# Bandwidth: 342
# R2 of Local Model: 0.928426868999498
# Bandwidth: 343
# R2 of Local Model: 0.92776173781965
# Bandwidth: 344
# R2 of Local Model: 0.928358812356241
# Bandwidth: 345
# R2 of Local Model: 0.927726205306799
# Bandwidth: 346
# R2 of Local Model: 0.928320483904636
# Bandwidth: 347
# R2 of Local Model: 0.927823271079961
# Bandwidth: 348
# R2 of Local Model: 0.928182141355479
# Bandwidth: 349
# R2 of Local Model: 0.928219600640922
# Bandwidth: 350
# R2 of Local Model: 0.928772627904435
# Bandwidth: 351
# R2 of Local Model: 0.928327557377999
# Bandwidth: 352
# R2 of Local Model: 0.928847537528902
# Bandwidth: 353
# R2 of Local Model: 0.927822293404792
# Bandwidth: 354
# R2 of Local Model: 0.928236357659819
# Bandwidth: 355
# R2 of Local Model: 0.929322496040631
# Bandwidth: 356
# R2 of Local Model: 0.92753837753389
# Bandwidth: 357
# R2 of Local Model: 0.927505363999238
# Bandwidth: 358
# R2 of Local Model: 0.928360964730128
# Bandwidth: 359
# R2 of Local Model: 0.927550988569272
# Bandwidth: 360
# R2 of Local Model: 0.928975884884678
# Bandwidth: 361
# R2 of Local Model: 0.927899562521322
# Bandwidth: 362
# R2 of Local Model: 0.927511569606382
# Bandwidth: 363
# R2 of Local Model: 0.928647887992443
# Bandwidth: 364
# R2 of Local Model: 0.928757292982437
# Bandwidth: 365
# R2 of Local Model: 0.928908580739289
# Bandwidth: 366
# R2 of Local Model: 0.927962472730493
# Bandwidth: 367
# R2 of Local Model: 0.928219469964645
# Bandwidth: 368
# R2 of Local Model: 0.928320922284468
# Bandwidth: 369
# R2 of Local Model: 0.92849783498469
# Bandwidth: 370
# R2 of Local Model: 0.928921900399736
# Bandwidth: 371
# R2 of Local Model: 0.928891008100474
# Bandwidth: 372
# R2 of Local Model: 0.927549027383622
# Bandwidth: 373
# R2 of Local Model: 0.928575480055919
# Bandwidth: 374
# R2 of Local Model: 0.927672503806709
# Bandwidth: 375
# R2 of Local Model: 0.928786463824046
# Bandwidth: 376
# R2 of Local Model: 0.927867914560842
# Bandwidth: 377
# R2 of Local Model: 0.928642768939767
# Bandwidth: 378
# R2 of Local Model: 0.927580452294152
# Bandwidth: 379
# R2 of Local Model: 0.928566237496647
# Bandwidth: 380
# R2 of Local Model: 0.930317378559901
# Bandwidth: 381
# R2 of Local Model: 0.928619424775118
# Bandwidth: 382
# R2 of Local Model: 0.928393814080233
# Bandwidth: 383
# R2 of Local Model: 0.928554527374789
# Bandwidth: 384
# R2 of Local Model: 0.929997689542621
# Bandwidth: 385
# R2 of Local Model: 0.928451798032526
# Bandwidth: 386
# R2 of Local Model: 0.928606446772016
# Bandwidth: 387
# R2 of Local Model: 0.927572173787888
# Bandwidth: 388
# R2 of Local Model: 0.9279460190051
# Bandwidth: 389
# R2 of Local Model: 0.928639325905448
# Bandwidth: 390
# R2 of Local Model: 0.927425893678889
# Bandwidth: 391
# R2 of Local Model: 0.928984441112024
# Bandwidth: 392
# R2 of Local Model: 0.929309280214433
# Bandwidth: 393
# R2 of Local Model: 0.928634927942065
# Bandwidth: 394
# R2 of Local Model: 0.927712167511897
# Bandwidth: 395
# R2 of Local Model: 0.92926953966394
# Bandwidth: 396
# R2 of Local Model: 0.92729235036897
# Bandwidth: 397
# R2 of Local Model: 0.927968728080108
# Bandwidth: 398
# R2 of Local Model: 0.928743814521345
# Bandwidth: 399
# R2 of Local Model: 0.92843759664944
# Bandwidth: 400
# R2 of Local Model: 0.927845394422112
# Bandwidth: 401
# R2 of Local Model: 0.927834121000581
# Bandwidth: 402
# R2 of Local Model: 0.92772708923665
# Bandwidth: 403
# R2 of Local Model: 0.928456765302502
# Bandwidth: 404
# R2 of Local Model: 0.928707329746776
# Bandwidth: 405
# R2 of Local Model: 0.928201723851263
# Bandwidth: 406
# R2 of Local Model: 0.927933401611096
# Bandwidth: 407
# R2 of Local Model: 0.928078755992115
# Bandwidth: 408
# R2 of Local Model: 0.92854535038349
# Bandwidth: 409
# R2 of Local Model: 0.928843574362936
# Bandwidth: 410
# R2 of Local Model: 0.929569049808885
# Bandwidth: 411
# R2 of Local Model: 0.926971642583936
# Bandwidth: 412
# R2 of Local Model: 0.928641261036375
# Bandwidth: 413
# R2 of Local Model: 0.928638800541796
# Bandwidth: 414
# R2 of Local Model: 0.927717133079944
# Bandwidth: 415
# R2 of Local Model: 0.929141072043574
# Bandwidth: 416
# R2 of Local Model: 0.928819539176847
# Bandwidth: 417
# R2 of Local Model: 0.928112567996163
# Bandwidth: 418
# R2 of Local Model: 0.928726864592745
# Bandwidth: 419
# R2 of Local Model: 0.92868879797587
# Bandwidth: 420
# R2 of Local Model: 0.928535230457292
# Bandwidth: 421
# R2 of Local Model: 0.928525595541211
# Bandwidth: 422
# R2 of Local Model: 0.92841857485932
# Bandwidth: 423
# R2 of Local Model: 0.928573329455401
# Bandwidth: 424
# R2 of Local Model: 0.928389768864086
# Bandwidth: 425
# R2 of Local Model: 0.928208100557079
# Bandwidth: 426
# R2 of Local Model: 0.928417597646
# Bandwidth: 427
# R2 of Local Model: 0.928920331982284
# Bandwidth: 428
# R2 of Local Model: 0.928862463865919
# Bandwidth: 429
# R2 of Local Model: 0.929183290315554
# Bandwidth: 430
# R2 of Local Model: 0.928074307805113
# Bandwidth: 431
# R2 of Local Model: 0.929398256657408
# Bandwidth: 432
# R2 of Local Model: 0.929389867029869
# Bandwidth: 433
# R2 of Local Model: 0.927452734648107
# Bandwidth: 434
# R2 of Local Model: 0.927332916905707

Running the GRF model

set.seed(1234)
gwRF_adaptive <- grf(formula = PRICE ~ AREA_SQ +
                  storey_order + LEASE_M +
                  PROX_CBD + PROX_ELDERCARE + PROX_HAWKER +
                  PROX_ISP + PROX_MRT + PROX_PARK + PROX_TOP_PRI + 
                  PROX_SHOPPING + PROX_SUPERMARKET + NUM_KINDERGARTEN +
                  NUM_CHILDCARE + NUM_BUS_STOPS + NUM_PRI_SCHS + 
                  NUM_TOURIST_SITES,
                  dframe=resale_flat_train_6mth_no_geom, 
                  bw=380,
                  kernel="adaptive",
                  coords=coords_train,
                  ntree=100)

Save Model Output

write_rds(gwRF_adaptive, "data/model/gwRF_adaptive.rds")

Read Model Output

gwRF_adaptive <- read_rds("data/model/gwRF_adaptive.rds")

Predicting with Test Data

Drop the geometry column

resale_flat_test_nogeom <- cbind(resale_flat_test, coords_test) %>% 
  st_drop_geometry()
gwRF_pred <- predict.grf(gwRF_adaptive, 
                           resale_flat_test_nogeom, 
                           x.var.name="X",
                           y.var.name="Y", 
                           local.w=1,
                           global.w=0)
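Here, as we understand SpatialML's predict.grf(), local.w and global.w blend the local and global forest predictions; setting local.w=1 and global.w=0 relies entirely on the local (geographically weighted) forests.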

Save the Test for future use

write_rds(gwRF_pred, "data/model/GRF_pred.rds")
gwRF_pred <- read_rds("data/model/GRF_pred.rds")

Convert the prediction output into a data frame

GRF_pred_df <- as.data.frame(gwRF_pred)
summary(gwRF_adaptive$LocalModelSummary)

Bind the predicted value onto the test values

resale_flat_test_bind <- cbind(resale_flat_test_nogeom, GRF_pred_df)

Write and save

write_rds(resale_flat_test_bind, "data/model/resale_flat_test_bind.rds")

Load the file

resale_flat_test_bind <- read_rds("data/model/resale_flat_test_bind.rds")

Calculating the Root Mean Square Error

rmse(resale_flat_test_bind$PRICE, 
     resale_flat_test_bind$gwRF_pred)
[1] 34547.61

Visualizing the predicted values

ggplot(data = resale_flat_test_bind,
       aes(x = gwRF_pred,
           y = PRICE)) +
  geom_point()

Evaluation between OLS and Geographic Random Forest

We first look at the variable importance from GRF

vi_df <- as.data.frame(gwRF_adaptive$Global.Model$variable.importance)
vi_df
                  gwRF_adaptive$Global.Model$variable.importance
AREA_SQ                                             5.234475e+12
storey_order                                        1.554667e+13
LEASE_M                                             2.460844e+13
PROX_CBD                                            3.570041e+13
PROX_ELDERCARE                                      3.735912e+12
PROX_HAWKER                                         3.470587e+12
PROX_ISP                                            1.297481e+12
PROX_MRT                                            2.764498e+12
PROX_PARK                                           2.036445e+12
PROX_TOP_PRI                                        5.026554e+12
PROX_SHOPPING                                       2.255388e+12
PROX_SUPERMARKET                                    1.583102e+12
NUM_KINDERGARTEN                                    8.320563e+11
NUM_CHILDCARE                                       7.135871e+11
NUM_BUS_STOPS                                       7.861290e+11
NUM_PRI_SCHS                                        1.958971e+12
NUM_TOURIST_SITES                                   2.383916e+12

From the result above, we can see that proximity to the CBD contributes the most towards prices, followed by the remaining lease, the storey the flat is on, and the floor area. These are expected; what was unexpected is that proximity to a top primary school comes right after, indicating that having a top primary school in the vicinity is a strong driver of prices.

This suggests that people are willing to pay a premium, driving up market prices, to secure proximity to top primary schools and give their children the best start.

OLS Stats:

  • RMSE: 82456.97

  • R2: 0.7189

Geographical Random Forest Stats:

  • RMSE: 34547.61

  • Global R2: 0.943793

From the results, it is immediately evident that incorporating geographical weighting substantially improves the predictions. But why is that?

It is due to a concept known as ‘spatial heterogeneity’, an important factor that OLS does not take into account. Regular OLS assumes that the relationship between HDB housing prices and the explanatory variables is constant across the entire study area. It therefore ignores the fact that housing prices may be influenced by different factors in different neighborhoods, which gives rise to spatial heterogeneity.

Spatial heterogeneity refers to the variation in spatial patterns or distributions of a phenomenon or variable across a given area. In other words, things or attributes can differ across space, and those spatial differences can be significant.

For example, location-specific factors such as proximity to good schools or to the CBD can affect housing prices differently in different neighborhoods. Geographically Weighted Regression (GWR) takes this spatial heterogeneity into account by allowing spatially varying relationships between the explanatory variables and housing prices. By estimating the relationship between the variables at each location, GWR can identify the underlying factors driving spatial variation in housing prices.

Therefore, by using our proximity-based explanatory variables together with a geographically weighted model, we can model the localized relationships and better predict housing prices, capturing influential variables at the neighborhood level that OLS can miss.
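To make the idea concrete, here is a minimal sketch of the distance-decay weighting that underlies geographically weighted methods (a Gaussian kernel; GWmodel and SpatialML offer several kernels, and this snippet is purely illustrative):

# Weight of an observation at distance d from the regression point,
# given bandwidth bw: nearby sales influence the local fit more than distant ones.
gaussian_weight <- function(d, bw) exp(-0.5 * (d / bw)^2)
gaussian_weight(c(0, 500, 2000), bw = 1000)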

Hence, geographically weighted methods are recommended over OLS, as they provide more nuanced and, most importantly, locally specific insights, which can produce more accurate predictions of housing prices across neighborhoods.

Acknowledgement:

  • Prof Kam for his guidance and notes

  • Megan and Nor Aisyah for the codes and preparations