
OpenFEMA Developer Resources

Welcome to the OpenFEMA Developer Resources page, devoted to providing additional development information regarding our Application Programming Interface (API) for use in your applications and mashups. The API is free of charge and does not currently have user registration requirements. Please contact the OpenFEMA Team at openfema@fema.dhs.gov to suggest additional data sets and additional API features.

Please review the API Documentation for a list of commands that can be used with each endpoint. As OpenFEMA’s main purpose is to act as a content delivery mechanism, each endpoint represents a data set. Therefore, the documentation does not outline each one; they all operate in the same manner. Metadata (content descriptions, update frequency, data dictionary, etc.) for each data set can be found on the individual data set pages. The Data Sets page provides a list of the available endpoints.
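
For example, the same URI commands work against any endpoint. The two illustrative queries below use data sets and fields that appear in the examples later on this page; each selects a single column and limits the result to 10 records:

https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$select=disasterNumber&$top=10
https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$select=id&$top=10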

The Changelog identifies new, changing, and deprecated data sets, and describes new features added to the API.

The API Specifics/Technical portion of the FAQ may be of particular interest to developers.

The Large Data Set Guide provides recommendations and techniques for working with OpenFEMA's large data files. Some code examples are included.

Following are examples, or recipes, of commonly performed actions, many expressed in different programming or scripting languages. We will continue to expand this section. If you have code examples you would like to see, please contact the OpenFEMA Team. We also welcome any code examples you would like to provide.

Accessing Data from API Endpoint

There are many ways to access data from the OpenFEMA API, such as using a programming language, a scripting language, or a built-in command. The following examples demonstrate how to get data using an OpenFEMA API endpoint. All of these examples return disaster summaries for Hurricane Isabel (disaster number 1491).

Note that not all of the data may be returned. By default, only 1,000 records are returned. If more data exists, it will be necessary to page through the data to capture it all. See the API Documentation for more information.

HTTP/URL – Paste into your browser's address bar.

https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber eq 1491

cURL – Saving returned data to a file. Note the %20 URL encoding used for spaces.

curl 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491' >> output.txt

wget – Saving returned data to a file.

wget -O output.txt 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491'

Windows PowerShell 3.0 – Note that the site uses TLS 1.2, so the security protocol must be set first.

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest -Uri 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491' -OutFile c:\temp\output.txt
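
Python – As a minimal sketch (not part of the original set of examples above), the same request can be made with Python's standard urllib module, which is also used in the paging examples further down this page. The response is parsed with the json module, and the top-level keys are printed rather than assuming a particular root element name.

#!/usr/bin/env python3
# Minimal sketch: request the Hurricane Isabel (disaster 1491) summaries and
#   load the response as JSON. Spaces in the $filter value are %20 encoded.
import urllib.request
import json

url = "https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491"
with urllib.request.urlopen(url) as response:
    data = json.loads(response.read().decode())

# The response contains a metadata object plus the data set records.
print(list(data.keys()))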

Paging Through Data

For performance reasons, only 1,000 records are returned per API endpoint call. If more than 1,000 records exist, it will be necessary to page through the data, using the $skip and $inlinecount parameters, to retrieve every record. The metadata header returned as part of the data set JSON response will only display the full record count if the $inlinecount parameter is used; otherwise, it will have a value of 0. Write a loop that continues making API calls, incrementing the $skip parameter each time, until the number of records retrieved equals the total record count. See the URI commands section of the OpenFEMA Documentation for additional information regarding these parameters.
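
For instance, a first call using $inlinecount (and $top=1 to keep the response small) returns the total record count in the metadata, and subsequent calls advance $skip in increments of 1,000 until every record has been retrieved. The FemaWebDisasterDeclarations endpoint is used here purely as an illustration:

https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$inlinecount=allpages&$top=1
https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$skip=0&$top=1000
https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$skip=1000&$top=1000
https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$skip=2000&$top=1000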

Following are examples in various languages.

Bash - Downloading a full data set with more than 1,000 records and saving the results to one JSON file.

#!/bin/bash
# Paging example using bash. Output in JSON.


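# Base URL for this endpoint with $inlinecount set to return total record count. Add
#   filters, column selection, and sort order to the end of the baseURL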
baseUrl='https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$inlinecount=allpages'


# Return 1 record with your criteria to get total record count. Specifying only 1
#   column here to reduce amount of data returned. The backslashes are needed before
#   the API parameters otherwise bash will interpret them as variables. The -s switch
#   in the curl command will suppress its download status information.
result=$(curl -s -H "Content-Type: application/json" "$baseUrl&\$select=id&\$top=1")


# use jq (a json parser) to extract the count - not included in line above for clarity
recCount=$(echo "$result" | jq '.metadata.count')


# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
top='1000'
loopNum=$((($recCount+$top-1)/$top))


# send some logging info to the console so we know what is happening
echo "START "$(date)", $recCount records, $top returned per call, $loopNum iterations needed."


# Initialize our file. Only doing this because of the type of file wanted. See the loop below.
#   The root json entity is usually the name of the dataset, but you can use any name.
echo '{"femawebdisasterdeclarations":[' > output.json


# Loop and call the API endpoint changing the record start each iteration. NOTE: Each call will
# return the metadata object along with the results. This should be stripped off before appending
# to the final file or use the $metadata parameter to suppress it.
i=0
skip=0
while [ "$i" -lt $loopNum ]
do
    # Execute API call, skipping records we have already retrieved, excluding metadata header, in jsona.
    # NOTE: By default data is returned as a JSON object, the data set name being the root element. Unless
    #   you extract records as you process, you will end up with 1 distinct JSON object for EVERY call/iteration.
    #   An alternative is to return the data as JSONA (an array of json objects) with no root element - just
    #   a bracket at the start and end. Again, one bracketed array will be returned for every call. Since I
    #   want 1 JSON array, not many, I have stripped off the closing bracket and added a comma. For the
    #   last iteration, do not add a comma and terminate the object with a bracket and brace. This certainly
    #   can be done differently, it just depends on what you are ultimately trying to accomplish.
    results=$(curl -s -H "Content-Type: application/json" "$baseUrl&\$metadata=off&\$format=jsona&\$skip=$skip&\$top=$top")


    # append results to file - the following line is just a simple append
    #echo $results >> "output.json"
    
    # Append results to file, trimming off first and last JSONA brackets, adding comma except for last call,
    #   AND root element terminating array bracket and brace to end unless on last call. The goal here is to 
    #   create a valid JSON file that contains ALL the records. This can be done differently.
    if [ "$i" -eq "$(( $loopNum - 1 ))" ]; then
        # on the last so terminate the single JSON object
        echo "${results:1:${#results}-2}]}" >> output.json
    else
        echo "${results:1:${#results}-2}," >> output.json
    fi


    i=$(( i + 1 ))       # increment the loop counter
    skip=$((i * $top))   # number of records to skip on next iteration


    echo "Iteration $i done"
done
# use jq to count the JSON array elements to make sure we got what we expected
echo "END "$(date)", $(jq '.femawebdisasterdeclarations | length' output.json) records in file"

Bash - Downloading a full data set with more than 1,000 records and saving the results to one CSV file.

#!/bin/bash
# Paging example using bash. Output in CSV.


# Base URL for this endpoint with $inlinecount set to return total record count. Add 
#   filters, column selection, and sort order to the end of the baseURL
baseUrl='https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$inlinecount=allpages'


# Return 1 record with your criteria to get total record count. Specifying only 1
#   column here to reduce amount of data returned. The backslashes are needed before
#   the API parameters otherwise bash will interpret them as variables. The -s switch
#   in the curl command will suppress its download status information.
result=$(curl -s -H "Content-Type: application/json" "$baseUrl&\$select=id&\$top=1")


# use jq (a json parser) to extract the count - not included in line above for clarity
recCount=$(echo "$result" | jq '.metadata.count')


# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
top='1000'
loopNum=$((($recCount+$top-1)/$top))


# send some logging info to the console so we know what is happening
echo "START "$(date)", $recCount records, $top returned per call, $loopNum iterations needed."


# Loop and call the API endpoint changing the record start each iteration. NOTE: Each call will
# return results in a JSON format along with a metadata object. Returning data in a CSV format 
# will not include the metadata so there is no need to use the $metadata parameter to suppress it.
i=0
skip=0
while [ "$i" -lt $loopNum ]
do
    # Execute API call, skipping records we have already retrieved. NOTE: The curl content type
    #   has been changed. Now we expect csv text not json.
    results=$(curl -s -H 'Content-type: text/csv' "$baseUrl&\$metadata=off&\$format=csv&\$skip=$skip&\$top=$top")


    # Append results to file. NOTE: Quotes around the bash variable being echoed. If this is not
    #   done, record terminators (line feeds) will not be preserved. Each call will result in one
    #   very long line.
    echo "$results" >> "output.csv"
    
    i=$(( i + 1 ))       # increment the loop counter
    skip=$((i * $top))   # number of records to skip on next iteration


    echo "Iteration $i done"
done


# Each call will return data that INCLUDES the field headers. We need to remove these. The
#   following line uses sed (a stream editor program) to do this. The following command uses 
#   a regular expression to find exact matches to the header line and remove them. This can
#   also be done using awk, or by editing the file after the fact - open in a spreadsheet, 
#   sort, and delete the duplicate header lines. NOTE: The -i switch edits the file inline -
#   that is, the original file is permanently altered.
sed -i -r "1h;1!G;/^(.*)\n\1/d;P;D" output.csv


# Use wc command to count the lines in the file to make sure we got what we expected. It 
#   will be 1 line longer because of the field header.
echo "END "$(date)", $(wc -l < output.csv) records in file"

Python - Downloading a full data set with more than 1,000 records and saving the results to one JSON file.

#!/usr/bin/env python3
# Paging example using Python 3. Output in JSON.


import sys
import urllib.request
import json
import math
from datetime import datetime


# Base URL for this endpoint. Add filters, column selection, and sort order to this.
baseUrl = "https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?"


top = 1000      # number of records to get per call
skip = 0        # number of records to skip


# Return 1 record with your criteria to get total record count. Specifying only 1
#   column here to reduce amount of data returned. Need inlinecount to get record count. 
webUrl = urllib.request.urlopen(baseUrl + "$inlinecount=allpages&$select=id&$top=1")
result = webUrl.read()
jsonData = json.loads(result.decode())


# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
recCount = jsonData['metadata']['count']
loopNum = math.ceil(recCount / top)


# send some logging info to the console so we know what is happening
print("START " + str(datetime.now()) + ", " + str(recCount) + " records, " + str(top) + " returned per call, " + str(loopNum) + " iterations needed.")


# Initialize our file. Only doing this because of the type of file wanted. See the loop below.
#   The root json entity is usually the name of the dataset, but you can use any name.
outFile = open("output2.json", "w")
outFile.write('{"femawebdisasterdeclarations":[')


# Loop and call the API endpoint changing the record start each iteration. The metadata is being
#   suppressed as we no longer need it.
i = 0
while (i < loopNum):
    # By default data is returned as a JSON object, the data set name being the root element. Unless
    #   you extract records as you process, you will end up with 1 distinct JSON object for EVERY 
    #   call/iteration. An alternative is to return the data as JSONA (an array of json objects) with 
    #   no root element - just a bracket at the start and end. This is easier to manipulate.
    webUrl = urllib.request.urlopen(baseUrl + "$metadata=off&$format=jsona&$skip=" + str(skip) + "&$top=" + str(top))
    result = webUrl.read()
    
    # The data is already returned in a JSON format. There is no need to decode and load as a JSON object.
    #   If you want to begin working with and manipulating the JSON, import the json library and load with
    #   something like: jsonData = json.loads(result.decode())


    # Append results to file, trimming off first and last JSONA brackets, adding comma except for last call,
    #   AND root element terminating array bracket and brace to end unless on last call. The goal here is to 
    #   create a valid JSON file that contains ALL the records. This can be done differently.
    if (i == (loopNum - 1)):
        # on the last so terminate the single JSON object
        outFile.write(str(result[1:-1],'utf-8') + "]}")
    else:
        outFile.write(str(result[1:-1],'utf-8') + ",")


    # increment the loop counter and skip value
    i+=1
    skip = i * top


    print("Iteration " + str(i) + " done")


outFile.close()


# lets re-open the file and see if we got the number of records we expected
inFile = open("output2.json", "r")
my_data = json.load(inFile)
print("END " + str(datetime.now()) + ", " + str(len(my_data['femawebdisasterdeclarations'])) + " records in file")
inFile.close()

R - Downloading a full data set with more than 1,000 records and saving the results to one RDS file.

# Paging example in R. Receiving data in JSON, saving in RDS - a single R object.


require("httr")         # wrapper for curl package - may require installation


# This is a simple JSON parser library (may require installation), but since we are not 
#   really doing JSON manipulation to get the data, this is not needed.
#require("jsonlite") 


datalist = list()       # a list that will hold the results of each call


baseUrl <- "https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?"


# Determine record count. Specifying only 1 column here to reduce amount of data returned. 
#   Remember to add criteria/filter here (if you have any) to get an accurate count.
result <- GET(paste0(baseUrl,"$inlinecount=allpages&$top=1&$select=id"))
jsonData <- content(result)         # should automatically parse as JSON as that is mime type
recCount <- jsonData$metadata$count


# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
top <- 1000
loopNum <- ceiling(recCount / top)


# send some logging info to the console so we know what is happening
print(paste0("START ",Sys.time(),", ", recCount, " records, ", top, " returned per call, ", loopNum," iterations needed."),quote=FALSE)


# Loop and call the API endpoint changing the record start each iteration. Each call will
# return results in a JSON format. The metadata has been suppressed as we no longer need it.
skip <- 0
for(i in seq(from=0, to=loopNum-1, by=1)){
    # As above, if you have filters, specific fields, or are sorting, add that to the base URL 
    #   or make sure it gets concatenated here.
    result <- GET(paste0(baseUrl,"$metadata=off&$top=",top,"&$skip=",i * top))
    jsonData <- content(result)         # should automatically parse as JSON as that is mime type


    # Here we are adding the resulting JSON return to a list that can be turned into a combined
    #   dataframe later or saved. You may encounter memory limitations with very large datasets.
    #   For those, inserting into a database or saving chunks of data may be desired.
    datalist[[i+1]] <- jsonData


    print(paste0("Iteration ", i, " done"), quote=FALSE)
}



# binds many items in our list to one data frame
fullData <- dplyr::bind_rows(datalist)


# Save as one R object - probably more useful (and storage efficient) than CSV or JSON if doing
#   analysis within R.
saveRDS(fullData, file = "output.rds")


# open file just to verify that we got what we expect
my_data <- readRDS(file = "output.rds")
print(paste0("END ",Sys.time(), ", ", nrow(my_data), " records in file"))

Node.js / JavaScript - Downloading a full data set with more than 1,000 records and saving the results to a CSV file.

/* Paging example using Node.js and Javascript promises to make API calls to OpenFEMA via https requests.
 * The results of the https requests are saved to a CSV file called out.csv
 */


const https = require('https');
const fs = require('fs')


let csvFile = './out.csv'
var writeStream = fs.createWriteStream(csvFile, {flags:'a'});
let skip = "skip=0"
let metadataUrl = 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$inlinecount=allpages&$top=1'
let url = 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$format=csv&$top=1000&$' + skip
let totalDocs = 0
let firstApiCall = true
let csvHeader = ''
let metadataApiCall = true


// function returns a Promise
function getPromise(url) {
    return new Promise((resolve, reject) => {
        https.get(url, (response) => {
            let chunks_of_data = [];
            let arr = [];


            response.on('data', (fragments) => {
                // enter this block to get the total doc count using a call to the api that includes the metadata
                if (totalDocs === 0) {
                    arr = fragments.toString().split(",") // isolate count from metadata
                    totalDocs = parseInt(arr[2].slice(8), 10) // parse count into numerical value
                }
                // enter this block to write the csv header
                if (firstApiCall && !metadataApiCall) {
                    csvHeader = fragments.toString();
                    chunks_of_data.push(fragments);
                    firstApiCall = false
                }
                // prevents csv header from being written with every api request
                if (!firstApiCall && totalDocs > 0 && fragments.toString() !== csvHeader) {
                    chunks_of_data.push(fragments);
                }
            });


            response.on('end', () => {
                let response_body = Buffer.concat(chunks_of_data);
                resolve(response_body.toString());
                metadataApiCall = false
            });


            response.on('error', (error) => {
                reject(error);
            });
        });
    });
}



// async function to make http request
async function makeSynchronousRequest(url) {
    try {
        let http_promise = getPromise(url);
        let response_body = await http_promise;


        // holds response from server that is passed when Promise is resolved
        writeStream.write(response_body)
    }
    catch(error) {
        // Promise rejected
        console.log(error);
    }
}



// anonymous async function to execute some code synchronously after http request
(async function () {


    if (totalDocs === 0) {
        await makeSynchronousRequest(metadataUrl);
        console.log("Total Expected Documents: " + totalDocs)
    }


    writeStream.write(csvHeader)


    let skipCount = 0
    // wait to http request to finish
    do {
        await makeSynchronousRequest(url);
        // below code will be executed after http request is finished
        skipCount += 1000
        url = url.replace(skip, "skip=" + skipCount);
        skip = "skip=" + skipCount


    } while (skipCount < totalDocs)
    console.log("Finished writing to file")
    getTotalRows()
})();


/**
 * Calculates the number of rows in out.csv file.
 * This is done to make sure the number of rows in out.csv equals the number of expected rows.
 */
function getTotalRows(){
    var i;
    var numRows = 0;
    require('fs').createReadStream(csvFile)
        .on('data', function(chunk) {
            for (i=0; i < chunk.length; ++i)
                if (chunk[i] == 10) numRows++; // 10 is the ASCII code for a new line, which indicates a row
        })
        .on('end', function() {
            console.log("Total documents written to file ", numRows - 1);// we subtract 1 to account for the header
        });
}


.Net - Coming soon!

Other Common Examples to be Added Soon

  • Downloading full files
  • Periodic updates instead of full downloads
  • Checking for dataset data updates
  • Converting JSON to a different format
  • Working with different time formats
  • Using the metadata endpoints