
OpenFEMA Developer Resources

Welcome to the OpenFEMA Developer Resources page, devoted to providing additional development information regarding our Application Programming Interface (API) for use in your applications and mashups. The API is free of charge and does not currently require user registration. Please contact the OpenFEMA Team at openfema@fema.dhs.gov to suggest additional data sets and API features.

Please review the API Documentation for a list of commands that can be used with each endpoint. As OpenFEMA's main purpose is to act as a content delivery mechanism, each endpoint represents a data set. Therefore, the documentation does not outline each one; they all operate in the same manner. Metadata (content descriptions, update frequency, data dictionary, etc.) for each data set can be found on the individual data set pages. The Data Sets page provides a list of the available endpoints.

The Changelog identifies new, changed, and deprecated data sets, and describes new features of the API.

The API Specifics/Technical portion of the FAQ may be of particular interest to developers.

The Large Data Set Guide provides recommendations and techniques for working with OpenFEMA's large data files. Some code examples are included.

Following are examples, or recipes, of commonly performed actions, many expressed in different programming or scripting languages. We will continue to expand this section. If you have code examples you would like to see, please contact the OpenFEMA Team. We also welcome any code examples you would like to provide.

Accessing Data from API Endpoint

There are many ways to access data from the OpenFEMA API, such as using a programming language, a scripting language, or a built-in command. The following examples demonstrate how to get data from an OpenFEMA API endpoint. All of these examples return disaster summaries for Hurricane Isabel (disaster number 1491).

Note that not all of the data may be returned. By default, only 1,000 records are returned. If more data exists, it will be necessary to page through the data to capture it all. See the API Documentation for more information.

HTTP/URL – Paste into your browser's address bar.

https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber eq 1491

cURL – Saving returned data to a file. Note the %20 URL encoding used for spaces.

curl 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491' > output.txt

wget – Saving returned data to a file.

wget -O output.txt 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491'

Windows PowerShell 3.0 – Note that the site uses TLS 1.2, so the security protocol must be set first. The URI is single-quoted so PowerShell does not interpret $filter as a variable.

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest -Uri 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491' -OutFile c:\temp\output.txt
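Python – The same request can be sketched with the standard library alone; urllib.parse.quote performs the %20 encoding of the spaces in the $filter expression. The fetch helper and the commented root-element access below are illustrative, not an official client.

```python
import json
import urllib.parse
import urllib.request

# Build the endpoint URL; the $filter value must be URL-encoded (spaces become %20).
BASE = "https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries"
url = BASE + "?$filter=" + urllib.parse.quote("disasterNumber eq 1491")


def fetch(url: str) -> dict:
    """Call the endpoint and parse the JSON response."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))


print(url)
# summaries = fetch(url)["DisasterDeclarationsSummaries"]
```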

Paging Through Data

For performance reasons, only 1,000 records are returned per API endpoint call. If more than 1,000 records exist, it will be necessary to page through the data using the $skip and $inlinecount parameters to retrieve every record. The metadata header returned as part of the JSON response will only display the full record count if the $inlinecount parameter is used; otherwise, it will have a value of 0. Write a loop that continues making API calls, incrementing the $skip parameter each time, until the number of records retrieved equals the total record count. See the URI commands section of the OpenFEMA Documentation for additional information regarding these parameters.

Following are examples in various languages.

Bash - Downloading a full data set with more than 1,000 records and saving the results to one JSON file.

#!/bin/bash
# Paging example using bash. Output in JSON.


# Base URL for this endpoint with $inlinecount set to return the total record count. Add
#   filters, column selection, and sort order to the end of the baseUrl.
baseUrl='https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$inlinecount=allpages'


# Return 1 record with your criteria to get the total record count. Specifying only 1
#   column here to reduce the amount of data returned. The backslashes are needed before
#   the API parameters, otherwise bash will interpret them as variables. The -s switch
#   in the curl command suppresses curl's download status information.
result=$(curl -s -H "Content-Type: application/json" "$baseUrl&\$select=id&\$top=1")


# use jq (a json parser) to extract the count - not included in line above for clarity
recCount=$(echo "$result" | jq '.metadata.count')


# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
top='1000'
loopNum=$((($recCount+$top-1)/$top))


# send some logging info to the console so we know what is happening
echo "START "$(date)", $recCount records, $top returned per call, $loopNum iterations needed."


# Initialize our file. Only doing this because of the type of file wanted. See the loop below.
#   The root JSON entity is usually the name of the dataset, but you can use any name.
#   Note the single > so that reruns overwrite the file rather than appending to it.
echo '{"femawebdisasterdeclarations":[' > output.json


# Loop and call the API endpoint, changing the record start each iteration. NOTE: Each call will
# return the metadata object along with the results. This should be stripped off before appending
# to the final file, or use the $metadata parameter to suppress it.
i=0
skip=0
while [ "$i" -lt $loopNum ]
do
    # Execute API call, skipping records we have already retrieved, excluding the metadata header, in JSONA.
    # NOTE: By default data is returned as a JSON object, the data set name being the root element. Unless
    #   you extract records as you process, you will end up with 1 distinct JSON object for EVERY call/iteration.
    #   An alternative is to return the data as JSONA (an array of JSON objects) with no root element - just
    #   a bracket at the start and end. Again, one bracketed array will be returned for every call. Since I
    #   want 1 JSON array, not many, I have stripped off the closing bracket and added a comma. For the
    #   last iteration, do not add a comma and terminate the object with a bracket and brace. This certainly
    #   can be done differently; it just depends on what you are ultimately trying to accomplish.
    results=$(curl -s -H "Content-Type: application/json" "$baseUrl&\$metadata=off&\$format=jsona&\$skip=$skip&\$top=$top")


    # append results to file - the following line is just a simple append
    #echo $results >> "output.json"
    
    # Append results to file, trimming off the first and last JSONA brackets and adding a comma,
    #   except on the last call, where the closing array bracket and brace terminate the root
    #   element instead. The goal here is to create a valid JSON file that contains ALL the
    #   records. This can be done differently.
    if [ "$i" -eq "$(( $loopNum - 1 ))" ]; then
        # on the last so terminate the single JSON object
        echo "${results:1:${#results}-2}]}" >> output.json
    else
        echo "${results:1:${#results}-2}," >> output.json
    fi


    i=$(( i + 1 ))       # increment the loop counter
    skip=$((i * $top))   # number of records to skip on next iteration


    echo "Iteration $i done"
done
# use jq to count the JSON array elements to make sure we got what we expected
echo "END "$(date)", $(jq '.femawebdisasterdeclarations | length' output.json) records in file"

Bash - Downloading a full data set with more than 1,000 records and saving the results to one CSV file.

#!/bin/bash
# Paging example using bash. Output in CSV.


# Base URL for this endpoint with $inlinecount set to return the total record count. Add
#   filters, column selection, and sort order to the end of the baseUrl.
baseUrl='https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$inlinecount=allpages'


# Return 1 record with your criteria to get the total record count. Specifying only 1
#   column here to reduce the amount of data returned. The backslashes are needed before
#   the API parameters, otherwise bash will interpret them as variables. The -s switch
#   in the curl command suppresses curl's download status information.
result=$(curl -s -H "Content-Type: application/json" "$baseUrl&\$select=id&\$top=1")


# use jq (a json parser) to extract the count - not included in line above for clarity
recCount=$(echo "$result" | jq '.metadata.count')


# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
top='1000'
loopNum=$((($recCount+$top-1)/$top))


# send some logging info to the console so we know what is happening
echo "START "$(date)", $recCount records, $top returned per call, $loopNum iterations needed."


# Loop and call the API endpoint, changing the record start each iteration. NOTE: Each call will
# return results in JSON format along with a metadata object. Returning data in CSV format
# will not include the metadata, so the $metadata parameter below is not strictly needed.
i=0
skip=0
while [ "$i" -lt $loopNum ]
do
    # Execute API call, skipping records we have already retrieved. NOTE: The curl content type
    #   has been changed; now we expect CSV text, not JSON.
    results=$(curl -s -H 'Content-type: text/csv' "$baseUrl&\$metadata=off&\$format=csv&\$skip=$skip&\$top=$top")


    # Append results to file. NOTE the quotes around the bash variable being echoed. If they are
    #   omitted, record terminators (line feeds) will not be preserved, and each call will result
    #   in one very long line.
    echo "$results" >> "output.csv"
    
    i=$(( i + 1 ))       # increment the loop counter
    skip=$((i * $top))   # number of records to skip on next iteration


    echo "Iteration $i done"
done


# Each call returns data that INCLUDES the field headers, and we need to remove the repeats.
#   The following command uses sed (a stream editor) with a regular expression to find exact
#   matches of the header line and delete them. This can also be done using awk, or by
#   editing the file after the fact - open it in a spreadsheet, sort, and delete the
#   duplicate header lines. NOTE: The -i switch edits the file in place - that is, the
#   original file is permanently altered.
sed -i -r "1h;1!G;/^(.*)\n\1/d;P;D" output.csv


# Use the wc command to count the lines in the file to make sure we got what we expected.
#   The count will be 1 line higher because of the field header.
echo "END "$(date)", $(wc -l output.csv) records in file"
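The sed program above is terse; the header de-duplication it performs can be expressed more readably. A minimal Python sketch (the function name and sample rows are illustrative):

```python
def drop_repeated_headers(lines):
    """Keep the first line (the header) and drop later exact duplicates of it."""
    header = lines[0]
    return [header] + [ln for ln in lines[1:] if ln != header]


# Sample rows as produced by concatenating two CSV pages, each with its own header.
rows = ["id,name", "1,alpha", "2,beta", "id,name", "3,gamma"]
print(drop_repeated_headers(rows))
# → ['id,name', '1,alpha', '2,beta', '3,gamma']
```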

Python - Coming soon!
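In the meantime, here is a minimal, unofficial Python sketch of the paging pattern used in the Bash examples above, relying only on the standard library. The root element name (FemaWebDisasterDeclarations) and the behavior of $metadata=off are assumptions based on the examples in this document.

```python
import json
import math
import urllib.request

# Base endpoint with $inlinecount so the metadata header reports the total record count.
BASE = ("https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations"
        "?$inlinecount=allpages")
TOP = 1000  # maximum number of records the API returns per call


def pages_needed(record_count: int, top: int = TOP) -> int:
    """Number of API calls required to retrieve record_count records."""
    return math.ceil(record_count / top)


def fetch_json(url: str) -> dict:
    """Call an endpoint and parse the JSON response."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))


def download_all() -> list:
    """Page through the endpoint, collecting every record into one list."""
    # One cheap call (a single column, one record) to learn the total count.
    first = fetch_json(BASE + "&$select=id&$top=1")
    count = first["metadata"]["count"]
    records = []
    for page in range(pages_needed(count)):
        url = f"{BASE}&$metadata=off&$skip={page * TOP}&$top={TOP}"
        # Assumed response shape: with $metadata=off, the root object only.
        records.extend(fetch_json(url)["FemaWebDisasterDeclarations"])
    return records
```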

.Net - Coming soon!

JavaScript - Coming soon!

Other Common Examples to be Added Soon

  • Downloading full files
  • Retrieving data
  • Periodic updates instead of full downloads
  • Checking for dataset data updates
  • Converting JSON to a different format
  • Paging through data
  • Working with different time formats
  • Using the metadata endpoints