
OpenFEMA Developer Resources

Welcome to the OpenFEMA Developer Resources page, devoted to providing additional development information regarding our Application Programming Interface (API) for use in your applications and mashups. The API is free of charge and does not currently require user registration. Please contact the OpenFEMA Team at openfema@fema.dhs.gov to suggest additional data sets or API features.

Please review the API Documentation for a list of commands that can be used with each endpoint. Because OpenFEMA's main purpose is to act as a content delivery mechanism, each endpoint represents a data set; the documentation does not describe each one individually, as they all operate in the same manner. Metadata (content descriptions, update frequency, data dictionary, etc.) for each data set can be found on the individual data set pages. The Data Sets page provides a list of the available endpoints.

The Changelog identifies new, changing, and deprecated datasets, and describes new features to the API.

The API Specifics/Technical portion of the FAQ may be of particular interest to developers.

The Large Data Set Guide provides recommendations and techniques for working with OpenFEMA's large data files. Some code examples are included.

Following are examples, or recipes, of commonly performed actions, many expressed in different programming or scripting languages. We will continue to expand this section. If you have code examples you would like to see, please contact the OpenFEMA Team. We also welcome any code examples you would like to provide.

OpenFEMA has a presence on GitHub! Please visit github.com/FEMA

Accessing Data from API Endpoint

There are many ways to access data from the OpenFEMA API, such as using a programming language, scripting language, or a built-in command. The following examples demonstrate how to get data from an OpenFEMA API endpoint. All of these examples return disaster declaration summaries for Hurricane Isabel (disaster number 1491).

Note that not all of the data may be returned. By default, only 1,000 records are returned per call. If more data exists, it will be necessary to page through the data to capture it all. See the API Documentation for more information.
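
For example, records beyond the first 1,000 can be requested by adding the $skip and $top paging parameters to the query (the values below are illustrative):

https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber eq 1491&$skip=1000&$top=1000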

HTTP/URL – Paste the following into your browser's address bar.

https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber eq 1491

cURL – Saving returned data to a file. Note the URL %20 encoding used for spaces.

curl 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491' >> output.txt

wget – Saving returned data to a file.

wget -O output.txt 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491'

Windows PowerShell 3.0 – Note that the site requires TLS 1.2, so the security protocol must be set first.

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest -Uri 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$filter=disasterNumber%20eq%201491' -OutFile c:\temp\output.txt
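
Python 3 – A minimal sketch using only the Python standard library to perform the same query and save the response; the output file name is arbitrary.

#!/usr/bin/env python3
# Fetch disaster declaration summaries for disaster 1491 and save the JSON response to a file.
import urllib.parse
import urllib.request

# URL-encode the filter expression so spaces become %20, as in the cURL example above.
url = ("https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries"
       "?$filter=" + urllib.parse.quote("disasterNumber eq 1491"))

with urllib.request.urlopen(url) as response:
    data = response.read()

with open("output.json", "wb") as outFile:
    outFile.write(data)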

Paging Through Data

For performance reasons, only 1,000 records are returned per API endpoint call. If more than 1,000 records exist, it is necessary to page through the data using the $skip and $inlinecount parameters to retrieve every record. The metadata header returned as part of the JSON response will only display the full record count if the $inlinecount parameter is used; otherwise, it will have a value of 0. A loop is then written to continue making API calls, incrementing the $skip parameter each iteration, until the number of records retrieved equals the total record count. See the URI commands section of the OpenFEMA Documentation for additional information regarding these parameters.

NOTE: Although a few of the examples below download CSV files, it is recommended that results be downloaded in JSON format. This format is native to the OpenFEMA data store, so the data does not need to be converted by the server, improving download performance. Further, when using CSV there is no guarantee that the record order will be maintained (this is a very unlikely event, however, and one that we have been unable to reproduce in tests).

Following are examples in various languages.

Bash - Downloading a full data set with more than 1,000 records and saving the results to one JSON file.

#!/bin/bash
# Paging example using bash. Output in JSON.

baseUrl='https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$inlinecount=allpages'

# Return 1 record with your criteria to get total record count. Specifying only 1
#   column here to reduce amount of data returned. The backslashes are needed before
#   the API parameters otherwise bash will interpret them as variables. The -s switch
#   in the curl command will suppress its download status information.
result=$(curl -s -H "Content-Type: application/json" "$baseUrl&\$select=id&\$top=1")

# use jq (a json parser) to extract the count - not included in line above for clarity
recCount=$(echo "$result" | jq '.metadata.count')

# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
top='1000'
loopNum=$((($recCount+$top-1)/$top))

# send some logging info to the console so we know what is happening
echo "START "$(date)", $recCount records, $top returned per call, $loopNum iterations needed."

# Initialize our file. Only doing this because of the type of file wanted. See the loop below.
#   The root json entity is usually the name of the dataset, but you can use any name.
echo '{"femawebdisasterdeclarations":[' >> output.json

# Loop and call the API endpoint changing the record start each iteration. NOTE: Each call will
# return the metadata object along with the results. This should be stripped off before appending
# to the final file, or use the $metadata parameter to suppress it.
i=0
skip=0
while [ "$i" -lt $loopNum ]
do
    # Execute API call, skipping records we have already retrieved, excluding metadata header, in jsona.
    # NOTE: By default data is returned as a JSON object, the data set name being the root element. Unless
    #   you extract records as you process, you will end up with 1 distinct JSON object for EVERY call/iteration.
    #   An alternative is to return the data as JSONA (an array of json objects) with no root element - just
    #   a bracket at the start and end. Again, one bracketed array will be returned for every call. Since I
    #   want 1 JSON array, not many, I have stripped off the closing bracket and added a comma. For the
    #   last iteration, do not add a comma and terminate the object with a bracket and brace. This certainly
    #   can be done differently, it just depends on what you are ultimately trying to accomplish.
    results=$(curl -s -H "Content-Type: application/json" "$baseUrl&\$metadata=off&\$format=jsona&\$skip=$skip&\$top=$top")

    # append results to file - the following line is just a simple append
    #echo $results >> "output.json"
    
    # Append results to file, trimming off first and last JSONA brackets, adding comma except for last call,
    #   AND root element terminating array bracket and brace to end unless on last call. The goal here is to 
    #   create a valid JSON file that contains ALL the records. This can be done differently.
    if [ "$i" -eq "$(( $loopNum - 1 ))" ]; then
        # on the last so terminate the single JSON object
        echo "${results:1:${#results}-2}]}" >> output.json
    else
        echo "${results:1:${#results}-2}," >> output.json
    fi

    i=$(( i + 1 ))       # increment the loop counter
    skip=$((i * $top))   # number of records to skip on next iteration

    echo "Iteration $i done"
done
# use jq to count the JSON array elements to make sure we got what we expected
echo "END "$(date)", $(jq '.femawebdisasterdeclarations | length' output.json) records in file"

Bash - Downloading a full data set with more than 1,000 records and saving the results to one CSV file.

#!/bin/bash
# Paging example using bash. Output in CSV.

# Base URL for this endpoint with $inlinecount set to return total record count. Add 
#   filters, column selection, and sort order to the end of the baseURL
baseUrl='https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?$inlinecount=allpages'

# Return 1 record with your criteria to get total record count. Specifying only 1
#   column here to reduce amount of data returned. The backslashes are needed before
#   the API parameters otherwise bash will interpret them as variables. The -s switch
#   in the curl command will suppress its download status information.
result=$(curl -s -H "Content-Type: application/json" "$baseUrl&\$select=id&\$top=1")

# use jq (a json parser) to extract the count - not included in line above for clarity
recCount=$(echo "$result" | jq '.metadata.count')

# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
top='1000'
loopNum=$((($recCount+$top-1)/$top))

# send some logging info to the console so we know what is happening
echo "START "$(date)", $recCount records, $top returned per call, $loopNum iterations needed."

# Loop and call the API endpoint changing the record start each iteration. NOTE: Each call will
# return results in a JSON format along with a metadata object. Returning data in a CSV format 
# will not include the metadata so there is no need to use the $metadata parameter to suppress it.
i=0
skip=0
while [ "$i" -lt $loopNum ]
do
    # Execute API call, skipping records we have already retrieved. NOTE: The curl content type
    #   has been changed. Now we expect csv text not json.
    results=$(curl -s -H 'Content-type: text/csv' "$baseUrl&\$metadata=off&\$format=csv&\$skip=$skip&\$top=$top")

    # Append results to file. NOTE: Quotes around the bash variable being echoed. If this is not
    #   done, record terminators (line feeds) will not be preserved. Each call will result in one
    #   very long line.
    echo "$results" >> "output.csv"
    
    i=$(( i + 1 ))       # increment the loop counter
    skip=$((i * $top))   # number of records to skip on next iteration

    echo "Iteration $i done"
done

# Each call will return data that INCLUDES the field headers. We need to remove these. The
#   following line uses sed (a stream editor program) to do this. The following command uses 
#   a regular expression to find exact matches to the header line and remove them. This can
#   also be done using awk, or by editing the file after the fact - open in a spreadsheet, 
#   sort, and delete the duplicate header lines. NOTE: The -i switch edits the file inline -
#   that is, the original file is permanently altered.
sed -i -r "1h;1!G;/^(.*)\n\1/d;P;D" output.csv

# Use wc command to count the lines in the file to make sure we got what we expected. It 
#   will be 1 line longer because of the field header.
echo "END "$(date)", $(wc -l output.csv) records in file"

Python - Downloading a full data set with more than 1,000 records and saving the results to one JSON file.

#!/usr/bin/env python3
# Paging example using Python 3. Output in JSON.

import sys
import urllib.request
import json
import math
from datetime import datetime

# Base URL for this endpoint. Add filters, column selection, and sort order to this.
baseUrl = "https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?"

top = 1000      # number of records to get per call
skip = 0        # number of records to skip

# Return 1 record with your criteria to get total record count. Specifying only 1
#   column here to reduce amount of data returned. Need inlinecount to get record count. 
webUrl = urllib.request.urlopen(baseUrl + "$inlinecount=allpages&$select=id&$top=1")
result = webUrl.read()
jsonData = json.loads(result.decode())

# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
recCount = jsonData['metadata']['count']
loopNum = math.ceil(recCount / top)

# send some logging info to the console so we know what is happening
print("START " + str(datetime.now()) + ", " + str(recCount) + " records, " + str(top) + " returned per call, " + str(loopNum) + " iterations needed.")

# Initialize our file. Only doing this because of the type of file wanted. See the loop below.
#   The root json entity is usually the name of the dataset, but you can use any name.
outFile = open("output2.json", "w")
outFile.write('{"femawebdisasterdeclarations":[')

# Loop and call the API endpoint changing the record start each iteration. The metadata is being
#   suppressed as we no longer need it.
i = 0
while (i < loopNum):
    # By default data is returned as a JSON object, the data set name being the root element. Unless
    #   you extract records as you process, you will end up with 1 distinct JSON object for EVERY 
    #   call/iteration. An alternative is to return the data as JSONA (an array of json objects) with 
    #   no root element - just a bracket at the start and end. This is easier to manipulate.
    webUrl = urllib.request.urlopen(baseUrl + "&$metadata=off&$format=jsona&$skip=" + str(skip) + "&$top=" + str(top))
    result = webUrl.read()
    
    # The data is already returned in a JSON format. There is no need to decode and load as a JSON object.
    #   If you want to begin working with and manipulating the JSON, import the json library and load with
    #   something like: jsonData = json.loads(result.decode())

    # Append results to file, trimming off first and last JSONA brackets, adding comma except for last call,
    #   AND root element terminating array bracket and brace to end unless on last call. The goal here is to 
    #   create a valid JSON file that contains ALL the records. This can be done differently.
    if (i == (loopNum - 1)):
        # on the last so terminate the single JSON object
        outFile.write(str(result[1:-1],'utf-8') + "]}")
    else:
        outFile.write(str(result[1:-1],'utf-8') + ",")

    # increment the loop counter and skip value
    i+=1
    skip = i * top

    print("Iteration " + str(i) + " done")

outFile.close()

# lets re-open the file and see if we got the number of records we expected
inFile = open("output2.json", "r")
my_data = json.load(inFile)
print("END " + str(datetime.now()) + ", " + str(len(my_data['femawebdisasterdeclarations'])) + " records in file")
inFile.close()

R - Downloading a full data set with more than 1,000 records and saving the results to one RDS file.

# Paging example in R. Receiving data in JSON, saving in RDS - a single R object.

require("httr")         # wrapper for curl package - may require installation

# This is a simple JSON parser library (may require installation), but since we are not 
#   really doing JSON manipulation to get the data, this is not needed.
#require("jsonlite") 

datalist = list()       # a list that will hold the results of each call

baseUrl <- "https://www.fema.gov/api/open/v1/FemaWebDisasterDeclarations?"

# Determine record count. Specifying only 1 column here to reduce amount of data returned. 
#   Remember to add criteria/filter here (if you have any) to get an accurate count.
result <- GET(paste0(baseUrl,"$inlinecount=allpages&$top=1&$select=id"))
jsonData <- content(result)         # should automatically parse as JSON as that is mime type
recCount <- jsonData$metadata$count

# calculate the number of calls we will need to get all of our data (using the maximum of 1000)
top <- 1000
loopNum <- ceiling(recCount / top)

# send some logging info to the console so we know what is happening
print(paste0("START ",Sys.time(),", ", recCount, " records, ", top, " returned per call, ", loopNum," iterations needed."),quote=FALSE)

# Loop and call the API endpoint changing the record start each iteration. Each call will
# return results in a JSON format. The metadata has been suppressed as we no longer need it.
skip <- 0
for(i in seq(from=0, to=loopNum-1, by=1)){
    # As above, if you have filters, specific fields, or are sorting, add that to the base URL 
    #   or make sure it gets concatenated here.
    result <- GET(paste0(baseUrl,"$metadata=off&$top=",top,"&$skip=",i * top))
    jsonData <- content(result)         # should automatically parse as JSON as that is mime type

    # Here we are adding the resulting JSON return to a list that can be turned into a combined
    #   dataframe later or saved. You may encounter memory limitations with very large datasets.
    #   For those, inserting into a database or saving chunks of data may be desired.
    datalist[[i+1]] <- jsonData

    print(paste0("Iteration ", i, " done)"), quote=FALSE)
}

# binds many items in our list to one data frame (dplyr may require installation)
fullData <- dplyr::bind_rows(datalist)

# Save as one R object - probably more useful (and storage efficient) than CSV or JSON if doing
#   analysis within R.
saveRDS(fullData, file = "output.rds")

# open file just to verify that we got what we expect
my_data <- readRDS(file = "output.rds")
print(paste0("END ",Sys.time(), ", ", nrow(my_data), " records in file"))

Node.js/JavaScript - Downloading a full data set with more than 1,000 records and saving the results to a CSV file.

/* Paging example using Node.js and Javascript promises to make API calls to OpenFEMA via https requests.
 * The results of the https requests are saved to a CSV file called out.csv
 */

const https = require('https');
const fs = require('fs')

let csvFile = './out.csv'
var writeStream = fs.createWriteStream(csvFile, {flags:'a'});
let skip = "skip=0"
let metadataUrl = 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$inlinecount=allpages&$top=1'
let url = 'https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries?$format=csv&$top=1000&$' + skip
let totalDocs = 0
let firstApiCall = true
let csvHeader = ''
let metadataApiCall = true

// function returns a Promise
function getPromise(url) {
    return new Promise((resolve, reject) => {
        https.get(url, (response) => {
            let chunks_of_data = [];
            let arr = [];

            response.on('data', (fragments) => {
                // enter this block to get the total doc count using a call to the api that includes the metadata
                if (totalDocs === 0) {
                    arr = fragments.toString().split(",") // isolate count from metadata
                    totalDocs = parseInt(arr[2].slice(8), 10) // parse count into numerical value
                }
                // enter this block to write the csv header
                if (firstApiCall && !metadataApiCall) {
                    csvHeader = fragments.toString();
                    chunks_of_data.push(fragments);
                    firstApiCall = false
                }
                // prevents csv header from being written with every api request
                if (!firstApiCall && totalDocs > 0 && fragments.toString() !== csvHeader) {
                    chunks_of_data.push(fragments);
                }
            });

            response.on('end', () => {
                let response_body = Buffer.concat(chunks_of_data);
                resolve(response_body.toString());
                metadataApiCall = false
            });

            response.on('error', (error) => {
                reject(error);
            });
        });
    });
}

// async function to make http request
async function makeSynchronousRequest(url) {
    try {
        let http_promise = getPromise(url);
        let response_body = await http_promise;

        // holds response from server that is passed when Promise is resolved
        writeStream.write(response_body)
    }
    catch(error) {
        // Promise rejected
        console.log(error);
    }
}

// anonymous async function to execute some code synchronously after http request
(async function () {

    if (totalDocs === 0) {
        await makeSynchronousRequest(metadataUrl);
        console.log("Total Expected Documents: " + totalDocs)
    }

    writeStream.write(csvHeader)

    let skipCount = 0
    // wait to http request to finish
    do {
        await makeSynchronousRequest(url);
        // below code will be executed after http request is finished
        skipCount += 1000
        url = url.replace(skip, "skip=" + skipCount);
        skip = "skip=" + skipCount

    } while (skipCount < totalDocs)
    console.log("Finished writing to file")
    getTotalRows()
})();

/**
 * Calculates the number of rows in out.csv file.
 * This is done to make sure the number of rows in out.csv equals the number of expected rows.
 */
function getTotalRows(){
    var i;
    var numRows = 0;
    require('fs').createReadStream(csvFile)
        .on('data', function(chunk) {
            for (i=0; i < chunk.length; ++i)
                if (chunk[i] == 10) numRows++; // 10 is the ASCII character for a new line, which indicates a row
        })
        .on('end', function() {
            console.log("Total documents written to file ", numRows - 1);// we subtract 1 to account for the header
        });
}

.NET - Coming soon!

Java - Coming soon!

Other Common Code Examples to be Added Soon

  • Downloading full files
  • Periodic updates instead of full downloads
  • Checking for dataset data updates
  • Converting JSON to a different format
  • Working with different time formats
  • Using the metadata endpoints

IPAWS Archived Alerts Query Examples

The Integrated Public Alert and Warning System (IPAWS) Archived Alerts data set is unique among OpenFEMA data sets in that the information is hierarchical in nature. Performing searches through the API can be challenging, and the ability to filter, search, and sort IPAWS data is limited. In most cases it will be necessary to first download a subset based on a filter that limits the data to a region or date range, and then post-process it offline with external tools.

OpenFEMA generally uses utilities and tools built into the Linux operating system. For example, once data has been downloaded, a utility called jq can be used to extract and manipulate the JSON, and even to export it to a CSV file. This must be done with care, however, because the hierarchical nature of CAP messages can introduce duplicate records in the results.

Basic Filtering

# By date
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$filter=sent%20eq%20%272020-03-20%27
 
# By date range
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$filter=sent%20gt%20%272020-03-20%27%20and%20sent%20lt%20%272020-03-21%27
 
# Search by event code – Child Abduction Emergency
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$filter=contains(info/eventCode,%27{%22SAME%22:%20%22CAE%22}%27)&$top=10&$orderby=sent%20desc
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$filter=contains(info/eventCode,%27{%22SAME%22:%20%22CAE%22}%27)%20and%20contains(info/area/geocode,%27{%22SAME%22:%22051059%22}%27)&$top=1&$orderby=sent%20desc
 
# Search by event code – Silver Alerts for 11/09/2019
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$filter=(contains(info/eventCode,%27{%22SAME%22:%20%22ADR%22}%27)%20and%20(sent%20ge%20%272019-11-09T00:00:00.000Z%27%20and%20sent%20lt%20%272019-11-10T00:00:00.000Z%27))&$orderby=sent%20desc
 
# Original CAP message only, by cogid
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$filter=cogId%20eq%20200032&$select=originalMessage

Retrieve Data by State

Selecting by state involves finding the state FIPS prefix and using a "startswith" operator on a hierarchical element within the structure.

# IPAWS extract from OpenFEMA for California alerts between 1/1/2020 and 10/8/2020
 
# Get IPAWS data from 01/01/2020 to 10/08/2020 3:00pm EST (IPAWS data on OpenFEMA has a 24 hour lag)
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$top=0&$filter=sent%20ge%20%272020-01-01%27%20and%20startswith(info/area/geocode/SAME,%27006%27)&$filename=ipaws_ca_cy2020.json
 
# verified count of the records (9,753)
jq '.IpawsArchivedAlerts | length' ipaws_ca_cy2020.json
 
# extracting some data as a csv (no headers) - the -r parameter is important as it prevents duplicate double quotes
jq -r '.IpawsArchivedAlerts[] | {cogId, identifier, sent, msgType, sender, eventCode: .info[].eventCode[].SAME, event: .info[].event, headline: .info[].headline, areaDesc: .info[].area[].areaDesc} | map(.) | @csv' ipaws_ca_cy2020.json >> ipaws_ca_cy2020.csv
 
# The file called ipaws_ca_cy2020.json contains the OpenFEMA extract of the IPAWS data. This extract contains
#   the raw CAP messages.
 
# NOTE: IPAWS CAP messages have a hierarchical structure - there are many parent child relationships. Flattening
#   the data into a CSV file as done above will result in duplicate records. A better approach is to specifically
#   extract desired information from the CAP messages themselves.

Retrieving Data by Event Type (and County)

The example below tries to identify events associated with flooding. The following event codes are for alerts that are associated with flooding but do not guarantee it:

  • SVR - Severe Thunderstorm Warning
  • FFW - Flash Flood Warning
  • FLW - Flood Warning
  • FLS - Flood Statement
  • HLS - Hurricane Statement
  • HUW - Hurricane Warning

# The first 1,000 (out of 3,345) IPAWS records for Lycoming County, PA (FIPS code 042081)
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$filter=contains(info/area/geocode,%27{%22SAME%22:%22042081%22}%27)
 
# The first 1,000 (out of 3,050) IPAWS records for Clinton County, PA (FIPS code 042035)
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$filter=contains(info/area/geocode,%27{%22SAME%22:%22042035%22}%27)
 
# IPAWS records for Lycoming County, PA, Severe Thunderstorm Warning (181 records)
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$filter=(contains(info/area/geocode,%27{%22SAME%22:%22042081%22}%27))%20and%20(contains(info/eventCode,%27{%22SAME%22:%20%22SVR%22}%27))
 
# IPAWS records for Clinton County, PA, Flood Warning (32 records)
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$filter=(contains(info/area/geocode,%27{%22SAME%22:%22042035%22}%27))%20and%20(contains(info/eventCode,%27{%22SAME%22:%20%22FLW%22}%27))

Execute a Geospatial Query

The entity/field/object to be searched is passed along with a bounding polygon or a point. The polygon syntax must follow the format shown in the example; replace the coordinates with your own polygon coordinates in WKT (Well-Known Text) format.

# find alerts falling within the defined polygon 
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$filter=geo.intersects(searchGeometry, geography 'POLYGON((34.38 -86.65,34.2 -86.72,34.31 -86.99,34.4 -86.94,34.38 -86.65))')

Retrieving COVID-19 Data

Currently, the IPAWS Historical Archive does not permit free-form text searches within the alert description or title fields, making it difficult to search on the term "covid". It is possible to filter on eventCode; however, alert issuers may not have tagged COVID-related alerts with the same code. Most appear to have been issued with the CEM (Civil Emergency Message) event code, and some appear under the SPW (Shelter in Place Warning) event code. There may be non-COVID civil emergency and shelter-in-place events in this list, and other COVID-related alerts may exist that are not associated with these event codes. The following examples pull alerts by event code. The resulting data can be further refined with post-processing, as shown after the queries below.

# The following query pulls messages with the CEM event type code from 01/01/2020:
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$filter=sent%20gt%20%272020-01-01%27%20and%20contains(info/eventCode,%27{%22SAME%22:%20%22CEM%22}%27)
 
# This will return SPW event codes:
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$filter=sent%20gt%20%272020-01-01%27%20and%20contains(info/eventCode,%27{%22SAME%22:%20%22SPW%22}%27)
 
# Event code searches can be combined, as in:
https://www.fema.gov/api/open/v1/IpawsArchivedAlerts?$inlinecount=allpages&$filter=sent%20gt%20%272020-01-01%27%20and%20(contains(info/eventCode,%27{%22SAME%22:%20%22CEM%22}%27)%20or%20contains(info/eventCode,%27{%22SAME%22:%20%22SPW%22}%27))
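
Once the results have been saved to a file (for example, by appending $filename=cem_spw.json to one of the queries above, as in the California example), the data can be narrowed further offline. The following is a minimal jq sketch; it assumes the default JSON root element IpawsArchivedAlerts and that the searchable text appears in each info object's description field, which can vary by message.

# Count alerts whose description mentions "covid" (case-insensitive). The input file
#   name cem_spw.json is illustrative.
jq '[.IpawsArchivedAlerts[] | select(any(.info[]?; (.description // "") | test("covid"; "i")))] | length' cem_spw.json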