How To Download All Blog Images From Webflow CMS Automatically

If you're going through a website redesign and transitioning from one visual style to another, you may want to download all of your blog's images so you can send them to a graphic designer.
Webflow currently doesn't support downloading all images from your CMS, but there is a pretty straightforward way to do it.
1. Export Your CMS Entries
When viewing your CMS collection (e.g. Blog Posts) in the Webflow CMS view, you'll have an 'Export' button in the top right. Click that to download all your CMS entries as a CSV file.
2. Copy All Image URLs Into Text File
From your CSV, copy all the image URLs and paste them into a plain text (.txt) file. You don't need commas between them; just make sure each image URL is on its own line.
3. Automatically Grab All Images
Now that you have your text file of image URLs, you can use an automation tool (like Automator on Mac or PowerShell on Windows) to go through the file and download each image one by one.
Download Via Automator On Mac
If you're on a Mac:
- Open Automator and create a new Workflow
- Search for the 'Run Shell Script' action and add it to your workflow
- Paste the following code into the action (this assumes you have a file named blog-imgs.txt on your Desktop):
mkdir -p ~/Downloads/BlogDownloads
cd ~/Downloads/BlogDownloads

inputFile="$HOME/Desktop/blog-imgs.txt"

# Read each line (URL) and download it
while IFS= read -r url; do
  # Strip any Windows-style carriage returns left over from the CSV export
  cleanUrl=$(echo "$url" | tr -d '\r')
  if [[ "$cleanUrl" =~ ^https?:// ]]; then
    filename=$(basename "$cleanUrl")
    echo "Downloading $cleanUrl as $filename"
    curl -L "$cleanUrl" -o "$filename"
  else
    echo "Skipping invalid line: $cleanUrl"
  fi
done < "$inputFile"
- Click 'Run' in the top right of Automator
Your images should then download over the next few minutes. If any fail, the script logs a message for them in Automator's results area, so you can see which URLs were skipped.
Download Via PowerShell On Windows
If you're on Windows:
- Open PowerShell
- Create a folder where the image downloads will go, e.g.
C:\Users\<YourName>\Downloads\ImageDownloads
- Copy and paste this script into PowerShell:
# Set the input file path and output directory
$inputFile = "$env:USERPROFILE\Desktop\blog-imgs.txt"
$outputDir = "$env:USERPROFILE\Downloads\ImageDownloads"

# Create the output directory if it doesn't exist
if (-not (Test-Path -Path $outputDir)) {
    New-Item -ItemType Directory -Path $outputDir | Out-Null
}

# Read each line (URL) and download
Get-Content $inputFile | ForEach-Object {
    $url = $_.Trim()
    if ($url -match '^https?://') {
        $filename = [System.IO.Path]::GetFileName($url)
        $destination = Join-Path $outputDir $filename
        Write-Host "Downloading $url -> $filename"
        try {
            Invoke-WebRequest -Uri $url -OutFile $destination -UseBasicParsing
        }
        catch {
            Write-Warning "❌ Failed to download: $url"
        }
    }
    else {
        Write-Host "Skipping invalid line: $url"
    }
}
- Run the script and your images will download one by one over the next few minutes.
How These Scripts Work
Both scripts work by reading a list of image URLs from a text file, validating each URL, and downloading the image to a specified folder using its original filename.
They first ensure the download directory exists, then loop through each line in the file, cleaning up any extra characters like carriage returns.
Each URL is checked to confirm it begins with http or https, and the filename is extracted from the URL's path.
The image is then downloaded using curl on macOS or Invoke-WebRequest on Windows, saving it with the extracted filename.
Both scripts handle invalid lines with logging to help you understand which images might not have been downloaded successfully.
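One caveat: because both scripts save each image under its original filename, two different URLs that happen to end in the same basename (say, two posts each with their own hero.png) will silently overwrite each other. Here's a small helper you could splice into the macOS loop to avoid that; it's a sketch, not part of the original scripts:

```shell
# unique_name: given a desired filename, return one that doesn't clash
# with an existing file in the current directory
# (image.png -> image-1.png -> image-2.png, ...).
unique_name() {
  name="$1"
  base="${name%.*}"
  ext="${name##*.}"
  n=1
  while [ -e "$name" ]; do
    name="${base}-${n}.${ext}"
    n=$((n+1))
  done
  printf '%s\n' "$name"
}

# In the download loop you would then use:
#   filename=$(unique_name "$(basename "$cleanUrl")")
```

The same idea carries over to the PowerShell version by checking Test-Path on the destination before calling Invoke-WebRequest.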