Updated Dreamwidth backup script
Mar. 24th, 2024 09:55 pm

I found a Python script that does a backup and had been patched to work with Dreamwidth, but the backup took the form of a huge pile of XML files. Thousands of them. I wanted something more flexible, so I forked the script and added an optional flag that writes everything (entries, comments, userpic info) to a single SQLite database.
https://github.com/GBirkel/ljdump
Folks on MacOS can just grab the contents of the repo and run the script. All the supporting modules should already be present in the OS. Windows people will need to install some version of Python.
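If you'd rather poke at the database directly, Python's built-in sqlite3 module is all you need. Here's a minimal sketch - the filename is just a placeholder for whatever database file the script creates for your journal, though the users_map table (id, name) is real, since the script queries it itself:

    import sqlite3

    # Placeholder path -- point this at the SQLite file the script created.
    connection = sqlite3.connect("my_journal_backup.db")
    cursor = connection.cursor()

    # List every user the backup knows about.
    cursor.execute("SELECT id, name FROM users_map")
    for user_id, name in cursor.fetchall():
        print(user_id, name)

    connection.close()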
For what it's worth, here's the old discussion forum for the first version of the script, released way back around 2009.
Update, 2024-03-25:
The script now also downloads and stores tag and mood information.
Update, 2024-03-26:
After synchronizing, the script now generates browseable HTML files of the journal, including individual pages for entries with their comment threads, and linked history pages showing 20 entries at a time.
Moods, music, tags, and custom icons are shown for the entries where applicable.
Currently the script uses the stylesheet for my personal journal (this one), but you can drop in the styles for yours and it should accept them. The structure of the HTML matches what Dreamwidth generates as closely as possible.
Update, 2024-03-28:
The script can also attempt to store local copies of the images embedded in journal entries. It organizes them by month in an images folder next to all the HTML. This feature is enabled with a "--cache_images" argument.
Every time you run it, it will attempt to cache 200 more images, going from oldest to newest. It will skip over images it's already tried and failed to fetch until 24 hours have gone by, then it will try those images once again.
The image links in your entries are left unchanged in the database. They're swapped for local links only in the generated HTML pages.
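The bookkeeping behind that is roughly the following - a simplified sketch of the idea, with made-up table and column names, not the script's actual code:

    import time

    FETCH_LIMIT = 200            # images attempted per run
    RETRY_AFTER = 24 * 60 * 60   # seconds to wait before retrying a failed fetch

    def images_to_fetch(cursor):
        """Pick up to FETCH_LIMIT uncached images, oldest first, skipping recent failures."""
        cutoff = time.time() - RETRY_AFTER
        cursor.execute(
            """SELECT id, url FROM cached_images
               WHERE cached = 0 AND (last_attempt IS NULL OR last_attempt < ?)
               ORDER BY entry_date ASC LIMIT ?""",
            (cutoff, FETCH_LIMIT))
        return cursor.fetchall()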
Update, 2024-04-02:
The script is now ported to Python 3, and tested on both Windows and MacOS. I've added new setup instructions for both that are a little easier to follow.
Update, 2024-04-30:
Added an option to stop the script from trying to cache images that failed to cache once already.
2024-06-26: Version 1.7.6
Attempt to fix music field parsing for some entries.
Fix for crash on missing security properties for some entries.
Image fetch timeout reduced from 5 seconds to 4 seconds.
2024-08-14: Version 1.7.7
Slightly improves unicode handling in tags and the music field.
2024-09-07: Version 1.7.8
Changes "stop at fifty" command line flag to a "max n" argument, with a default of 400, and applies it to comments as well as entries. This may help people who have thousands of comments complete their initial download. I recommend using the default at least once, then using a value of 1500 afterward until you're caught up.
2024-09-18: Version 1.7.9
Table of contents for the table of contents!
First version of an "uncached images" report to help people find broken image links in their journal.
no subject
Date: 2024-03-25 08:47 am (UTC)
Thank you!
<3
Date: 2024-05-11 10:51 pm (UTC)
Got around it by adding

    import time

with the other imports and changing line 823 (now 824) to

    date_or_none = time.mktime(date_first_seen.timetuple())

(fix stolen from here, dunno if it's a good fix tho)
EDIT: I also ended up making some more changes to download images hosted on Dreamwidth, also in their original resolution - patch file below in case it's handy.
Edit again: fix running ljdumptohtml.py alone, allow images to have attributes between <img and src="
Patch file
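(For the curious: matching src when it isn't the first attribute after <img takes a pattern along these lines. This is just an illustration of the idea, not the actual patch.)

    import re

    # Capture the src of an <img> tag even when other attributes come first.
    IMG_SRC = re.compile(r'<img\b[^>]*?\bsrc="([^"]+)"', re.IGNORECASE)

    html = '<img class="photo" loading="lazy" src="https://example.com/pic.jpg">'
    match = IMG_SRC.search(html)
    if match:
        print(match.group(1))   # https://example.com/pic.jpg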
Re: <3
Date: 2024-05-21 06:59 am (UTC)
https://github.com/GBirkel/ljdump/releases/tag/v1.7.5
KeyError: 'protected'
Date: 2024-06-01 12:24 pm (UTC)
I've just started attempting to back up my old LJ but have come up with this error, and I'm not sure what caused it. Maybe it's because I have Private posts? It's been a long time since I've used LJ so I've probably forgotten what privacy categories exist. Here's the error, it appears after adding 'moods':
Adding new mood with name: jealous
Adding new mood with name: nervous
Traceback (most recent call last):
File "C:\Users\Anthony\Downloads\ljdump-1.7.5\ljdump.py", line 488, in
ljdump(
File "C:\Users\Anthony\Downloads\ljdump-1.7.5\ljdump.py", line 348, in ljdump
'security_protected': t['security']['protected'],
KeyError: 'protected'
Re: KeyError: 'protected'
Date: 2024-06-23 08:15 am (UTC)
So it may not be about the entry being protected, it may be about the data coming from the server just being weirdly incomplete. Maybe some old entry before a schema change, or some weird side-effect of an import.
Thanks for the info! I'll see if I can cook up a patch for this later today.
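Whatever the final patch ends up being, the general shape of guarding against the missing key would be something like this (a sketch, not the shipped fix):

    # Example of an event dict with an incomplete 'security' block, which the
    # server apparently returns for some old or imported entries.
    t = {'security': {}}

    # Defensive read: treat a missing 'protected' flag as "not protected".
    security_protected = t.get('security', {}).get('protected', False)
    print(security_protected)   # False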
no subject
Date: 2024-06-28 05:47 pm (UTC)
This is the last bit of output:
Traceback (most recent call last):
File "./ljdump.py", line 501, in
ljdump(
File "./ljdump.py", line 430, in ljdump
ljdumptohtml(
File "/Users/brandie 1/Downloads/ljdump-1.7.6/ljdumptohtml.py", line 765, in ljdumptohtml
cached_image = get_or_create_cached_image_record(cur, verbose, url_to_cache, entry_date)
File "/Users/brandie 1/Downloads/ljdump-1.7.6/ljdumpsqlite.py", line 832, in get_or_create_cached_image_record
cur.execute("""
sqlite3.OperationalError: near "RETURNING": syntax error
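Worth checking: the RETURNING clause is only understood by SQLite 3.35 and newer, so this error usually means Python is linked against an older SQLite library. A quick way to see which version you have:

    import sqlite3

    # Version of the SQLite library this Python is linked against
    # (not the version of the sqlite3 module itself).
    print(sqlite3.sqlite_version)
    print(sqlite3.sqlite_version_info >= (3, 35, 0))   # True if RETURNING is supported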
no subject
Date: 2024-08-14 02:42 pm (UTC)
Traceback (most recent call last):
File "-\ljdump.py", line 501, in
ljdump(
File "-\ljdump.py", line 177, in ljdump
insert_or_update_event(cur, verbose, ev)
File "-\ljdumpsqlite.py", line 417, in insert_or_update_event
cur.execute("""
sqlite3.InterfaceError: Error binding parameter :props_taglist - probably unsupported type.
I have tried re-running the script multiple times but it always errors out and stops at the same entry with this error.
no subject
Date: 2024-08-14 05:07 pm (UTC)
How old is the entry?
Is there more than one tag assigned to that entry?
Anything unique about those tags?
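(If anyone wants to experiment in the meantime: the error means the value bound to :props_taglist isn't one of the types sqlite3 can store. Coercing it to text before the insert is the blunt workaround - a sketch, not the script's code:)

    # sqlite3 can only bind None, int, float, str, and bytes. Anything else,
    # like a list of tags, has to be flattened before execute().
    def to_bindable(value):
        if value is None or isinstance(value, (int, float, str, bytes)):
            return value
        if isinstance(value, (list, tuple)):
            return ", ".join(str(x) for x in value)
        return str(value)

    print(to_bindable(["one tag", "another tag"]))   # "one tag, another tag"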
no subject
Date: 2024-08-22 11:03 am (UTC)
Fetching journal comments for: falkner
*** Error fetching comment meta, possibly not community maintainer?
*** HTTP Error 504: Gateway Time-out
Fetching current users map from database
Traceback (most recent call last):
File "-\ljdump.py", line 501, in
ljdump(
File "-\ljdump.py", line 245, in ljdump
usermap = get_users_map(cur, verbose)
File "-\ljdumpsqlite.py", line 780, in get_users_map
cur.execute("SELECT id, name FROM users_map")
sqlite3.ProgrammingError: Cannot operate on a closed cursor.
Thank you + Q
Date: 2024-12-11 10:53 pm (UTC)
Second: the script skipped about 7% of entries. Is there a way to re-run that would get it to retry these non-pulled entries, rather than starting from the last pull / looking for new entries?
no subject
Date: 2025-01-04 09:55 pm (UTC)
Also: Intrigued: What is "cat tipping"?
no subject
Date: 2025-02-12 03:05 pm (UTC)
Thank you so much for this! I tried many things but couldn't make them work. Your script is very easy to understand and I got to make a backup of all my entries!
Images unable to cache?
Date: 2025-03-23 01:41 am (UTC)
Hello,
Thank you for creating such an amazing tool! I wanted to backup my dreamwidth journal locally, but I couldn't figure out the image cache setting.
It works when I double click ljdump.py, but the images aren't cached. When I open terminal and type in this code,
    ./ljdump.py --cache_images
the terminal just opens and closes a window, but nothing happens. I'm not really sure what I'm doing wrong...
no subject
Date: 2025-03-31 01:22 am (UTC)
Traceback (most recent call last):
File "./ljdump.py", line 508, in
ljdump(
File "./ljdump.py", line 91, in ljdump
ljsession = getljsession(journal_server, username, password)
File "./ljdump.py", line 55, in getljsession
r = urllib.request.urlopen(journal_server+"/interface/flat", data=data)
File "/usr/lib/python3.10/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.10/urllib/request.py", line 525, in open
response = meth(req, response)
File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response
response = self.parent.error(
File "/usr/lib/python3.10/urllib/request.py", line 563, in error
return self._call_chain(*args)
File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
I am using Linux Mint and new to Linux, so am totally willing to bet this is something totally obvious that I just don't know.
no subject
Date: 2025-03-31 05:18 am (UTC)
https://github.com/GBirkel/ljdump/blob/master/ljdump.config.sample
no subject
Date: 2025-04-16 10:04 am (UTC)
This error seems to occur if there are unicode characters in the display name (in my case it was "♪"):
[Running it on Windows this time around.] I got around it by temporarily removing the character in the name, but I'm reporting anyway in case anybody else comes across this issue.
no subject
Date: 2025-04-27 03:35 pm (UTC)
Second: Probably a dumb question, but: so I messed around a little with the html of the index/table of contents and saved the style sheet from my current dreamwidth journal in the folder -- https://wembley.dreamwidth.org/res/4147489/stylesheet?1745745106 -- but I can't get the darker blue left-hand sidebar to appear with the list of tags and such. I'm not very tech-savvy, I just know a teeny bit of HTML and I don't really understand CSS. Do you know what code I should add in to put the sidebar back?
Third, about the actual tool again: A really awesome person backed up my Livejournal with a different method and it worked really well, but I wanted to test out your version of LJDump just for fun and see how it worked on my old LJ, since it worked really well on my DW (though I had to do the max-1500 thing or it would get rate-limited). When I tried it on my Livejournal (wemblee.livejournal.com), this happened:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 1348, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1286, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1332, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1281, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1041, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 979, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1458, in connect
self.sock = self._context.wrap_socket(self.sock,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 517, in wrap_socket
return self.sslsocket_class._create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1075, in _create
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1346, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1002)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Applications/Downloader apps/Livejournal downloaders/ljdump-1.7.9/ljdump.py", line 508, in
ljdump(
File "/Applications/Downloader apps/Livejournal downloaders/ljdump-1.7.9/ljdump.py", line 91, in ljdump
ljsession = getljsession(journal_server, username, password)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/Downloader apps/Livejournal downloaders/ljdump-1.7.9/ljdump.py", line 55, in getljsession
r = urllib.request.urlopen(journal_server+"/interface/flat", data=data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 519, in open
response = self._open(req, data)
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 536, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 1391, in https_open
return self.do_open(http.client.HTTPSConnection, req,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 1351, in do_open
raise URLError(err)
urllib.error.URLError: urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1002)
Btw, this part:
urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1002)
had the "<" ">" brackets around it but when I kept those in this comment, DW didn't like it and said it was an unclosed HTML bracket, so just FYI.
Thank you so much for updating this program, it really is awesome of you. <3