Updated Dreamwidth backup script
Mar. 24th, 2024 09:55 pm

I found a Python script that does a backup and had been patched to work with Dreamwidth, but the backup took the form of a huge pile of XML files. Thousands of them. I wanted something more flexible, so I forked the script and added an optional flag that writes everything (entries, comments, userpic info) to a single SQLite database.
https://github.com/GBirkel/ljdump
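If you want to poke at the result directly, it's a plain SQLite file, so a few lines of Python will show you what's inside. A minimal sketch (the filename here is an assumption; the tables are whatever the script created):
import sqlite3

# Open the backup database (filename is an assumption; use whatever
# file the script wrote into your journal folder).
conn = sqlite3.connect("ljdump.db")

# List the tables the script created, whatever they happen to be named.
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)

conn.close()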
Folks on macOS can just grab the contents of the repo and run the script. All the supporting modules should already be present in the OS. Windows people will need to install some version of Python.
For what it's worth, here's the old discussion forum for the first version of the script, released way back around 2009.
Update, 2024-03-25:
The script now also downloads and stores tag and mood information.
Update, 2024-03-26:
After synchronizing, the script now generates browseable HTML files of the journal, including individual pages for each entry with its comment thread, and linked history pages showing 20 entries at a time.
Moods, music, tags, and custom icons are shown for the entries where applicable.
Currently the script uses the stylesheet for my personal journal (this one), but you can drop in the styles for yours and it should accept them. The structure of the HTML is rendered as closely as possible to what Dreamwidth makes.
Update, 2024-03-28:
The script can also attempt to store local copies of the images embedded in journal entries. It organizes them by month in an images folder next to all the HTML. This feature is enabled with a "--cache_images" argument.
Every time you run it, it will attempt to cache 200 more images, going from oldest to newest. It will skip over images it's already tried and failed to fetch, until 24 hours have gone by, then it will try those images once again.
The image links in your entries are left unchanged in the database. They're swapped for local links only in the generated HTML pages.
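A typical run with caching enabled then looks something like this (assuming the entry point is ljdump.py, as the repo names it):
python ljdump.py --cache_images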
Update, 2024-04-02:
The script is now ported to Python 3, and tested on both Windows and MacOS. I've added new setup instructions for both that are a little easier to follow.
Update, 2024-04-30:
Added an option to stop the script from trying to cache images that failed to cache once already.
2024-06-26: Version 1.7.6
Attempt to fix music field parsing for some entries.
Fix for crash on missing security properties for some entries.
Image fetch timeout reduced from 5 seconds to 4 seconds.
2024-08-14: Version 1.7.7
Slightly improves unicode handling in tags and the music field.
2024-09-07: Version 1.7.8
Changes "stop at fifty" command line flag to a "max n" argument, with a default of 400, and applies it to comments as well as entries. This may help people who have thousands of comments complete their initial download. I recommend using the default at least once, then using a value of 1500 afterward until you're caught up.
2024-09-18: Version 1.7.9
Table of contents for the table of contents!
First version of an "uncached images" report to help people find broken image links in their journal.
<3
Date: 2024-05-11 10:51 pm (UTC)
Got around it by adding
import time
with the other imports, and changing line 823 (now 824) to
date_or_none = time.mktime(date_first_seen.timetuple())
(Fix stolen from here, dunno if it's a good fix tho.)
EDIT: I also ended up making some more changes to download images hosted on Dreamwidth, also in their original resolution. Patch file below in case it's handy.
Edit again: fix running ljdumptohtml.py alone, and allow images to have attributes between <img and src="
Patch file
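(A side note on the time.mktime fix above, not from the patch: mktime interprets the tuple as local time, so if date_first_seen actually holds UTC timestamps, calendar.timegm would be the matching conversion. A minimal illustration:)
from datetime import datetime
import calendar
import time

# Example value; in the script this is a datetime parsed from the journal.
date_first_seen = datetime(2024, 5, 11, 22, 51)

# time.mktime() interprets the tuple as *local* time:
local_ts = time.mktime(date_first_seen.timetuple())

# calendar.timegm() interprets it as UTC instead:
utc_ts = calendar.timegm(date_first_seen.timetuple())

print(local_ts, utc_ts)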
Re: <3
Date: 2024-05-21 06:59 am (UTC)
https://github.com/GBirkel/ljdump/releases/tag/v1.7.5
Re: <3
Date: 2024-05-21 10:56 pm (UTC)
I get the ljuniq cookie by opening a new private browsing window, opening the network request section of the browser's developer tools, going to https://www.dreamwidth.org/ and then solving the CAPTCHA. It redirects back to the homepage, which returns a set-cookie HTTP header shown in the developer tools. The line looks something like:
set-cookie: ljuniq=ewp0jLxxxx97IQp%3A171xxxx687; domain=.dreamwidth.org; path=/; expires=...
(I replaced parts of it with "x" since it's just an example.)
The part that looks like
ewp0jLxxxx97IQp%3A171xxxx687
is the important part. I copy that and paste it as the <ljuniq> value in the configuration file. Note the %3A in the cookie needs, I think, to be changed to a : (since it's URL-encoded). So the end configuration option looks like:
<ljuniq>ewp0jLxxxx97IQp:171xxxx687</ljuniq>
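(That %3A-to-: step is just URL-decoding, which Python can do for you. A quick illustration, not part of the script:)
from urllib.parse import unquote

raw = "ewp0jLxxxx97IQp%3A171xxxx687"  # value as copied from the set-cookie header
print(unquote(raw))                   # prints: ewp0jLxxxx97IQp:171xxxx687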
That cookie tends to expire really easily in my experience (or maybe it's user error; I'm bad at doing things right). I'm not sure what the conditions are, but it sometimes takes me a few tries to get it to let me download the images. If it fails, it'll say "Content type text not expected, image skipped" for the image that didn't work. I've gotten around it by commenting out the logic at the end of the "Respect the global image cache setting" bit in ljdumptohtml.py and then running only ljdumptohtml.py, to retry the images that failed without waiting 24 hours or deleting the cache rows from the database manually.
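(For the manual route, the failed-fetch records live in the SQLite database, so something like the sketch below would clear them. The table and column names here are made up; check the real schema, e.g. with the table-listing snippet further up, before running anything.)
import sqlite3

conn = sqlite3.connect("ljdump.db")  # filename is an assumption
# 'cached_images' and 'fetched_ok' are hypothetical names; replace them
# with whatever the actual schema uses.
conn.execute("DELETE FROM cached_images WHERE fetched_ok = 0")
conn.commit()
conn.close()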
Unrelated, if it helps: I found another crash, but I'm not sure how to fix it.
The lines
print('Adding new event %s at %s: %s' % (data['itemid'], data['eventtime'], data['subject']))
and
print('Updating event %s at %s: %s' % (data['itemid'], data['eventtime'], data['subject']))
can crash with errors like
UnicodeEncodeError: 'cp932' codec can't encode character '\xe2' in position 54: illegal multibyte sequence
I don't know how to fix that, but removing the printing of data['subject'] works around it.
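(One general workaround, a sketch of my own rather than anything in the script: tell Python's stdout to replace unencodable characters instead of raising, which works on Python 3.7+ regardless of the Windows console code page.)
import sys

# Keep the console's encoding (e.g. cp932) but substitute '?' for any
# character it can't represent, instead of raising UnicodeEncodeError.
sys.stdout.reconfigure(errors="replace")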
One possible note: since it uses the original image rather than also downloading the thumbnail, the image shown in the rendered HTML page can be very large and run off the web page if the original picture is high resolution but had a small thumbnail in the journal entry. Since that's different from the patch, I'm guessing it may be a deliberate decision, though.
If I read it right, I think this line sends the cookie unconditionally, even for non-Dreamwidth images. Just to note: I don't think that follows the cookie security a browser would have, where the cookie is private information shared only with the original host. It may be a security consideration to leak it to other hosts, though I don't know how DW uses that cookie.
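(A browser-style restriction would be to attach the cookie only when the image is actually hosted on Dreamwidth. A hypothetical sketch; the names here aren't from the script:)
from urllib.parse import urlsplit

def cookie_headers_for(url, ljuniq_value):
    # Only attach the private cookie for Dreamwidth-hosted images,
    # mimicking how a browser scopes cookies to their original host.
    host = urlsplit(url).hostname or ""
    if host == "dreamwidth.org" or host.endswith(".dreamwidth.org"):
        return {"Cookie": "ljuniq=" + ljuniq_value}
    return {}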
Thank you again for the script, and sorry to mention a lot of things!
Re: <3
Date: 2024-05-22 12:47 am (UTC)
Yeah, I think you're right about the encoding error... Something in the subject line. I currently treat the entry content carefully with respect to encoding conversion, because it can be from all kinds of origins, but other short data fields like "music" and "subject" are handled a little more simply, which needs to change...
One thing that makes it especially difficult is that Dreamwidth does some of its own internal conversion when rendering a journal, so diagnosing the problem isn't as simple as going to the entry on the Dreamwidth site and having a look...
What's the subject line that causes the crash? I know pasting it in here will probably convert it into unicode, but maybe it will give me a clue...
Extracting that cookie information is a complicated process. :D Instead of re-authenticating with a Captcha, what happens when you use the ljuniq cookie that's stored in the browser when you're already authenticated? Like, right now I can go to Developer Window->Storage->Cookies in Safari and see an ljuniq cookie... Could I just paste that into the script?
Sorry I can't test this myself... I don't have any images hosted on DW...
Re: <3
Date: 2024-06-17 12:18 am (UTC)
The crashing entry title was "I haven’t been on reddit in ages, I got so much from it but honestlly was ready to move on ig". It didn't crash on Linux, only on Windows. I suspect it's the unicode quotation mark in "haven’t" breaking it, but I haven't confirmed that.
Yes, when I used the very latest ljuniq cookie value from the last request in the developer tools from my normal logged in window, it seemed to work, so I wonder if it's just that I was grabbing ones that were too old before.
Thanks again for the useful tool and your replies!
Re: <3
Date: 2024-07-08 08:33 pm (UTC)
I'm trying to fetch it from the livejournal.com API directly (I didn't try with DW yet because I want to try fetching from the source first).
And I get weird characters like this:
Adding new event 2 at 2004-02-24T00:44:00+00:00: ÑаÑÑÑки...
Which should be this: https://vicnaum.livejournal.com/623.html
I've read that you should go to settings/OldEncoding or something and change it to cp1251 for Cyrillic (Windows), but this page doesn't exist anymore...
Interesting: is the API returning it already broken, or can it still be fixed within Python?
Re: <3
Date: 2024-09-08 01:05 am (UTC)
It may depend on whether the raw XML response from the server declares its encoding, like
<?xml version="1.0" encoding="WINDOWS-1251"?> ..... </xml>
and if so, that can be used to decide what encoding to use when converting it to Unicode. But right now, unless there's some magic happening in the Python XML parser I don't know about, it always assumes UTF-8, so stuff in e.g. WINDOWS-1251 will get mangled.
LJ renders it just fine when presenting its own web interface, so either LJ preserves the encoding information internally, or it follows some kind of guessing procedure to convert it to UTF-8. One could theoretically answer that question by crawling through the LJ source code.
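(If the response does carry a declaration like that, it can be sniffed before decoding. A minimal sketch, assuming the response body is available as raw bytes:)
import re

def decode_response(raw: bytes) -> str:
    # Read the encoding out of the XML declaration, falling back to UTF-8.
    m = re.match(rb'<\?xml[^>]*encoding="([^"]+)"', raw)
    encoding = m.group(1).decode("ascii") if m else "utf-8"
    return raw.decode(encoding, errors="replace")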