I'm getting truly fed up with spam in my wiki. At this point, all comments are manually approved and I still get trouble: now it's scammers spamming the registration form with dummy accounts, which bounce back to me when I make new posts, or just generate backscatter for the confirmation email. It's really bad. I have hundreds of users registered on my blog, and I don't know which are spammy, which aren't.
So. I'm considering ditching ikiwiki comments altogether and turning towards Mastodon as a commenting platform. Others (JAK) have implemented this as a server, but a more interesting approach for me is to simply load the comments dynamically from the server, which is what this person has done. They are using Hugo, however, so they can easily embed page metadata in the template to load the right server with the right comment ID.
I'm not sure how to do this in ikiwiki: how can we access page-specific data in templates?
Or maybe i just need to make a new template and insert it in my blog posts... pondering. --anarcat
I have tried to make a template, and that (obviously) fails because the `<script>` stuff gets sanitized. It seems I would need to split the javascript out of the template into a base template and then make the page template refer to a function in there. It's kind of horrible and messy. I wish there were a way to just access page metadata from the page template itself... I see that the meta plugin passes along its metadata, but that's not extensible, so i'd need to either patch that or make yet another plugin. Ugh.

Update: I did it. I have something that kind of works: a combination of a `page.tmpl` patch and a plugin. The plugin adds a `[[!mastodon ]]` directive that feeds the `page.tmpl` with the right stuff, and adds comments through Javascript and the API. It's not pretty, but it works. You need this page.tmpl (or at least this patch and that one) and the mastodon.pm plugin from my mastodon-plugin branch.

I'm not even sure this is a good idea. The first test I did was a "test comment" which led to half a dozen "test reply" and then I realized i couldn't redact individual posts from there. Ugh. I don't even know if, when I mute a user, it actually gets hidden from everyone else too...
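To illustrate the mechanism (this is a sketch, not the actual code from the branch): the plugin sets a template variable from the directive, and `page.tmpl` consumes it through HTML::Template conditionals. The variable and function names below are hypothetical:

```
<TMPL_IF MASTODON_URL>
<!-- MASTODON_URL would be filled in by the plugin from the [[!mastodon ]] directive -->
<button onclick="load_mastodon_comments('<TMPL_VAR MASTODON_URL>')">
  load comments
</button>
</TMPL_IF>
```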
So I'll test this for a while, I guess.
# Update: the feed2exec diversion
There's another thing needed to make this work, which I didn't mention above: automatically posting new entries to Mastodon when they show up, because otherwise you need to manually post each entry to Mastodon yourself. It's kind of a mess. At first, I implemented this as a feed2exec plugin called ikiwikitoot.py, but that had a horrible bootstrapping issue: you had to first publish the article, then run the RSS feed, which modifies the article again.

This didn't work reliably for me, so I added more duct tape: you can now run the command directly without going through the RSS feed. I ended up with the following post-receive hook in my source.git:
```shell
w-anarcat@marcos:~/source.git$ cat hooks/post-receive
#!/bin/sh
/home/anarcat/src/feed2exec/feed2exec/plugins/ikiwikitoot.py --post-receive --base-url https://anarc.at/
```
And the code still lives in feed2exec... It makes use of my second Python parser for ikiwiki directives (ugh) which really makes me feel like this should be implemented natively as a real ikiwiki plugin instead.
I'm not sure how this would work: we'd need to somehow keep state of which page is associated with which mastodon post. In theory, this could be kept in .ikiwiki, but i find that iffy: i really like the idea of having the mastodon post directly in the git source, but i'm not aware of any ikiwiki plugin actually modifying the source, that seems to be traditionally a "no no".
I would also really like ikiwiki to render comments as static HTML. This would have a couple of benefits over the current approach:
- no javascript necessary
- reduce load on the mastodon server (the current approach hits the server for every visitor that hits the "load comments" button, which is not kind to the mastodon server)
- reduce reliance on the upstream mastodon server to be up (right now, if mastodon goes away, comments disappear)
- allow for moderation on the ikiwiki side (right now, we're completely subject to the mastodon server moderation, if we don't like a comment, we can't remove it without removing it from the mastodon server)
But that's even more work. I definitely like the idea of adding !comment directives to existing posts though, this feels nice, and it's easy to remove them to do moderation.
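For reference, ikiwiki's comments plugin already stores each comment as its own file beside the page, roughly like this (the values are invented, and the exact field set should be checked against the plugin before relying on it):

```
[[!comment format=mdwn
 username="someone@example.social"
 date="2025-03-03T12:00:00Z"
 subject="comment 1"
 content="""
Nice post!
"""]]
```

With comments as files in git, moderation really is just deleting (or never committing) the file.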
The trick is "when do you post updates": short of having a daemon (or CGI?) that receives activitypub pushes, we'd need something that pulls new updates on a regular basis anyways, so it's not clear how easy it would be to actually implement.
Still, moving that code from feed2exec/python to ikiwiki/perl (or ikiwiki/python, there's precedents here), would be an important step, IMHO.
Thoughts? -- anarcat 2025-03-03
I agree that getting the supplementary code into IkiWiki would be a great improvement for this workflow.
I've been wondering about a different approach to Fediverse-enabling an IkiWiki site. Instead of the chicken-and-egg problem of toot-about-post, then update post with toot-id; what if the post itself was a first-class Fediverse object from the start? You could "follow" your blog directly, and re-toot (or toot about) a blog post once that post existed in the Fediverse, which would ideally be as soon as it existed anywhere else. I'm exploring what might be required for this approach, starting with stuff like Mastodon instance with 6 files. — Jon, 2025-05-20
Yeah, that's what I initially thought of doing, but that seemed even harder to implement. That post, for example, mentions "posts don't work", which is the critical part that we actually need here. We'd also need to handle `POST` requests for the /inbox endpoint, which would require CGI (which I'm trying to get rid of in my deployments...). But yeah, this would be pretty awesome! -- anarcat
# Native ActivityPub implementation notes
So I think it's worth starting a new thread entirely here.
## Current implementation
Above, I copied or wrote a bunch of Python code that does this:
- reimplements parts of ikiwiki's parser to figure out stuff like "is this a blog" or "is this a draft"
- once it decides it's good, posts a link to the page to a Mastodon instance through the `toot` binary
- injects a `[[!mastodon ]]` directive in the markdown pointing to that post, which does some javascript magic to fetch comments
I think that kind of sucks, for many reasons:
- it reimplements ikiwiki's logic, poorly and buggily (although most bugs should be gone now?)
- legacy, untested code
- it doesn't actually implement the mastodon posting mechanism; it requires third-party `toot` code and an external mastodon server
- doesn't actually have a local copy of comments, no control over moderation
So we should look at other options.
## What would an implementation mean?
Implementing full ActivityPub (AP) for Ikiwiki is a huge challenge, because it's made up of many, many parts, but perhaps we can start small and scope this.
My minimum viable product (MVP) is this:
- on a certain pagespec, post an article to the fediverse, idempotently. this can be done through an external AP server or by acting as a server ourselves, i don't care, whichever is easier
- on those pagespecs, show replies to the article, ideally as static comments
This looks simple, but the devil is in the details:
- for posting, if we want to do it natively, we need to implement things like "subscriptions" or an "outbox"
- for local comments, we essentially need to be able to receive notices that there are new comments to posts
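For scale, the "outbox" part of the posting side could, at minimum, be a single static JSON-LD document. This is a sketch only: all URLs are placeholders, and a real server would paginate with `OrderedCollectionPage` and sign its deliveries:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://anarc.at/outbox",
  "type": "OrderedCollection",
  "totalItems": 1,
  "orderedItems": [
    {
      "type": "Create",
      "actor": "https://anarc.at/actor",
      "object": {
        "type": "Note",
        "id": "https://anarc.at/blog/example/",
        "content": "New post: an example article",
        "to": ["https://www.w3.org/ns/activitystreams#Public"]
      }
    }
  ]
}
```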
I have only a rough idea of how this works, but I suspect it can't be done without POST handling, which kicks the CGI in. In my case, this is a problem: I turned this entire thing off because of the abuse and load it was generating, and I am really happy about that.
So I think we're sitting at a crossroads here: we either implement full support, in CGI, or delegate it to a separate server. Right now I'm leaning towards the latter approach, but maybe there could be a modular way of making this work?
## Other documentation
Here are some interesting links I found while researching this for another project:
- Mastodon instance with 6 files: mentioned above, says "posts don't work", so i'm not sure it's sufficient; the files are `.well-known/webfinger`, `followers`, `following`, `user` information, a `banner.jpg` and an `image.jpg`
- A guide to implement ActivityPub in a static site (or any website): 9 steps, good guide that covers "notes" and "outbox" (posting), "subscribing" (allows for followers), "inbox" (replies), and "quotes", pretty complete; uses an rss2outbox tool to convert RSS into AP posts, uses Javascript for the "subscribe" form, and a DotNet implementation for follow/unfollow/reply (this is the bit that requires handling POST)
- ActivityPub on a (mostly) static website: good introduction on the basics of the protocol, with direct links to the spec
- the W3C spec and official website, https://activitypub.rocks/
- staticpub, a script that bolts AP support onto an existing site, faces the same problem as we do: it can't handle POST
- https://activitypub.academy/, provides test endpoints for AP
## A Mastodon/ActivityPub MVP server implementation
I think the TL;DR of those endpoints is this:

- most endpoints are JSON-LD
- `.well-known/webfinger`: user information, can be static, therefore easy to implement without any custom code; technically a pointer to an Actor in the spec, another JSON object
- `outbox`: where our posts are published
- `inbox`: where we receive replies and follower requests through POST requests; we check cryptographic signatures here, here be dragons
- `followers`: a collection of actors that follow us; technically not necessary, but we need to keep track of those anyways to keep them updated with our posts, might as well publish (or keep private!)
- `delivery`: technically not an endpoint, this is the part where we push new posts from our `outbox` to the followers
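As a sketch of the static-file case: for a single-user site, `.well-known/webfinger` can be one pre-rendered JSON document served regardless of the `?resource=` query string (account and actor URLs below are placeholders):

```json
{
  "subject": "acct:anarcat@anarc.at",
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://anarc.at/actor"
    }
  ]
}
```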
We need to keep track of this state:
- our identity: this already exists as well
- our published posts: technically, this already exists in the form of whatever pagespec we have, a bit like how RSS feeds are built from a pagespec. perhaps this would be an addition to the already gargantuan `inline` directive? or another directive? but we need to keep track of which posts we've already posted to the fediverse to avoid duplicating outgoing traffic
- our followers: a set of URLs (and possibly cached/TOFU public keys?) of where we need to deliver news
- maybe something about reply users signatures, again TOFU
- replies: when someone replies, we could create a `comment` object/directive/file to store it, which would give us a crude mechanism for moderation (just delete the file)
- blocked users / servers: a set of URLs or servers that we do not want to receive comments from (that is huge: moderation is a huge challenge in any federation, and the second we start implementing this correctly, we suddenly need to block gab and truth social and so on)
That's the "heavy client" design, where we essentially implement an AP service ourselves, as Mastodon does.
If we figure that out though, I'd certainly kick my CGI back into gear, that sounds pretty awesome.
for an MVP, if we statically generate `.well-known/...` then I think we restrict this to a single user, and to deployments at the root of a domain. Which I think is fine.

For POST handling via CGI (and caveat I haven't looked at this much yet at all!) I think something we would have to do is release all the normal ikiwiki locks as early as possible, assuming we can avoid needing to write (or possibly read) any existing ikiwiki data to respond to the POST request. If we need to maintain state for Mastodon, I would probably look to store it in a completely independent place, perhaps an sqlite db under `.ikiwiki`, and not use the existing global ikiwiki locks to manage it. (Sadly choosing sqlite still makes it our responsibility to manage spin locking for writes, I think.) This way all the Fediverse server communication handling will not block on other ikiwiki activity and vice versa. I don't think Ikiwiki's global lock approach would scale to Fediverse levels of parallelism.

RE Perl, I'm not as burnt out on it as you, but I'd happily consider using something external, especially if there is a lack of JSON-LD libraries in Perl already (haven't looked).
—Jon, 2026-03-23
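The independent sqlite state store Jon describes could be sketched like this (table layout and path are my invention, not anything implemented): a small database under `.ikiwiki` that never touches ikiwiki's own index or global lock, with sqlite's own busy timeout handling write contention.

```python
import sqlite3

def open_state(path=".ikiwiki/mastodon.db"):
    # A small, independent database: no interaction with ikiwiki's
    # index file or its global lock.
    db = sqlite3.connect(path, timeout=30)  # timeout covers write contention
    db.execute("""CREATE TABLE IF NOT EXISTS posts (
        page TEXT PRIMARY KEY,   -- ikiwiki page name
        status_url TEXT          -- fediverse status posted for that page
    )""")
    db.execute("""CREATE TABLE IF NOT EXISTS followers (
        actor TEXT PRIMARY KEY,  -- actor URL
        inbox TEXT               -- where to deliver new posts
    )""")
    return db

def record_post(db, page, status_url):
    # idempotent: re-running for the same page just updates the URL,
    # which is what the "avoid duplicating outgoing traffic" goal needs
    db.execute(
        "INSERT INTO posts(page, status_url) VALUES(?, ?) "
        "ON CONFLICT(page) DO UPDATE SET status_url = excluded.status_url",
        (page, status_url))
    db.commit()
```

This keeps the Fediverse bookkeeping out of git entirely, which conflicts with the "state in the source tree" preference above, but sidesteps the "plugins modifying the source" taboo.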
Regarding state, I am not sure about introducing a new database for this, but perhaps you're right that using git or the existing DB could fail to scale correctly... I still really like the idea of storing the comments in a `.comments` file though...
Re JSON-LD + Perl, I wouldn't worry about it too much. The data is technically JSON-LD, but really, it's just JSON and the dereferencing can be done outside of a JSON-LD implementation, from what I understand. That said, I did look quickly and there's a `libjsonld-perl` package in Debian which is this CPAN package, so that should be fine. -- anarcat
## A "lighter", "external server" implementation
A "light" design could be essentially reimplementing my hack, but natively, with some improvements:
- when a new post is posted (how do we tell?!), relay the post to an external AP server, essentially implementing the Mastodon POST API, which it seems we can get away with relying on as a standard, e.g. Pleroma implements it with some differences, but it is implementation specific... e.g. Misskey has a different API
- regularly (cron job? javascript sync?) pull comments from posts and add them as `[[!comment ]]` files
- progressive enhancement for javascript-enabled users that would pull the latest comments from the server, possibly triggering the above sync
We need to keep state:
- URL of a status related to the post (currently the `[[!mastodon ]]` directive, not sure if that's the best)
- replies as comments, including deleted comments (not sure how to track that)
- API token
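The pull-comments step above could be sketched like this. It relies on Mastodon's public context API (`GET /api/v1/statuses/:id/context`, which returns the `descendants` of a status without authentication for public posts); the comment field names are from memory and should be checked against ikiwiki's comments plugin:

```python
import json
import urllib.request

def fetch_replies(instance, status_id):
    # Pull all replies (descendants) of the status we posted for the page.
    url = f"https://{instance}/api/v1/statuses/{status_id}/context"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["descendants"]

def to_comment(status):
    # Render one Mastodon reply as an ikiwiki comment directive.
    # Mastodon serves status content as already-sanitized HTML,
    # hence format=html here.
    return ('[[!comment format=html\n'
            ' username="%s"\n'
            ' date="%s"\n'
            ' content="""\n%s\n"""]]\n') % (
        status["account"]["acct"],
        status["created_at"],
        status["content"])
```

A cron job would then write each rendered comment to a file beside the page, and moderation (or handling deletions) becomes a matter of removing files before the next refresh.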
This seems much simpler, but the devil is in the details.
Either way, I'm not super excited about implementing this, to be quite honest. My days of wanting to read (let alone write) Perl code are far behind me, so I'm not sure I'll have much energy to do this.
I did want to look into this problem for other reasons and figured this was as good a place as any to do it! I hope that helps... -- anarcat
I really appreciate you sharing your notes on this, thanks! — Jon, 2026-03-21
You're welcome! Happy it helps!