If you've found a bug in ikiwiki, post about it here. TODO items go elsewhere. Link items to done when done.
Also see the Debian bugs.
If you are reporting a security vulnerability, please email the maintainers privately, instead of making it public by listing it here. See security for contact details.
There are 139 "open" bugs:
Please change the admin user password for ikiwiki.info as this is too weak
Originally reported to Debian at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=598415:
If both relativedate and toggle plugins are enabled and features of both plugins are used on a wiki page, a `<script>` tag for ikiwiki.js is rendered twice in the generated HTML for the page.
At the time I'm reporting this here, the problem is evident at https://ikiwiki.info/sandbox/.
Related: javascript resources placed after html tag
— Jon, 2024-03-25
If you enable filecheck and attachment, then IkiWiki (e.g. via wrapper) will throw a lot of the following errors on stderr
Use of uninitialized value $size in division (/) at /.../filecheck.pm line 73.
Line 73 is:
sub humansize ($) {
	my $size=shift;
	foreach my $unit (reverse sort { $units{$a} <=> $units{$b} || $b cmp $a } keys %units) {
		if ($size / $units{$unit} > 0.25) {
			return (int($size / $units{$unit} * 10)/10).$unit;
		}
	}
	return $size; # near zero, or negative
}
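A guard along these lines silences the warning, although it only papers over whatever is passing an undefined size in the first place (untested sketch):

	sub humansize ($) {
		my $size=shift;
		# sketch: bail out early instead of dividing an undefined value
		return "" unless defined $size;
		# ... rest of the function unchanged ...
	}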
Disabling attachment is sufficient to stop this. — Jon 2023-09-06
In a few places, it is recommended to try the ispage()
pagespec, described at attachment but provided by the (supposedly independent) plugin filecheck.
I've had trouble getting it to work so put together a minimal test-case. Very basic wiki with filecheck enabled (but attachment not).
File structure as follows
index.mdwn
a/foo.mdwn
a/bar.txt
a/pic.png
index containing
[[!map pages="a/*"]]
[[!map pages="a/* and ispage()"]]
The first map expands, as expected, to
- bar.txt
- foo
- pic.png
The second is empty. Expected behaviour:
- foo
(with txt not enabled)
— Jon, 2023-09-06
Here's a trace of what happens to `ispage()` when applied to the above example:

- `match_ispage` gets called with argument "a/foo"
- it calls `IkiWiki::pagetype` with that argument unmodified
- the first branch checks for a period, so fails
- base is calculated to be "foo"
- the next branch fails as `$hooks{htmlize}{foo}` is false
- the function returns undefined.
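For reference, the pagetype logic being traced looks roughly like this (paraphrased from memory, not a verbatim copy of IkiWiki.pm):

	sub pagetype ($) {
		my $file=shift;

		# first branch: look for an extension after a period
		if ($file =~ /\.([^.]+)$/) {
			return $1 if exists $hooks{htmlize}{$1};
		}
		# second branch: extensionless pages registered with "noextension"
		my $base=basename($file);
		if (exists $hooks{htmlize}{$base} && $hooks{htmlize}{$base}{noextension}) {
			return $base;
		}
		return;
	}

So when `match_ispage` is handed a page name like "a/foo", with the extension already stripped, neither branch can match.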
— Jon, 2023-09-06
I have tried to use the sparkline plugin today and it failed with:
remote: PHP Fatal error: Cannot use 'Object' as class name as it is reserved in /usr/share/php/sparkline/Object.php on line 71
... at build time. I have tried to follow the instructions in sparkline but those also failed because php5 is long gone from Debian, of course. The libdigest-sha1-perl package also seems gone, so I have tried this:
apt install libsparkline-php php-gd php-cli
... but that is how I ended up with the above failures. I suspect the embedded PHP code in ikiwiki needs to be ported to PHP 7 (or 8 now?)...
But really, maybe, the sparkline Perl library should be examined again. Surely it's not that bad that we need PHP around here, do we? It looks like SVG::Sparkline could be a good candidate although there's also Text::Sparkline.
Or maybe sparklines are dead... http://sparkline.org doesn't even resolve... Time flies, doesn't it? -- anarcat
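For reference, SVG::Sparkline's interface looks roughly like this, going from its synopsis (untested sketch; the data values are made up):

	use SVG::Sparkline;

	# sketch: render a simple line sparkline as inline SVG
	my $sparkline = SVG::Sparkline->new(Line => { values => [ 3, 5, 8, 4, 7 ] });
	print $sparkline->to_string();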
I hit this a little while ago and ended up ditching the sparkline plugin. But, if it is to be resurrected, I would agree with ditching PHP here, too. For my use-case the data changes so infrequently (this graph of blog posts by year, not including the current year) that I manually generate something in LibreCalc annually, and copy the resulting picture in. — Jon, 2023-01-20
the w3 validator fails on the main ikiwiki website because of the way inline scripts are handled. For example, validating the post convention for signing posts to ikiwiki.info leads to this warning:
Warning: The charset attribute on the script element is obsolete.
From line 271, column 1; to line 271, column 78
>↩↩</div>↩<script src="../../ikiwiki/ikiwiki.js" type="text/javascript" charset="utf-8"></scri
Seems like a low-hanging fruit...
There are other errors on my blog, namely the `pubdate=pubdate` blob added by `IkiWiki::displaytime`; no idea where that's coming from, but it's not standard anymore. See for example this validation and also the time element specification. It looks like it was part of HTML5 but was removed at some later point. According to this GitHub comment on the react project, it was replaced by the itemprop attribute, as in `itemprop="datePublished"`. See also this w3 example. Phew.
-- anarcat 2022-09-01
For pubdate, I created pubdate not valid for html5 in 2020, with a patch. I've applied that in my opinionated ikiwiki container. -- Jon, 2022-09-06
I recently migrated from the (no longer supported) mod_auth_kerb to its designated replacement, mod_auth_gssapi for HTTP authentication.
mod_auth_kerb sets REMOTE_USER
to the Krb5 name that authenticated (e.g., wouter@EXAMPLE.COM). mod_auth_gssapi does not do so; it sets it in the GSS_NAME
variable, instead.
It would be awesome if the httpauth plugin would accept a configuration value to set the variable in which to look for the username to account for cases like these.
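A minimal sketch of what that could look like, assuming the plugin keeps its current `$cgi->remote_user()` behaviour as the default; the `httpauth_user_envvar` option name is invented for illustration and this is untested:

	# Sketch: let the admin name the environment variable that carries the
	# authenticated user (e.g. GSS_NAME), falling back to REMOTE_USER.
	# "httpauth_user_envvar" is a made-up option name.
	my $envvar = $config{httpauth_user_envvar};
	my $user;
	if (defined $envvar && defined $ENV{$envvar} && length $ENV{$envvar}) {
		$user = $ENV{$envvar};
	}
	else {
		$user = $cgi->remote_user();
	}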
Hello.
I've just installed ikiwiki on Debian WSL and what I tried so far is working, except for one thing: the websetup page keeps loading until it reaches a timeout, and I can't see any button to save preferences at the bottom of the page.
This is the log:
2022/08/01 20:10:13 [error] 3881#3881: *2 upstream timed out (110: Connection timed out) while reading upstream, client: 172.26.16.1, server: cuspide, request: "POST /ikiwiki.cgi HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "cuspide:8080", referrer: "http://cuspide:8080/ikiwiki.cgi?do=prefs"
With Ikiwiki, mdwn, and discount configured, the following syntax
[some link with parens in it](http://foo.com/parens(yeah))
has a broken URI (missing the closing parenthesis) and renders the second parenthesis outside of the link.
Here's it live on ikiwiki.info: some link with parens in it)
Bug present in at least discount 2.2.6 and 2.2.7 and merely exposed by IkiWiki. Note that upstream consider this to be not-a-bug/"behavior follows specs": https://github.com/Orc/discount/issues/241
The following alternative markdown implementations get this right:
- libtext-markdown-perl 1.000031 (debian -3)
- multimarkdown 1.000035 (debian -2)
- commonmark))
I note the irony of the commonmark URI being an apt demo of the problem.
— Jon, 2021-11-05
if I request more than one email login link, only the first token received works; using a later one triggers a "bad token" error.
ikiwiki version 3.20200202.3
Some wiki pages have multiple authors. It seems logical that you would use multiple meta directives to mark this,
[[!meta author="author one"]]
[[!meta author="other author"]]
however, when you do this, it appears that things that key off of the author tag only pick up the last instance of the directive. For example, a pagespec looking for `author(author one)` will not match, while one looking for `author(other author)` will match. I would expect that both would match.
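For reference, the meta plugin appears to store a single author value per page, so each directive overwrites the previous one; a fix would need to accumulate the values and teach `author()` matching to check each entry. A sketch only (the `authors` field name is invented):

	# Sketch: accumulate every author instead of overwriting a single value.
	# "authors" is an invented pagestate field; match_author() would also
	# need updating to test each stored name in turn.
	push @{$pagestate{$page}{meta}{authors}}, $value;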
If you include a wide character, such as a fancy quote, in a graphviz source file for use with the file parameter to the graph directive, you get the following error:
[[!graph Error: Wide character in subroutine entry at /usr/share/perl5/IkiWiki/Plugin/graphviz.pm line 57.]]
Note, this is what renders on the resulting wiki page, not something that you see on the command line. Since dot supports UTF-8, I would expect this to work.
I'm unsure if this is a bug in the graphviz plugin, or in the perl module it depends upon, but figured I would start here.
This reminds me of table can not deal with Chinese —Jon, 2020-11-02
Is it perhaps time to flip the default for the `html5` option to `1`? What criteria should be used to answer that question?
With smcv's last major change in this area, back in 2014, the market share of IE8 and earlier was a concern at
5%. What is the equivalent market share today?
The templates are still a real complex mess of branches around this configuration option. I'd love to see all that branching removed. Would anyone else? A hypothetical future html4 option/plugin could work by post-processing the generated output and replacing the troublesome tag names (via XSLT perhaps, depending on tooling). Is there any appetite for this?
—Jon, 2020-10-05
when using the table directive with the table plugin, if I use the data argument to specify a file as a source of the data, I currently have to give a full path from the root of the wiki to the file. I should be able to give just the file name and have the file located using the same rules that are followed when creating wiki links.
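Presumably the plugin could resolve the name with the usual wikilink rules before falling back to the literal value, along these lines (untested sketch, using the parameter name from this report):

	# Sketch: look the referenced file up relative to the page containing the
	# directive, the way wikilinks are resolved.
	my $file = IkiWiki::bestlink($params{page}, $params{data}) || $params{data};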
The responsive_layout option (which flags ikiwiki pages as not broken on small displays) is not applied in edit mode (or other CGI generated pages).
This can be verified easily by checking the meta tags in the page's source or using Firefox's page info (Ctrl-i): rendered pages contain a `width=device-width, initial-scale=1` viewport option, but the editing pages do not.
As a result, page editing from mobile is tedious, as it involves scrolling around not only in the edit window but also the scrolled page as a whole.
I've given it a try with a local copy of a saved page into which the meta was edited, and the page could be edited much more smoothly on mobile when the meta was set on the edit page (as was expected -- it works well on the regular pages, after all). There was still some scrolliness due to the large width of the commit message field, but that could be a bug in the used theme and even so does not limit the usefulness of setting this on all generated pages as long as the theme is basically responsive.
I have had the following in my ikiwiki.setup
since 2016:
account_creation_password: XXXXXXXXXXXX
The XXX is made of lowercase, uppercase and digits, randomly generated. I would expect this to stop any account creation. Yet new accounts get created all the time:
w-anarcat@marcos:~/source/.ikiwiki$ perl -le 'use Storable; my $userinfo=Storable::retrieve("userdb"); print $userinfo->{$_}->{regdate} foreach keys %$userinfo' | sort -n | tail -10
1587487021
1587574304
1587695540
1587770285
1588354442
1588409505
1589257010
1589834234
1590175162
1590176201
The last two timestamps, for example, are today. I'm not absolutely certain, but I believe that account is an emailauth account:
'zemihaso_hfdsf.sadsdskfm.com' => {
'regdate' => 1590175162,
'passwordless' => 'd8de5ec25cfd68e64318fe6353c6428a',
'subscriptions' => 'comment(blog/2020-04-27-drowning-camera)',
'email' => 'zemihaso@hfdsf.sadsdskfm.com'
},
It's obviously a spammer. It seems to be attacking my wiki by doing the following:
- register an account with emailauth
- subscribe to the page
- spam the page with a comment
- which then sends email to the victim(s)
It's all kind of a mess. I'm at the point in my anti-spam protection where I am seriously considering disabling all user registration and all comments on all pages. Maybe delegate this to Mastodon or some other third-party commenting system, because I'm just tired of dealing with spam and bounces...
Anyone else seeing this? Shouldn't the account_creation_password
setting apply to emailauth? What else am I missing?
Thanks! -- anarcat
With IkiWiki 3.20200202.3, aggregate's web trigger sends its response as HTTP 200, Content-Type: text/plain, followed by a payload that seems to accidentally include a Content-type in the body, and a bunch of HTML:
qusp▶ GET -e "https://$redacted/ikiwiki?do=aggregate_webtrigger"
Enter username for Git Access at REDACTED:443: admin
Password:
200 OK
Connection: close
Date: Mon, 20 Apr 2020 08:27:39 GMT
Server: nginx/1.14.2
Content-Length: 2467
Content-Type: text/plain
Client-Date: Mon, 20 Apr 2020 08:27:39 GMT
Client-Peer: 31.51.75.214:443
Client-Response-Num: 1
Client-SSL-Cert-Issuer: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Client-SSL-Cert-Subject: /CN=REDACTED
Client-SSL-Cipher: ECDHE-RSA-CHACHA20-POLY1305
Client-SSL-Socket-Class: IO::Socket::SSL
Aggregation triggered via web.
Content-type: text/html
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
…
Looking at the source it's fairly clear why: http://source.ikiwiki.branchable.com/?p=source.git;a=blob;f=IkiWiki/Plugin/aggregate.pm;hb=HEAD#l76
I guess it was never intended for a human to see this output. I've found it useful to add a link button to some private pages to manually trigger the web hook, something like
<a class=feedbutton href=https://REDACTED/ikiwiki?do=aggregate_webtrigger>webtrigger</a>
So it would be nice if the payload was sent with an HTML content-type (which wouldn't hurt cron jobs or clients that ignore the body anyway).
— Jon, 2020-04-20
Oh, haha, the issue here is that the aggregation is failing. In the failure case, the above happens (there's an error wrapped up in HTML and delivered as text/plain). In the success case, the output to browser is just plain text. — Jon, 2020-04-20
If you force-push to a source repository that is the configured git repository for the ikiwiki srcdir, rebuilding the wiki (after the hook) fails with
fatal: refusing to merge unrelated histories
'git pull --prune origin' failed: at /usr/local/share/perl/5.28.1/IkiWiki/Plugin/git.pm line 251.
I think that under some circumstances ikiwiki should support this. Perhaps via configuration options for the git plugin. — Jon
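A rough sketch of the kind of opt-in behaviour that could mean for the git plugin; the `git_accept_force_push` option name is invented, error handling is omitted, and this is untested:

	# Sketch: when the admin opts in, discard local history so that a
	# force-pushed origin wins, instead of attempting a merge.
	if ($config{git_accept_force_push}) {
		system('git', 'fetch', '--prune', $config{gitorigin_branch});
		system('git', 'reset', '--hard',
			$config{gitorigin_branch}.'/'.$config{gitmaster_branch});
	}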
/usr/local/lib/ikiwiki/plugins/rst
and /usr/local/lib/ikiwiki/plugins/proxy.py
provoke the following error output during some CGI operations if there is no python3 interpreter available
/usr/bin/env: ‘python3’: No such file or directory
The CGI also sporadically dies (with signal 13, resulting in HTTP error 500). I don't understand why this is sporadic though.
This relates to autosetup python warnings and some other bugs.
This happens even if rst
and proxy.py
are not active on the wiki (as they aren't, by default).
A hacky quick solution might be to rename rst to rst.py, and for IkiWiki to avoid scanning plugins that end in .py if they are not enabled.
— Jon
Some people in our office edit ikiwiki via command line, and others via the Web interface. At a certain point, the following error popped up in Apache error logs:
To ssh://git@localhost/brains.git
d7e8496..8a75b3b master -> master
Switched to branch 'throw_away_8a75b3b1da256452ae87b8543b5bec3d2f586ac5'
Switched to branch 'master'
From .
* branch throw_away_8a75b3b1da256452ae87b8543b5bec3d2f586ac5 -> FETCH_HEAD
Automatic merge went well; stopped before committing as requested
Already on 'master'
To ssh://git@localhost/brains.git
! [rejected] master -> master (fetch first)
error: failed to push some refs to 'ssh://git@localhost/brains.git'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
'git push origin master' failed: at /usr/share/perl5/IkiWiki/Plugin/git.pm line 220.
From ssh://localhost/brains
927cc73..29c557b master -> origin/master
'git pull --prune origin' failed: at /usr/share/perl5/IkiWiki/Plugin/git.pm line 220.
fatal: cannot do a partial commit during a merge.
error: Pull is not possible because you have unmerged files.
hint: Fix them up in the work tree, and then use 'git add/rm <file>'
hint: as appropriate to mark resolution and make a commit.
fatal: Exiting because of an unresolved conflict.
'git pull --prune origin' failed: at /usr/share/perl5/IkiWiki/Plugin/git.pm line 220.
fatal: cannot do a partial commit during a merge.
error: Pull is not possible because you have unmerged files.
hint: Fix them up in the work tree, and then use 'git add/rm <file>'
hint: as appropriate to mark resolution and make a commit.
fatal: Exiting because of an unresolved conflict.
...
It looks like commits via the Web interface continued to appear to work mostly as normal, but weren't being committed in the public-facing git repo that contains the .ikiwiki/ directory. I did notice a page that I edited contained ">>>>>" text (but not "====.." or "<<<<.." text) in the Web version, but not in my git checkout (from the master git repo). Also some people reported problems with the Web interface after the fact.
When I went in to fix the repo, I quickly removed those uncommitted changes, but that was clearly a mistake, because I lost the edits that had been created via the Web after a certain point in time.
We're using ikiwiki version 3.20160121 on Trisquel 8, which is based on Ubuntu 16.04.
If you have any thoughts about what caused this issue, I'd be happy to hear it. At this point, we've mostly moved on from the data loss.
Thanks! : )
As reported by many confused users at http://git-annex.branchable.com/bugs/impossible_to_login_to_the_website_at_times/ a successful login can end up at the error message "Error: Login succeeded, but I don't remember why you were logging in [...]"
This happens when the user has earlier logged into a site on its https url, but then later navigates to the http url, and is prompted to log in there.
Workaround: set the `redirect_to_https` option.
Changes to comments result in notifyemail sending emails with broken urls like "http://whatever/foo/comment_1_10a49d69282155c5c3e66dc58f64f956/"
notifyemail uses meta permalink if set, so it must not be set for comment pages.
In the comments plugin, there's this code, which is supposed to set permalink:
if ($params{page} =~ m/\/\Q$config{comments_pagename}\E\d+_/) {
	$pagestate{$page}{meta}{permalink} = urlto(IkiWiki::dirname($params{page})).
		"#".page_to_id($params{page});
}
`comments_pagename` is `comment_`, so the above code needs a comment page to contain two underscores. I think that is the root of the bug. --Joey
Removed the trailing underscore in the regexp, so it should be fixed, though I have not tested the fix. Leaving this bug open until it's confirmed fixed. (I deployed it to branchable.)
This will only fix the stored permalink metadata for a comment when it gets preprocessed again, not immediately. That's ok for notifyemail, but other uses of permalink might need a wiki rebuild to get the bug fix. --Joey
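Presumably the deployed change amounts to dropping that trailing underscore, i.e. roughly:

	# before: required an underscore after the digits
	$params{page} =~ m/\/\Q$config{comments_pagename}\E\d+_/
	# after: no trailing underscore required
	$params{page} =~ m/\/\Q$config{comments_pagename}\E\d+/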
I'm not sure that I see how the regexp was wrong? It's looking for, for example,

	foo/comment_1_eaab671848ee6129f6fe9399474eeac0._comment
	   sccccccccdu

where

- `s` marks the literal `/`
- `cccc` marks `comments_pagename`
- `d` marks `\d+` (one or more digits)
- `u` marks the literal `_`

The old regexp would have failed for the older format `foo/comment_1._comment`, though. --smcv
I want to inject IkiWiki::showform and replace the submit buttons with translated buttons, but it seems to have no effect in the CGI. Can IkiWiki::showform not be injected when run as CGI?
By the way, what about adding a post-showform hook to ikiwiki?
Below is my code:
#!/usr/bin/perl
package IkiWiki::Plugin::chinesize;
use warnings;
use strict;
use IkiWiki 3.00;
inject(name => 'IkiWiki::showform', call => \&myshowform);
sub myshowform ($$$$;@) {
	my $form=prepform(@_);
	shift;
	my $buttons=shift;
	my $session=shift;
	my $cgi=shift;

	my $str=cgitemplate($cgi, $form->title,
		$form->render(submit => $buttons), @_);

	my %names = ("Save Page" => "保存页面",
		"Preview" => "预览",
		"Cancel" => "取消",
		"Diff" => "差异比较",
		"Rename" => "重命名",
		"Remove" => "删除",
		"Login" => "登录",
		"Register" => "注册",
		"Logout" => "退出",
		"Setup" => "设置",
		"Users" => "所有用户",
		"Name" => "用户名",
		"Password" => "密码",
		"Email" => "电子邮件",
		"Save Preferences" => "保存选项",
		"Confirm Password" => "再次输入密码",
		"Create Account" => "创建帐户",
		"Reset Password" => "重置密码",
		"Insert Links" => "插入链接",
		"Rename Attachment" => "重命名附件",
		"Remove Attachments" => "删除附件",
		"FormattingHelp" => "格式帮助",
		"Reset" => "重置",
		"Save Setup" => "保存设置",
		"Advanced Mode" => "高级模式",
		"Account Creation Password" => "请输入帐户创建密码(管理员预设)"
	);

	foreach my $old_name (keys(%names)) {
		my $new_name = Encode::decode_utf8($names{$old_name});
		$str =~ s/<input +id="([_A-Za-z0-9]+)" +name="([_A-Za-z0-9]+)" +type="([_A-Za-z0-9]+)" +value="($old_name)" +\/>/<button id="$1" name="$2" type="$3" value="$4">$new_name<\/button>/g;
		$str =~ s/<input +name="([_A-Za-z0-9]+)" +type="([_A-Za-z0-9]+)" +value="($old_name)" +\/>/<button name="$1" type="$2" value="$3">$new_name<\/button>/g;
		$str =~ s/<input +class="([_A-Za-z0-9]+)" +id="([_A-Za-z0-9]+)" +name="([_A-Za-z0-9]+)" +type="([_A-Za-z0-9]+)" +value="($old_name)" +\/>/<button class="$1" id="$2" name="$3" type="$4" value="$5">$new_name<\/button>/g;
		$str =~ s/<div class="([_A-Za-z0-9]+)" id="([_A-Za-z0-9]+)">$old_name<\/div>/<div class="$1" id="$2">$new_name<\/div>/g;
		$str =~ s/<div class="([_A-Za-z0-9]+)" id="([_A-Za-z0-9]+)"><span class="([_A-Za-z0-9]+)">$old_name<\/span><\/div>/<div class="$1" id="$2"><span class="$3">$new_name<\/span><\/div>/g;
		$str =~ s/<a href="\.\/ikiwiki\/formatting\/">($old_name)<\/a>/<a href="\.\/ikiwiki\/formatting\/">$new_name<\/a>/g;
	}

	printheader($session);
	print $str;
}
I have found that the gettext strings cannot be translated to Chinese when I set LC_ALL to zh_CN.UTF-8.
Steps to reproduce:
- Create a new wiki via ikiwiki --setup /etc/ikiwiki/auto.setup
- Test your wiki and the recentchanges page
- add the pandoc plugin to the setup file and set the correct path to the executable (e.g. `/usr/bin/pandoc`)
- run `ikiwiki --setup mywiki.setup` again
Then the recentchanges page doesn't work anymore.
Pandoc version: 1.19.2.1. Ikiwiki version: 3.20170622.
On Trisquel 8.0, if you have the python-future
package installed, this causes the wrong module to get loaded by python2.7.
In /usr/lib/ikiwiki/plugins/proxy.py
:
try: # Python 3
    import xmlrpc.server as _xmlrpc_server
except ImportError: # Python 2
    import SimpleXMLRPCServer as _xmlrpc_server
xmlrpc.server
gets loaded even though we are using python2.7. This causes the following non-fatal error when pushing to the git repo:
remote: Traceback (most recent call last):
remote: File "/usr/lib/ikiwiki/plugins/rst", line 45, in <module>
remote: from proxy import IkiWikiProcedureProxy
remote: File "/usr/lib/ikiwiki/plugins/proxy.py", line 72, in <module>
remote: class _IkiWikiExtPluginXMLRPCDispatcher(_xmlrpc_server.SimpleXMLRPCDispatcher):
remote: AttributeError: 'module' object has no attribute 'SimpleXMLRPCDispatcher'
Interleaving logs from ikiwiki and the kernel:
[Wed May 02 15:50:32.307921 2018] [cgi:error] [pid 4914:tid 3031423808] [client 74.113.40.30:12004] AH01215: To /home/b-waldeneffect-org/source.git: /var/www/b-waldeneffect-org/ikiwiki.cgi, referer: http://www.waldeneffect.org/ikiwiki.cgi?do=blog&from=pending&subpage=1&title=Pros+and+cons+of+the+community+garden
[Wed May 02 15:50:32.308000 2018] [cgi:error] [pid 4914:tid 3031423808] [client 74.113.40.30:12004] AH01215: 0c67dc578..893cc6e9b master -> master: /var/www/b-waldeneffect-org/ikiwiki.cgi, referer: http://www.waldeneffect.org/ikiwiki.cgi?do=blog&from=pending&subpage=1&title=Pros+and+cons+of+the+community+garden
May 02 15:50:50 pell kernel: ikiwiki[5054]: segfault at bf7d3ffc ip b6ec9e63 sp bf7d4000 error 6 in libmarkdown.so.2.2.2[b6ec7000+11000]
[Wed May 02 15:50:50.222077 2018] [cgi:error] [pid 4914:tid 3031423808] [client 74.113.40.30:12004] End of script output before headers: ikiwiki.cgi, referer: http://www.waldeneffect.org/ikiwiki.cgi?do=blog&from=pending&subpage=1&title=Pros+and+cons+of+the+community+garden
[Wed May 02 16:15:48.013597 2018] [cgi:error] [pid 10708:tid 2838391616] [client 74.113.40.30:11989] AH01215: 893cc6e9b..c4f23b861 master -> master: /var/www/b-waldeneffect-org/ikiwiki.cgi, referer: http://www.waldeneffect.org/ikiwiki.cgi?do=blog&from=pending&subpage=1&title=Advantages+and+disadvantages+of+a+community+garden
[Wed May 02 16:15:57.921670 2018] [cgi:error] [pid 10708:tid 2838391616] [client 74.113.40.30:11989] AH01215: /home/b-waldeneffect-org/public_html/pending/Pros_and_cons_of_the_community_garden/index.html independently created, not overwriting with version from pending/Pros_and_cons_of_the_community_garden: /var/www/b-waldeneffect-org/ikiwiki.cgi, referer: http://www.waldeneffect.org/ikiwiki.cgi?do=blog&from=pending&subpage=1&title=Advantages+and+disadvantages+of+a+community+garden
So, apparently an img directive led to libmarkdown segfaulting, crashing ikiwiki after it had rendered a html file but before it made note that it had done so.
The user saw an "Internal server error" and hit reload, which failed due to the "independently created, not overwriting" check. The site was then wedged not accepting edits until manually fixed.
After deleting the html file, ikiwiki --refresh
successfully built
things, without libmarkdown segfaulting this time. I don't know if this was
a transient libmarkdown bug or a memory glitch.
Either way, seems that ikiwiki could better handle recovery from this kind of scenario. The "independently created" check has a security benefit... Perhaps ikiwiki could keep a log file of destdir files it's recently created but has yet to record in the index, and then the check can be skipped for those files.
--Joey
I have this in my ikiwiki.setup
:
comments_closed_pagespec: 'creation_before(comment_forbid_before)'
I was hoping it would forbid comments on pages older than comment_forbid_before.mdwn
. I created the page, of which you can see a rendering here:
https://anarc.at/comment_forbid_before/
Notice how ikiwiki correctly parsed the `meta` directive to mark the page as created about two years ago, as I have put this on top of comment_forbid_before.mdwn:
Yet the following page, which gets mysteriously respammed all the time, keeps on getting spammed anyways:
https://anarc.at/blog/2006-07-04-vol-et-ralentissement-de-ce-blog/
I have since then removed the spam, and worked around the problem by doing this configuration instead:
comments_closed_pagespec: 'creation_before(comment_forbid_before) or creation_year(2006)'
But later pages, "created before" the flag page, still have comments allowed:
https://anarc.at/blog/2017-07-03-free-software-activities-june-2017/
Now I know that I could list every year my blog was in operation until now in that pagespec to workaround this, but I figured it would be nicer to fix this bug.
Alternatively, it would be great to have a `creation_before(2 years ago)` or `creation_older_than(2 years)` to do what I actually want here, which is to keep spam to newer pages, to reduce the attack surface.
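Something along these lines might be a starting point for such a pagespec function; the name, the hard-coded unit of days, and the missing input validation are all just for illustration (untested sketch):

	package IkiWiki::PageSpec;

	# Sketch: creation_older_than(N) succeeds for pages created more than
	# N days ago.
	sub match_creation_older_than ($$;@) {
		my $page=shift;
		my $days=shift;

		if (time - $IkiWiki::pagectime{$page} > $days * 24 * 60 * 60) {
			return IkiWiki::SuccessReason->new("page is older than $days days");
		}
		else {
			return IkiWiki::FailReason->new("page is newer than $days days");
		}
	}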
Thanks!
-- anarcat
As best as I can recall, running ikiwiki-mass-rebuild as root has never worked for me on NetBSD or Mac OS X. On both platforms, it gives me a shell as each user in the system wikilist. This is due to non-portable arguments to su(1).
The following patch works much better on the aforementioned platforms, as well as CentOS 6:
diff --git ikiwiki-mass-rebuild ikiwiki-mass-rebuild
index ce4e084e8..2ff33b493 100755
--- ikiwiki-mass-rebuild
+++ ikiwiki-mass-rebuild
@@ -32,7 +32,7 @@ sub processuser {
 	my $user=shift;
 	return if $user=~/^-/ || $users{$user};
 	$users{$user}=1;
-	my $ret=system("su", $user, "-s", "/bin/sh", "-c", "--", "$0 --nonglobal @ARGV");
+	my $ret=system("su", "-m", $user, "-c", "/bin/sh -c -- '$0 --nonglobal @ARGV'");
 	if ($ret != 0) {
 		print STDERR "warning: processing for $user failed with code $ret\n";
 	}
The `-m` may be overzealous. I have some sites running as users with `/sbin/nologin` for a shell, and this allows running a command as those users, though without some typical environment variables. This is probably wrong. Maybe I should be doing something else to limit shell access for those users, and the su arg should instead be `-`.

--schmonz

To get some real-world and very cross-platform testing, I've committed a conservative version of this patch, with `-` in place of `-m`, to pkgsrc's ikiwiki package (rev 3.20180311nb1), and will report back. In the meanwhile, would this change cause any obvious regressions on Debian? --schmonz

su(1) does several things for us, not all of them completely obvious:
- raise or drop privileges
- avoid inheriting the controlling tty
- alter the environment
- run a PAM stack which can do more or less anything
- execute the given command
Because it's a privileged program, and POSIX/SUS don't specify the behaviour of privileged operations, its behaviour is determined by tradition rather than standards.
Dropping privileges (in this case) is uncontroversial: clearly we want to do that.
Not inheriting the controlling tty is necessary to prevent tty hijacking when dropping privileges (CVE-2011-1408, Debian bug #628843). See ikiwiki-mass-rebuild's git history. It might also be possible to do this with `POSIX::setsid`, but I don't know whether that fully protects us on all platforms, and I would hope that every platform's `su` does the right things for that platform.

Altering the environment is less clear. I'm taking the su(1) from Debian as a reference because that's what Joey would have developed against, and it has several modes for how much it does to the environment:
- with `-m` (or equivalently `-p` or `--preserve-environment`): reset only `PATH` and `IFS`; inherit everything else. I'm fairly sure we don't want this, because we don't want ikiwiki to run with root's `HOME`.
- without `-m` or `-`: reset `HOME`, `SHELL`, `USER`, `LOGNAME`, `PATH` and `IFS`; inherit everything else.
- with `-` (or equivalently `-l` or `--login`) but not `-m`: reset `HOME`, etc.; inherit `TERM`, `COLORTERM`, `DISPLAY` and `XAUTHORITY`; clear everything else.

Before Joey switched ikiwiki-mass-rebuild from dropping privileges itself to using `su` to fix CVE-2011-1408, it would reset `HOME`, inherit `PATH` and clear everything else. Using plain `su` without `-` and without clearing the environment is increasingly discredited, because it isn't 1980 any more and a lot of programs respect environment variables whose correct values are user-specific, such as `XDG_RUNTIME_DIR` and `DBUS_SESSION_BUS_ADDRESS`. So I think using `su -` would be reasonable and perhaps preferable.

Running the PAM stack is essentially unavoidable when we're altering privileges like this, and it's what PAM is there for, so we should do it. I think some `su` implementations (although not the one in Debian) run different PAM stacks for `su` and `su -`.

Finally, running the command. `su` has two design flaws in this area:
The command is a string to be parsed by the shell, not an argument vector; on Linux, this design flaw can be avoided by using `runuser -u USER ... -- COMMAND [ARGUMENT...]` from util-linux instead (essentially a non-setuid fork of util-linux su with more reasonable command-line handling), and on many Unix systems it can be avoided by using `sudo -u USER ... -- COMMAND [ARGUMENT...]`, but presumably neither is available as standard on all OSs because that would be far too helpful. runuser is also (still) vulnerable to `TIOCSTI` tty hijacking, because its developers think that ioctl has no legitimate uses and should be disabled or made a privileged operation in the Linux kernel, but the Linux kernel maintainers have rejected that solution and neither seems to be willing to back down.

We might be able to bypass this with this trick:
system('su', ..., '--', '-c', 'exec "$0" "$@"', $0, @ARGV);
using the fact that arguments to a Bourne/POSIX shell after `-c` are set as `$0`, `$1`, ... in the shell. But the second design flaw makes this unreliable.

`-c` is specified to run the given command with the user's login shell from `/etc/passwd` (which might be `nologin` or `csh` or anything else), not a standardized Bourne/POSIX shell, so you can't predict what (if anything) the given command will actually do, or even how to quote correctly. On Linux, giving `-s /bin/sh` works around this design flaw, but apparently that's not portable or we wouldn't be having this discussion.

In principle ikiwiki-mass-rebuild was already wrong here, because it receives arbitrary arguments and passes them to ikiwiki, but will do the wrong thing if they contain shell metacharacters (this is not a security vulnerability, because it's the unprivileged shell that will do the wrong thing; it's just wrong). Your proposed change makes it differently wrong, which I suppose is not necessarily worse, but I'd prefer it to be actually correct.

It seems that by using `-m` you're relying on root having a Bourne-compatible (POSIX) login shell, so that when `SHELL` is inherited from root's environment, it will parse the argument of `-c` according to `/bin/sh` rules. This is less reliable than Linux `su -s /bin/sh` and has more side-effects, but the man page collection on unix.com suggests that this meaning for `-s` is Linux-specific and has not been copied by any other OSs, which is depressing because that option seems to be the only way to achieve what we want.

In conclusion, non-interactive `su` is a disaster area, but we use it because traditional Unix terminal handling is also a disaster area, and I don't see a good solution. --smcv

After reading this, appreciating your effort writing it, and then ignoring it for a while, I think our easiest option might be to take a dependency on sudo. It's ubiquitous-ish, and where it's not already present the dependency feels more "suggested" than "required": ikiwiki is plenty useful for many/most uses without a working `ikiwiki-mass-rebuild` (as I can vouch). A slightly more annoying and thorough option might be to make the run-as-user command configurable, with some strong suggestions and warnings. Thoughts? --schmonz

Here's what I'm experimenting with now:
my $ret=system("sudo", "-n", "-s", "-u", $user, "/bin/sh", "-c", "--", "$0", "--nonglobal", @ARGV);
--schmonz
Works well for me on macOS and NetBSD. Does it look right? Can someone vouch that there is indeed no functional change on Debian? --schmonz
Following up on login problem, there's still some problems mixing https and http logins on sites that allow both and don't redirect http to https.
I think the only good solution to this is to configure web servers to redirect http to https, which is outside the scope of the ikiwiki software (but would be a useful configuration change on sites like ikiwiki.info). Redirecting from CGI code is problematic because reverse-proxies are a thing; see below. --smcv
If the user logs in on https first, their cookie is https-only. If they then open the http site and do something that needs them logged in, it will try to log them in again. But, the https-only cookie is apparently not replaced by the http login cookie. The login will "succeed", but the cookie is inaccessible over https and so they'll not be really logged in.
Mitigation: If you have a browser-trusted certificate (which lots of people do now, because Let's Encrypt exists), setting the `cgiurl` to `https://...` will result in the CGI (which is the only part that needs cookies) being accessed via https whenever the user follows links from static content. (`test_site4_cgi_is_secure_static_content_doesnt_have_to_be` in `t/relativity.t`.)

In the past I've wondered whether to add a distinction between authenticating and unauthenticating CGI URLs, so that on sites that don't particularly care about eavesdropping, anonymous/read-only actions like `?do=goto` can go via `http`, but write actions and actions that are usually authenticated like `?do=edit` go via `https`. However, in 2018 with Let's Encrypt widely available, it seems reasonable to just use `https` for all CGI accesses. --smcv
I think that the only fix for this is make the login page redirect from http to https, and for it to return to the https version of the page that prompted the login. --Joey
Redirecting the login page from http to https inside ikiwiki.cgi is problematic, because ikiwiki can't reliably know whether it was already accessed via https. If there is a reverse-proxy in use but the site admin has not set the `reverse_proxy` option (which is not always necessary even behind reverse proxies AIUI, and I suspect some reverse-proxied sites haven't set it correctly), then ikiwiki.cgi would infinitely redirect back to itself.

For example, suppose your frontend web server is example.com and your ikiwiki backend is 127.0.0.1:8080.
- frontend web server sees an access to http://example.com/ikiwiki.cgi
- frontend web server reverse-proxies it to http://127.0.0.1:8080/ikiwiki.cgi
- backend web server invokes ikiwiki.cgi with the `HTTPS` environment variable undefined
- ikiwiki.cgi thinks "I'm being accessed via plain http" (this time correctly, as it happens)
- ikiwiki.cgi sends a redirect to https://example.com/ikiwiki.cgi
- web browser follows redirect
- frontend web server sees an access to https://example.com/ikiwiki.cgi
- frontend web server reverse-proxies it to http://127.0.0.1:8080/ikiwiki.cgi
- backend web server invokes ikiwiki.cgi with the `HTTPS` environment variable undefined
- ikiwiki.cgi thinks "I'm being accessed via plain http" (this time incorrectly!)
- ikiwiki.cgi sends a redirect to https://example.com/ikiwiki.cgi
- goto
I think this redirection is better achieved via web server configuration, like the Apache configuration set up by `redirect_to_https: 1` in ikiwiki-hosting.

If you change ikiwiki's behaviour in this area, please add test-cases to `t/relativity.t` to cover the cases that changed.

--smcv
I have a private ikiwiki (3.20170111) which is running on a host that serves HTTP and HTTPS, but ikiwiki is configured for (and only served on) HTTPS:
url: https://redacted/phd/
cgiurl: https://redacted/phd/cgi
However, form submissions from ikiwiki are going to a HTTP URL and thus not being served. Example headers from submitting a comment:
Request URL:https://redacted/phd/cgi
Request Method:POST
Status Code:302 Found
Remote Address:redacted:443
Referrer Policy:no-referrer-when-downgrade
Response Headers
HTTP/1.1 302 Found
Server: nginx/1.10.3
Date: Fri, 08 Dec 2017 11:53:35 GMT
Content-Length: 0
Connection: keep-alive
Status: 302 Found
Location: http://redacted/phd/blog/38th_Dec/?updated#comment-bd0549eb2464b5ca0544f68e6c32221e
Your form submission was in fact done successfully. The failing redirection to http is when ikiwiki follows up the successful edit by redirecting you from the form submission URL to the updated page, which is done by `IkiWiki::redirect`. --smcv
The CGI is served by lighttpd, but the whole site is front-ended by nginx, which reverse-proxies to lighttpd.
I think this might be to do with nginx not rewriting POST URLs when reverse-proxying, but I'm not sure why they would be generated in an HTTP form in any case, except perhaps by lighttpd's CGI handler since the back end is HTTP. A workaround is for nginx to redirect any HTTP URI to the HTTPS equivalent. I initially disabled that so as to have the path for letsencrypt negotiation not redirected. -- Jon
Do you have the `reverse_proxy` option set to 1? (It affects how ikiwiki generates self-referential URLs.)

Is the connection between nginx and lighttpd http or https?
I think this is maybe a bug in `IkiWiki::redirect` when used in conjunction with `reverse_proxy: 1`: when marked as behind a reverse proxy, `IkiWiki::redirect` sent `Location: /phd/foo/bar/`, which your backend web server might be misinterpreting. ikiwiki git master now sends `Location: https://redacted/phd/foo/bar/` instead: does that resolve this for you?

Assuming nginx has a reasonable level of configuration, you can redirect http to https for the entire server except `/.well-known/acme-challenge/` as a good way to bootstrap ACME negotiation. --smcv
Ikiwki sends me page change notifications like this:
A change has been made to http://joeyh.name/devblog/git-annex_devblog/day_423__ssh_fun/
This links to a page that says: "The page ?day 423 ssh fun does not exist."
The problem is caused because Joey's devblog includes posts syndicated from his git-annex devblog.
Either the notification e-mails should use the URL of the syndicated post, like http://git-annex.branchable.com/devblog/day_423__ssh_fun/ or the URL for the post on joey.name should redirect to the git-annex devblog post.
On the https://joeyh.name/ ikiwiki preference page I added an e-mail subscription PageSpec. Now when I view the preference page the PageSpec field is empty, but I'm still getting e-mails.
My guess at the cause of the problem is that I created an account using the e-mail login, then registered another account with a username. I think now when I login via either method I'm accessing the account with a username, while the e-mail only account has the PageSpec for the subscription.
The e-mail notifications include a link to http://joeyh.name/ikiwiki.cgi?do=prefs, but they could include a login token so I can access the page and edit the PageSpec.
Steps to reproduce:
- visit an ikiwiki site like http://ikiwiki.info/ or http://git-annex.branchable.com/
- trigger the login page by accessing preferences or trying to edit something
- login page is served without encryption
Firefox gives all kinds of warnings for unencrypted login pages.
The fix is for the login page to redirect to the https version of the wiki before showing the login form.
This is web server configuration for those sites, so not really a bug in the ikiwiki software. If you run an ikiwiki instance and you have a browser-trusted certificate, I would recommend:
- setting the `url` and `cgiurl` options to `https://...`
- configuring your web server (frontend web server if you are using a reverse-proxy) to redirect from `http://...` to `https://...` automatically, possibly excluding `/.well-known/acme-challenge/` to make it easier to bootstrap Let's Encrypt certificates

In ikiwiki-hosting the latter can be achieved by setting the `redirect_to_https` option to `1`.

When not using ikiwiki-hosting, the ikiwiki software does not control the web server configuration, so it can't do this for you. The CGI script could redirect from http to https if it knew you had a browser-trusted certificate, but it can't know that unless you tell it (by setting `url` and `cgiurl`), and there's the potential for infinite redirect loops in misconfigured reverse-proxy setups if it did that (see login problem redux), so I think this is better solved at the web server level.

The operator of ikiwiki.info and branchable.com can change the web server configuration for those sites, but other ikiwiki developers can't. --smcv
Hello.
I would like an option to make the description of the change (the commit message) mandatory instead of optional: when enabled, a change to ikiwiki could not be saved without a description.
I was not able to find such an option or google it.
Please add such an option if it is not in ikiwiki already.
Thank you very much.
When I run ikiwiki with the `--rebuild` option (or only with the `--setup file.setup` option), a map directive like `[[!map pages="*" show=title]]` generates a page map as if it didn't contain any `show` parameter. Only after I manually edit something that causes the page containing the map directive to be rebuilt is the page map regenerated without ignoring the `show` parameter.
I can't seem to do a password reset on this wiki. I am writing this through the anonymous git push interface (phew for that!).
I have tried three times now to reset my password through the user interface - my account name is anarcat, and when I do the password reset, I get a token. I go visit the website, set a passphrase, click Save Preferences, and I end up on a login form. I enter my passphrase, click Login, and I get the error:
1 error(s) were encountered with your submission. Please correct the fields highlighted below.
Name
[anarcat]
Password
[*************] Invalid entry
Password
is highlighted.
Even if I leave the password there (my cleartext password is in the login form by default after the password reset, which is strange), it still gives me that error. -- anarcat
Multimarkdown footnotes are pretty useful. If they are enabled in a wiki, they don't look so good with the default stylesheet, however, as the references are in the same size and positioning as everything else.
This particular wiki does not use multimarkdown, so there's no easy way to demonstrate this here, you'll have to trust me on this.
The following stylesheet should be added to style.css
:
a.footnote { vertical-align: super; font-size: xx-small; }
div.footnotes { font-size: small; }
This is a typical style that user-agents apply to the <sup>
tag. For
example, chromium has this builtin style for <sup>
:
vertical-align: super;
font-size: smaller;
Bootstrap uses this instead:
sup {
    top: -.5em;
}
sub, sup {
    position: relative;
    font-size: 75%;
    line-height: 0;
    vertical-align: baseline;
}
I settled on xx-small
because it's the only size that doesn't affect
line-height here. However, Bootstrap's way may be better.
At any rate, the correct way to fix this is to avoid custom styling
and use the <sup>
tag for the footnote reference, as it has
meaning which is important to have proper semantic output (e.g. for
screen readers), as detailed in this Stack Overflow discussion.
--anarcat
ikiwiki code does not interpret Markdown or translate it into HTML. If I'm interpreting what you say correctly, you seem to be implying that you think Text::MultiMarkdown is producing incorrect HTML for footnotes (is an `<a>` with a `class`, should be a `<sup>`). If so, please report that as a MultiMarkdown bug, not an ikiwiki bug, or alternatively don't use MultiMarkdown.

The recommended backend for the mdwn plugin is Text::Markdown::Discount, which optionally implements footnotes using the same syntax as MultiMarkdown (originating in "PHP Markdown Extra"). However, ikiwiki doesn't currently enable that particular feature. Maybe it should, at least via a site-wide option.
What remains after eliminating the MultiMarkdown bug seems to be: ikiwiki's default stylesheet does not contain the necessary styling to work around the non-semantic markup produced by the non-default Text::MultiMarkdown Markdown implementation. Is that an accurate summary? --smcv
That is an accurate summary.

I didn't realize that Discount didn't actually support footnotes in Ikiwiki by default. I guess I enabled Multimarkdown exactly for that kind of stuff that was missing... It seems to me it would be reasonable to enable footnotes in Ikiwiki. There's already a lot of stuff that Discount does that is way more exotic (e.g. tables) and non-standard (e.g. `abbr:`).

Ideally, users would get to configure which Discount flags are enabled in their configuration, but I understand that makes the configuration more complicated and error-prone.
Discount enables enough features by default that adding footnotes doesn't seem bad to me. I'm also tempted by something like
mdwn_enable: [footnotes] mdwn_disable: [alphalist, superscript]
where the default for anything that was neither specifically enabled nor specifically disabled would be to enable everything that we don't think is a poor fit for the processing model (pandoc-style document headers) or likely to trigger by mistake (typographic quotes and maybe alpha lists). --smcv
Makes perfect sense to me. --anarcat
I have now enabled footnotes in Discount by default, with a new `mdwn_footnotes` option that can switch them off if they become problematic. --smcv

For example, to enable footnotes, one needs to call Discount like this:
Text::Markdown::Discount::markdown($text, Text::Markdown::Discount::MKD_EXTRA_FOOTNOTE())
That being said, Discount generates proper semantic markup when footnotes are enabled, so this bug doesn't apply to the default Discount mode, if we ignore the fact that it doesn't support footnotes at all. Should I open a todo about this and the above?
Also, it seems this is a bug with multimarkdown - I have reported the issue there.
In the meantime, wouldn't it be better to have some styling here to workaround the problem in MMD?
Honestly, I'd rather have ikiwiki's level of support for the non-preferred Markdown implementation be: if you are stuck on a platform with no C compiler or Perl headers, you can use the pure-Perl Markdown flavours, and they will sort of mostly work (but might not look great).
I'm a little concerned that styling these rather generically-named classes might interfere with the implementations of footnotes in other Markdown implementations, or indeed non-Markdown - I wouldn't want to style `a.footnote` if the HTML produced by some other htmlize hook was `<sup><a class="footnote" ...>[1]</a></sup>` for instance. But they're probably harmless.

Alright, your call. At least this bug will be available as a workaround for others that stumble upon the same problem! --anarcat
Note that I also make the bottom <div>
small as well so that it has
less weight than the rest of the text. -- anarcat
I can't seem to login to ikiwiki sites reliably anymore.
I am not sure what is going on. It affects this wiki and the git-annex wiki. I am editing this through the anonymous git push interface.
OpenID is failing on me. That is, sometimes it works, sometimes it doesn't. For example, while writing this, I clicked the "Preferences" link and I seemed to have been logged in automatically without problem, even though I previously tried to login and failed with an error similar to Error: OpenID failure: time bad sig:, which of course I cannot reproduce anymore on ikiwiki.info now:
Error: OpenID failure: time_bad_sig: Return_to signature is not
valid.
I can still reproduce this on the git-annex wiki, however, which is odd. This could be because the OpenID host is screwing up, as I am not directly responsible for that box anymore... but then why would it work on one wiki and not the other?
But worse, I cannot login with my regular non-OpenID user, which I started using more regularly now. When I type the wrong password, the login form gives me "Invalid entry" next to the password field. But then if I do a password recall and reset my password, I get a different error:
Error: login failed, perhaps you need to turn on cookies?
That happens reliably on git-annex.branchable.com. ikiwiki.info seems to be more stable: I can eventually login. I can login to my personal wiki with OpenID fine. I can also login to branchable.com itself with openid without issues.
So I guess the problem is mostly with git-annex.branchable.com? Not sure how to debug this further.
Thanks. --anarcat
Update: now I can't login to the ikiwiki.info site anymore, getting the same errors as on the git-annex site.
Update2: it seems this is specific to the HTTP/HTTPS switch. If I use HTTPS, things work fine, but not with plain HTTP. So I'm moving this to the branchable wiki, as I am not having that problem on other ikiwiki sites. Maybe the bug specific to ikiwiki is the lack of clarity in figuring out wth is going on here... See http://www.branchable.com/bugs/login_failures_without_https
This seems to be a concatenation of multiple unrelated problems with different stuff, which is not a good bug report technique. Then to add to the fun, you filed the same bug on branchable too. Sigh!
The `time_bad_sig` problem with the perl openid library is a problem I am aware of, but it's not clear if the problem is clock skew or a protocol problem. At least one user who reported it seemed to get it due to an http proxy. I'm pretty sure it could also happen if multiple openid logins were attempted at the same time (the `consumer_secret` which is stored server-side is used). The problem is not specific to ikiwiki.

Ikiwiki says "login failed, perhaps you need to turn on cookies?" when the user successfully logged in, but their cookie does not indicate why they were logging in to begin with, so ikiwiki does not know what action to continue to. One way to get this when cookies are enabled is to re-post a login form after already using it, by eg using the back button to go back to a previous login form and try to reuse it.
--Joey
I am sorry. I thought the problem was originally related to ikiwiki, then figured it was only happening on branchable sites, so I figured it was better to report it on the branchable.com forums.
I know that there's an OpenID-specific issue, but I had such issues in the past and successfully solved those. Because of the timing of the emergence of the problem, I felt there was a correlation between the two issues.
And indeed, there seems to be a HTTPS-related issue: both login mechanisms work fine when on HTTPS, and both fail on HTTP. So I don't see those things as being necessarily distinct. -- anarcat
I've explained how the "login failed, perhaps you need to turn on cookies?" can happen and what it means. Clearly nothing to do with http; clearly not specific to branchable.
I just now logged into this site using openid over http, and it worked ok. I think it's more likely that the
time_bad_sig
problem occurs intermittently (which would make sense if it's a timing related issue), and so you've just so happened to see it when logging in with http and not https, than that there's actually a correlation. --Joey
Probably caused by something misconfigured about the comments plugin.
Config
My setup file:
# comments plugin
# PageSpec of pages where comments are allowed
comments_pagespec: forum/* or blog/posts/* or tarefa/*
# PageSpec of pages where posting new comments is not allowed
comments_closed_pagespec: ''
# Base name for comments, e.g. "comment_" for pages like "sandbox/comment_12"
comments_pagename: comment_
# Interpret directives in comments?
#comments_allowdirectives: 0
# Allow anonymous commenters to set an author name?
comments_allowauthor: 1
# commit comments to the VCS
comments_commit: 1
# Restrict formats for comments to (no restriction if empty)
comments_allowformats: mdwn txt
The `moderatedcomments` plugin is not enabled.
The `anonok` plugin is not enabled.
What are your complete `add_plugins` and `disable_plugins` options? Which version of ikiwiki are you running? Are you using any third-party plugins or patches? --smcv

Pasted here
I asked three questions and you gave one answer. Please answer the other two questions. --smcv
Steps
I've tried to place a comment by clicking the obvious Add a comment link in a forum post.
I did not sign in, because the sign-in page didn't come up; instead a simple (You might want to Signin first?) showed up, which I hadn't read, and I commented away.
Results
As a consequence of that, the user '' - that's an empty username - has somehow logged in, and it seems that there is no way to log it out.
None of this phantom user's edits are being committed - this blog post was made with that user logged in via web.
It seems I can't log out from nowhere. I've rebuild the wiki from the command line and restarted the nginx server, the phantom user remains logged in and open to anyone willing to edit away the wiki.
I wonder whether this might be caused by the combination of the `httpauth` plugin with the nginx web server. `httpauth` is known to work correctly with Apache, but you might be the first to use it with nginx.

Specifically, I wonder whether `$cgi->remote_user()` might be returning the empty string. Looking at the code, we expect it to be either a non-empty username, or `undef`.

Please try installing this CGI script on your nginx server, making it executable and accessing its URL without carrying out any special HTTP authentication (you can delete the script immediately afterwards if you want). If my theory is right, you will see a line `REMOTE_USER=` in the output. Post the output somewhere, or mail it to smcv at debian.org if you don't want to make it public.

	#!/bin/sh
	printf 'Content-type: text/plain\r\n\r\n'
	env | LC_ALL=C sort
If you do not intend to use HTTP basic authentication, please do not enable the `httpauth` plugin. That plugin is intended to be used in conjunction with a web server configured to require HTTP basic authentication with one of a limited set of authorized usernames.

--smcv
If my theory is correct, ikiwiki git master now works around this, and the httpauth documentation now recommends a more correct configuration. --smcv
Conclusion
If I wanted to do a totally anonymous wiki, this would be the best setup ever.
For this particular installation, that's not the case.
Question
Is there a session file or something to logout this phantom user?
See inside dot ikiwiki.

`.ikiwiki/userdb` is a Perl Storable file; there are instructions for inspecting it on that page. `.ikiwiki/sessions.db` is most likely a Berkeley DB file.

I would be interested to see the contents of these two files and the complete `.setup` file. I would also be interested to see a tarball of the entire wiki source directory, if it isn't excessively large. If you'd be willing to share them, please contact smcv@debian.org. --smcv

I think I sent them right away when you asked; anyway, I still have the tarball hanging around. The last iikb domains will expire next month though, so the wiki will only be accessible via the mirror https://notabug.org/iikb/dev.iikb.org.
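For reference, the whole Storable file can be dumped with a one-liner along these lines (same technique as the snippet earlier on this page):

	perl -MStorable -MData::Dumper -le 'print Dumper(Storable::retrieve(".ikiwiki/userdb"))'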
I see from the tarball that you have a lot of uncommitted changes. This is probably because whatever is causing the anonymous accesses to succeed is breaking other code paths by giving them an empty username: in particular it seems reasonably likely that the
git
plugin will refuse to commit changes in that situation.I would expect that you should be getting error messages on the ikiwiki CGI script's
stderr
in this situation. In Apache they would normally end up inerror.log
; I don't know how nginx does logging, but it is probably something similar. Please check that log for relevant-looking error messages. --smcv
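(For reference, and as an assumption about this particular setup rather than something reported above: with nginx plus fcgiwrap, the CGI's stderr usually ends up either in nginx's error log, whose location is set by the error_log directive, or in the log of whatever supervisor spawned fcgiwrap.)

    # in nginx.conf; path and log level are only examples
    error_log /var/log/nginx/error.log info;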
I have an ikiwiki where I activated the sidebar plugin: enable sidebar? yes; show sidebar page on all pages? yes.

In order to create the page, I added a link to it on my entrance page: [[sidebar]]. I could then choose where to put it, and I put it at "sidebar", not "index/sidebar". The language is Markdown, and nothing else is installed on the system. After saving, I created the page by clicking on the "?" and filled it with links and the pagestats plugin. The sidebar appears properly on all pages, as does the tag cloud.

However, when I try to go into "preferences" I just get an error message; the entire response is the literal text "Content-type: text/html" instead of a rendered page.
What is happening?
Steps to reproduce:
- Running ikiwiki version 3.20130904.1ubuntu1 on Ubuntu 14.04 LTS
- ikiwiki accessed via https://DOMAIN/wiki/ikiwiki.cgi using fcgiwrap and Nginx
- Start ikiwiki site
- Edit an existing page
What should happen:
- Change is immediately available
What happens instead:
- Change is sometimes not immediately available
- After (approx) 1-2 minutes, change is available
Other notes:
- Similarly for creating new pages
- Not consistent (the next edit may be visible immediately)
- If changes are visible from one browser, may not be visible from another browser on a different machine, logged in as the same user (admin)
- Seems to be happening less / not at all after running the site for approx 30-60 minutes
- fcgiwrap is invoked with Supervisor (aka supervisord)
- Related Nginx location blocks:
# non-wiki files at DOMAIN/...
location / {
try_files $uri $uri/ /index.html =404;
}
# wiki files at DOMAIN/wiki
location /wiki {
alias /home/USERNAME/public_html/WIKINAME;
}
# wiki script at DOMAIN/wiki/ikiwiki.cgi
location /wiki/ikiwiki.cgi {
fastcgi_pass unix:/tmp/fcgi.socket;
fastcgi_index ikiwiki.cgi;
fastcgi_param SCRIPT_FILENAME /home/USERNAME/public_html/WIKINAME/ikiwiki.cgi;
fastcgi_param DOCUMENT_ROOT /home/USERNAME/public_html/WIKINAME;
include /etc/nginx/fastcgi_params;
}
Please let me know if this is expected/known, and/or if there's anything helpful I can add to the report.
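Not part of the original report, but one way to narrow this down: ask the web server for the page directly and compare its Last-Modified header with the generated file on disk. If the on-disk HTML is already up to date while the browser still shows the old version, the staleness is in browser or proxy caching rather than in ikiwiki's rebuild. A sketch, with DOMAIN, USERNAME, WIKINAME and the page path as placeholders:

    curl -sI https://DOMAIN/wiki/SomePage/ | grep -iE 'last-modified|etag|cache-control'
    ls -l --time-style=full-iso /home/USERNAME/public_html/WIKINAME/SomePage/index.html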
Saw a site using the po plugin crash with:
syntax error in pagespec "\"page(./tips/*)"
I suspect the relevant configuration is this:
    po_translatable_pages: /index or /hugo or /hugo/keys or /about or /archive or /tips
      or /talks or /imprint or /copyright or /blog or /posts or /law or /quotes or /quotes/*
Config problems in ikiwiki.setup should really not cause the whole site build to crash; this can make it hard to recover. --Joey
Given who's reporting this, am I right in assuming that's with ikiwiki 3.20150614? --smcv
I tried to set up a small site with auto-blog.setup and played a bit with it:

If I activate the po plugin and set po_translatable_pages to something meaningful (like the example: * and !*/Discussion),
then I get the same error

    syntax error in pagespec "\"page(./posts/*)"

but only after a second run of ikiwiki --setup site.setup.

My attempts to get a clue:

- deleting all po and pot files and running the rebuild again - works fine
- running the rebuild a second time - error as above
- tuning any of the pagespec variables in the setup and in the inline directives of the blog or sidebar doesn't change anything, except that leaving po_translatable_pages empty makes the rebuild work and creates no po files (as expected).

Is this helpful or have I done anything stupid? -- Michael
This would be helpful if I could reproduce the crash from your instructions, but I couldn't. Which version of ikiwiki is this? --smcv
It was version 3.20141016.2, as in Debian stable / jessie.

I tried again with version 3.20160121, as in Debian sid: same behavior as described. I set up a new blog with auto-blog.setup, activated the po plugin with the defaults, and got the error again (running ikiwiki --setup twice). --Michael
Links:

- https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=775310
- http://ikiwiki.info/forum/Your_local_changes_to_the_following_files_would_be_overwritten_by_merge:/
Phil is kindly running ikiwiki on hands.com as http://rhombus-tech.net. We continuously run into "merge" issues which need recovering on a near-monthly basis.

I have a local checkout of the repository: I often need to upload images via that, doing the usual "git pull", followed by "git commit -a", followed by "git push", adding an HTML page edited in vim as well as the images.

I also often need to "recover" the wiki - for example by renaming pages that users have erroneously added, deleting pages that they should not have made, or moving pages from locations where they should not have been added or that I decide should be restructured.

These are the operations where everything usually gets completely fscked.

The really weird thing is that when I know that things are out of sync, a "git pull" gives a completely different head branch from the one shown through the RecentChanges log!

Phil has often had to recover an entire set of files that are completely out of sync, that never enter the "git pull" stream onto my laptop, and are not visible on the wiki itself either.

This is all incredibly strange and mysterious, but it basically means that ikiwiki is not particularly robust and reliable for everyday use. I'd very much like it to be!
Calling ikiwiki with a bunch of options, including the --dumpsetup somefile.setup
option creates somefile.setup
for later reuse with the --setup
option. The wiki state dir however is not saved in the setup file, it has no wikistatedir
at all.
Strange, since the same kind of bug has been fixed for destdir.
--bbb
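For reference, the invocation being discussed looks roughly like this (file names are placeholders, and the exact option list will vary):

    # write out a setup file from the current options, then reuse it later
    ikiwiki --setup mysite.setup --dumpsetup mysite-dumped.setup
    ikiwiki --setup mysite-dumped.setup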
Options flagged as internal are not saved, and are not meant to be useful to save. wikistatedir is one of those. Why do you want wikistatedir to be in the dumped setup?

wikistatedir is always $config{srcdir}/.ikiwiki unless you are doing something incredibly strange. --smcv
Hello, I stumbled upon this bug when writing a Python plugin. I think this is an ikiwiki bug, since I do not think my plugin does anything wrong.
Example
I set up an example wiki, containing the setup file and the plugin, on github.
Clone the repository
git clone https://github.com/paternal/ikiwiki-rpcbug.git
Change to the right directory
cd ikiwiki-rpcbug
Add the right ikiwiki directory to PYTHONPATH
export PYTHONPATH="$PYTHONPATH:$(dirname $(dpkg -L ikiwiki | grep proxy))"
Build the wiki
ikiwiki --setup wiki.setup --rebuild
The problem is on page http://localhost/~USERNAME/ikiwiki_bug_rpc/foo/ (for instance, http://localhost/~USERNAME/ikiwiki_bug_rpc is fine).
Problem

Page foo contains the directive [[!rpcbug ]] (rpcbug being the name of the plugin). Calling proxy.rpc("srcfile", "bar") in the preprocess function seems to mess up the RPC communication between ikiwiki and the plugin, and the result is that the generated foo/index.html page is a text file containing the return value of the preprocess call.

What I do not understand is that disabling the format function (by commenting out line 46 of the plugin) solves the problem. Half of an explanation is that I think that when the proxy.rpc function is called, the RPC communication is messed up in such a way that the format function is not called, and the return value of preprocess is taken to be the whole HTML code of the resulting page.

Later I understood why: since the first RPC call messes up the communication, further RPC calls are messed up too, but if there is only one RPC call, not that much is broken. -- Louis
I hope someone will understand the problem better than I do, because I have no idea about how to solve this.
Regards,
-- Louis
I used the debug feature provided with proxy.py and rebuilt the wiki. I ran this with this version of my minimal bug example.

- The bug happens in function preprocess (in the call to srcfile, to be more precise).
- The directive causing the bug is called on page foo.
- Communication between Ikiwiki and the plugin is here.
- The resulting HTML (for page foo) looks like:

    [[!rpcbug Erreur: internal error: foo cannot be found in /home/louis/projets/ikiwiki/rpcbug or underlay]]
    Calling srcfile(foo): page
    Calling srcfile(README.md): /home/louis/projets/ikiwiki/rpcbug/README.md

My analysis:

- The call to srcfile(foo) fails (because Ikiwiki thinks that page foo does not exist).
- Ikiwiki thinks that processing of the directive is finished, whereas the plugin is still waiting for Ikiwiki's answer.
- Ikiwiki asks the plugin to render a new directive, but the plugin interprets the request as the return value for its previous request. Thus, the plugin thinks that srcfile(foo) is page (this page being a misinterpretation of the Ikiwiki request).

So, I think that this might be an error in the rpc_call function of the external plugin: when the called method fails, it should return something (or raise an exception, if this is possible in RPC) to notify the plugin that something went wrong. -- Louis
Update: This can actually be a proxy error. Indeed:

- Ikiwiki sends a methodCall message to the plugin (which is a call to the preprocess function);
- the plugin sends a methodCall message to ikiwiki (which is a call to the srcfile function);
- Ikiwiki answers with a methodCall message:
  - Ikiwiki answers this way because the function call failed, and it is already processing the next directive;
  - the plugin thinks that this is the answer to its own request, and misinterprets it.

Thus, I think that the bug is in the proxy.py Python file. On receiving a methodCall (instead of a methodResponse) as the answer to a methodCall request, proxy.py should notice the type of the message, and call the requested function.

I know Python better than I know Perl, so I can try to fix this.
-- Louis
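To make the proposed fix concrete, here is a minimal sketch of the dispatch logic described above, written against Python's standard xmlrpc.client parser. The helper names (dispatch, pending_response) are hypothetical; this is not the actual proxy.py code.

    import xmlrpc.client

    def handle_incoming(xml_payload, dispatch, pending_response):
        """Decide whether an incoming XML-RPC message answers our pending
        request, or is a new methodCall from ikiwiki to service first."""
        try:
            params, methodname = xmlrpc.client.loads(xml_payload)
        except xmlrpc.client.Fault as fault:
            # A <fault> answer: our own request failed; propagate the error.
            raise fault
        if methodname is not None:
            # A methodCall: ikiwiki is calling us (e.g. format()); service it
            # instead of mistaking it for the answer to our own request.
            return dispatch(methodname, params)
        # A methodResponse: this really is the answer to our pending request.
        return pending_response(params[0])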
I fixed this bug in a branch on my Ikiwiki clone. Please review
-- Louis
The meta plugin, when used to add a stylesheet to a page, adds the following attributes by default:

    rel="alternate stylesheet"
    title="mystylesheet"

The intent of this feature, according to the documentation, is to "add a stylesheet to a page".

- By setting rel="alternate stylesheet", the additional stylesheet is treated as an "alternate stylesheet" as described in http://www.w3.org/Style/Examples/007/alternatives.en.html and is not activated by default in the browser. The user is responsible for activating it somehow.
- The title attribute is used to group several alternate stylesheets into a single one. This attribute is otherwise "purely advisory" as defined in http://www.w3.org/TR/html5/document-metadata.html#attr-link-title.
The current default behavior of the plugin implies having the additional
stylesheet not activated (if you don't set rel="stylesheet"
) or only
one of them activated (if you add two stylesheets and not set the same
title for both). This was hard to understand for two of us while working
on https://labs.riseup.net/code/issues/9314 and until we went and read
those W3C documents.
I think that, to better match the description of that feature and to be easier to comprehend in its default setting, the meta plugin should by default:

- Set rel="stylesheet".
- Not set any title.
If we agree on this proposal, I'm willing to provide a patch.
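To illustrate the difference, the generated link element would change roughly as follows (a hypothetical rendering; the href and type attributes here are placeholders, not taken from the plugin's actual output):

    <!-- current default: an alternate stylesheet, not applied until the user activates it -->
    <link rel="alternate stylesheet" title="mystylesheet" href="mystylesheet.css" type="text/css" />
    <!-- proposed default: applied unconditionally -->
    <link rel="stylesheet" href="mystylesheet.css" type="text/css" />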
This applies to all versions since c8b4ba3 and until today.
A simple use of this plugin seems to fail now, yielding either a blank map or some javascript errors.
The javascript errors I saw are:
SyntaxError: syntax error
OpenLayers.js (line 476, col 64)
ReferenceError: OpenLayers is not defined
osm.js (line 30, col 1)
--Joey
I guess OpenLayers made a backwards-incompatible change... At reseaulibre it seems we have survived this because we have a local copy of the OpenLayers source code:
osm_openlayers_url: http://wiki.reseaulibre.ca/OpenLayers-2.12/OpenLayers.js
Try specifying a versioned URL for the source:
osm_openlayers_url: http://openlayers.org/api/2.12/OpenLayers.js
... and see if that fixes the problem. Then we can start looking at the release notes to figure out what change they did that broke us and upgrade. Or pin the version on our side. Or simply switch to something else. --anarcat
Now I know it's "bad" to rewrite history in git, but sometimes, and especially with public sites such as a wiki, if confidential information gets transmitted in the wiki, it can be pretty important to remove it, and the only way to do this on a public git repo is by rewriting history.
(This happened as part of my implementation of git-annex support to be honest, but i think it applies to other situations as well.)
The problem is that ikiwiki keeps track of the last commit it saw in $srcdir/.ikiwiki/indexdb
. Then it uses this to infer which files changed. If history changed, this will fail with a fairly dramatic:
Error: 'git log --pretty=raw --raw --abbrev=40 --always -c --no-renames --reverse -r f9330f40527ba1f7df6656490cacb9d5ae9e2cd6..HEAD -- .' failed:
Notice how the error message from git isn't present. It's in the error.log
:
[Mon Mar 30 20:20:04.393466 2015] [cgi:error] [pid 21463] [client 2001:1928:1:9::1:54315] AH01215: fatal: Invalid revision range f9330f40527ba1f7df6656490cacb9d5ae9e2cd6, referer: http://anarc.at/ikiwiki.cgi?do=edit&page=services%2Fwiki
The workaround I have found was to remove the indexdb file, because that's apparently legit. But it would be nice (1) to have a proper error message (I had to dig around in the error.log to understand what was going on), (2) to have a proper fallback if the git log fails, and (3) to recover with the newer commit ID when we fall back. --anarcat
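For anyone else hitting this, the workaround described above looks roughly like this (paths and setup file name are placeholders; a full rebuild is the heavy-handed but safe option):

    # after the rewritten history has been pushed to the srcdir's repository
    rm /path/to/srcdir/.ikiwiki/indexdb
    ikiwiki --setup mysite.setup --rebuild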
FWIW, I had a 500 Internal Server Error while submitting this bug at first.

I've just hit this, and fixed it thanks to you reporting what you did. Thanks! (fix in opinionated ikiwiki) — Jon, 2021-01-08
The $srcdir/.ikiwiki/sessions.db
file gets big. Does anything clean it?
-rw------- 1 jon jon 81M Mar 10 15:41 sessions.db
That's 155,990 records, with the earliest having an atime of Sat, 29 Nov 2008 17:33:43 GMT… — Jon
The following code would be expected to produce a 3-month output similar to gcal.
[[!calendar type="month" month="-1"]]
[[!calendar type="month" ]]
[[!calendar type="month" month="+1"]]
Behaviour: The 3rd entry doesn't show the next month, but the 1st month of the year (aka January).
Problem: Since there are no negative month numbers (unless someone starts with March because of Feb 29), -1 is interpreted correctly. Explicitly positive numbers aren't recognized as being relative. Possibly it is the numerical interpretation of the value: there is no difference between n and +n.

Solution: treat the value as a string, check for a leading +, and set a relativeMonth flag (which should then also happen for negative values, if it does not happen yet). If it is set for the month in question, first calculate month_year and then go on as usual.

Idea: since I mentioned gcal earlier, how about some of its shorthand syntax, such as "." for this, ".-" for the previous, and ".+" for the next month together with its neighbours?
-- EdePopede
On my blog, I have set up a simple calendar and sparkline in the sidebar, similar to Joey's. Unfortunately, in my case it looks like all posts were made in February, the date at which I converted from Drupal.

This is how I did the directives:
    [[!calendar pages="blog/* and !blog/*/* and !*/Discussion"]]
    [[!calendar pages="blog/* and !blog/*/* and !*/Discussion" month=-1]]

    Temps passé entre les articles:

    [[!postsparkline pages="blog/* and !blog/*/* and !link(foo) and !link(unfinished)" max=50 formula=interval style=bar barwidth=2 barspacing=1 height=13]]

    Articles par mois:

    [[!postsparkline pages="blog/* and !blog/*/* and !link(foo) and !link(unfinished)" max=23 formula=permonth style=bar barwidth=2 barspacing=1 height=13]]
Is it possible the meta(date)
directives are being ignored by those plugins? --anarcat
For background, each page has two dates: creation date (ctime, meta(date)) and last modification date (mtime, meta(updated)). postsparkline defaults to showing the ctime but can be configured to use the mtime instead; calendar always uses ctime. So what you're doing should work like you expect.

The plugins don't get to choose whether they ignore meta(date); the effect of a meta(date) directive in $page is to set $pagectime{$page} during scanning (overriding whatever was found in the filesystem), and that data structure is what the plugins read from. So the first thing to investigate is whether the ctime in your .ikiwiki/indexdb is correct. --smcv
< thm> joeyh: ping
< thm> can you update the embedded jquery-ui? (for cve 2010-5312, and/or 2012-6662)
I'll do this next time I spend some time on ikiwiki unless Joey or Amitai gets there first.
It doesn't look as though we actually use the vulnerable functionality.
--smcv
This is more complicated than it looked at first glance because both jquery and jquery-ui have broken API since the version we embed, and we also ship other jquery plugins for attachment. Perhaps someone who knows jquery could check compatibility and propose a branch? --smcv
I have here a site that uses the po plugin, and recently had this change committed to its setup:
    po_slave_languages:
    - de|Deutsch
    - fr|Français
    -- ja|日本語
    -- tr|Türkçe
The change was made by the web UI, so it must have involved a site rebuild
at the time, as that configuration item has rebuild => 1
.
Some days after that config change, a push caused ikiwiki refresh to fail:
remote: /home/b-udm/public_html/Discussion/index.ja.html independently created, not overwriting with version from Discussion.ja
Rebuilding the wiki cleared that up, but it seems that po plugin config changes can lead to follow-on problems of this sort.
The site still has a source/index.ja.po. And it has public_html/index.ja.html, as well as public_html/index.ja/index.html.
--Joey
What I did
A friend reported this, and I'm seeing it too. With 3.20140916, on a system with Python 2.7 and 3.4 (and little else) installed, I tried to run the auto.setup:
:; ikiwiki --setup /etc/pkg/ikiwiki/auto.setup
What will the wiki be named? Import Errors
What revision control system to use? git
Which user (wiki account or openid) will be admin? schmonz
Setting up Import Errors ...
Importing /Users/schmonz/ImportErrors into git
Initialized empty shared Git repository in /Users/schmonz/ImportErrors.git/
Initialized empty Git repository in /Users/schmonz/ImportErrors/.git/
[master (root-commit) 20b1128] initial commit
1 file changed, 1 insertion(+)
create mode 100644 .gitignore
Counting objects: 3, done.
Writing objects: 100% (3/3), 230 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To /Users/schmonz/ImportErrors.git
* [new branch] master -> master
Directory /Users/schmonz/ImportErrors is now a clone of git repository /Users/schmonz/ImportErrors.git
Traceback (most recent call last):
File "/usr/pkg/lib/ikiwiki/plugins/rst", line 45, in <module>
from proxy import IkiWikiProcedureProxy
File "/usr/pkg/lib/ikiwiki/plugins/proxy.py", line 41, in <module>
import xml.parsers.expat
File "/usr/pkg/lib/python3.4/xml/parsers/expat.py", line 4, in <module>
from pyexpat import *
ImportError: No module named 'pyexpat'
Creating wiki admin schmonz ...
Choose a password:
[...]
What I expected
I expected to get a basic site.
What happened instead
I got a basic site with some Python error messages.
Likely fix
Looks like proxy.py
needs the trick from Debian bug #637604 so
that it can defer a few imports (at least xml.parsers.expat
and
the XML-RPC libs) until the methods using them are called. --schmonz
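As an illustration of that deferred-import trick, here is a minimal sketch with a hypothetical helper name; it is not the actual proxy.py code or the Debian patch:

    # Import the XML parser lazily, so merely loading the plugin does not
    # fail on systems where pyexpat is packaged separately.
    _expat = None

    def _require_expat():
        """Import xml.parsers.expat on first use, so an ImportError is
        raised only when RPC parsing is actually needed."""
        global _expat
        if _expat is None:
            try:
                import xml.parsers.expat as _expat_mod
            except ImportError as e:
                raise RuntimeError("XML-RPC support needs pyexpat: %s" % e)
            _expat = _expat_mod
        return _expat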
It's more complicated than I thought. Findings and questions so far:
Failing to load an external plugin should be an error
When a typical Perl plugin fails to load (say, by failing to compile),
IkiWiki::loadplugin()
throws an exception. For XML-RPC plugins
written in any language, ikiwiki assumes loading succeeded.
Let's take plugins/rst as an example. It's written in
Python and uses proxy.py
to handle XML-RPC communication with
ikiwiki. Let's say that proxy.py
compiles, but rst
itself
doesn't. We'd like ikiwiki to know the plugin isn't loaded, and
we'd like an error message about it (not just the Python errors).
Now let's say rst
would be fine by itself, but proxy.py
doesn't
compile because some of the Python modules it needs are missing
from the system. (This can't currently happen on Debian, where
libpython2.7
includes pyexpat.so
, but pkgsrc's python27
doesn't; it's in a separate py-expat
package.) As before, we'd
like ikiwiki to know rst
didn't load, but that's trickier when
the problem lies with the communication mechanism itself.
For the tricky case, what to do? Some ideas:
- Figure out where in auto.setup we're enabling rst by default, and stop doing that
- In pkgsrc's ikiwiki package, add a dependency on Python and py-expat just in case someone wants to enable rst or other Python plugins
For the simple case, I've tried the following:
- In IkiWiki::Plugin::external::import(), capture stderr
- Before falling off the end of IkiWiki::Plugin::external::rpc_call(), if the command had been 'import' and stderr is non-empty, throw an exception
- In IkiWiki::loadplugin(), try/catch/throw just like we do with regular non-external plugins
With these changes, we have a test that fails when an external plugin can't be loaded (and passes, less trivially, when it can). Huzzah! (I haven't tested yet whether I've otherwise completely broken the interface for external plugins. Not-huzzah!) --schmonz
I'm trying to put a list of tags in a table, so I carefully make a newline-free taglist.tmpl and then do:
| [[!inline pages="link(/category/env)" feeds=no archive=yes sort=title template=taglist]] |
but there's a line in inline.pm
that does:
return "<div class=\"inline\" id=\"$#inline\"></div>\n\n";
And the extra newlines break the table. Can they be safely removed?
If you want an HTML table, I would suggest using an HTML table, which should pass through Markdown without being interpreted further. To avoid getting the <div> inside the <table> you can use:

    [[!inline pages="link(/category/env)" feeds=no archive=yes sort=title template=tagtable]]

where tagtable.tmpl looks like

    <TMPL_IF FIRST>
    <table><tr>
    </TMPL_IF>
    <td>your tag here</td>
    <TMPL_IF LAST>
    </tr></table>
    </TMPL_IF>
I don't think you're deriving much benefit from Markdown's table syntax if you have to mix it with HTML::Template and ikiwiki directives, and be pathologically careful with whitespace. "Right tool for the job" and all that
When I edited this page I was amused to find that you used HTML, not Markdown, as its format. It seems oddly appropriate to my answer, but I've converted it to Markdown and adjusted the formatting, for easier commenting. --smcv
Templates expose odd behavior when it comes to composing links and directives:

The parameters are passed through the preprocessor twice, once on a per-parameter basis and once for the final result (which usually contains the preprocessed parameters).

One of the results is that you have to write:

    [[!template id="infobox" body=""" Just use the \\\[[!template]] directive! """]]

(that'd be three backslashes in front of the opening [.)

This also means that parts which are not used by the template at all still have their side effects without showing.

Furthermore, the evaluation sequence is hard to predict. This might or might not be a problem, depending on whether someone comes up with a less contrived example (this one assumes a [[!literal value]] directive that just returns value but protects it from the preprocessor): we can use [[!literal """[[!invalid example]]"""]], but we can't use [[!template id=literalator value="""[[!invalid example]]"""]] with a 'literalator' template <span class="literal">[[!literal """<TMPL_VAR value>"""]]</span>, because then the invalid directive comes into action in the first (per-argument) preprocessor run.

Links in templates are not stored at all; they appear, but the backlinks don't work unless the link is explicit in one of the arguments.
[[!template id="linker" destination="foo"]]
with a 'linker' template like
Go to [[<TMPL_VAR destination>]]!
would result in a link to 'destination', but would not be registered in the scan phase and thus not show a backlink from 'foo'.
(a
[[!link to=...]]
directive, as suggested in flexible relationships between pages, does get evaluated properly though.)this seems to be due to linkification being called before preprocess rather than as a part of it, or (if that is on purpose) by the template plugin not running linkification as an extra step (not even once).
(nb: there is a way to include the raw_
value of a directive, but that only
refers to htmlification, not directive evaluation.)
both those behaviors are non-intuitive and afaict undocumented. personally, i'd
swap them out for passing the parameters as-is to the template, then running
the linkifier and preprocessor on the final result. that would be as if all
parameters were queried raw_
-- then again, i don't see where raw_
makes
anything not work that worked originally, so obviously i'm missing something.
i think it boils down to one question: are those behaviors necessary for compatibility reasons, and if yes, why?
--chrysn
The FormattingHelp
link in the edit form of any page points to the same ikiwiki/formatting help text for Markdown, regardless of page type (which could be HTML, reStructuredText, etc.) On the wiki I run, this is confusing users.
What I would like is that either the FormattingHelp
link changes with page type (requires Javascript, if one is going to change the page type for new pages), or that the ikiwiki/formatting page is an index of supported page types with a further link to help text for each one (less user-friendly but likely easier to implement).
If you have a page like
[[!if test="enabled(smileys)" then=":-P"]]
then enabling or disabling the smileys plugin will not rebuild it.
Unfortunately, I can't think of a good way to solve this without
introducing a special case for enabled()
in Render.pm, either a
new dependency type "enabled(smileys)" => $DEPENDS_ENABLED
or a special case that treats "enabled(smileys)" => $DEPENDS_PRESENCE
differently. --smcv
Similar to syslog fails with non-ASCII wikinames, this bug happens when the wiki name has non-ascii characters in the site name. In my case, it has the "CⒶTS" string.
We get the following error in a password reset:
Error: Wide character in subroutine entry at /usr/share/perl5/Mail/Sendmail.pm line 308.
Help! --anarcat
I assume this means Mail::Sendmail doesn't know how to send Unicode strings, so any string passed to it (or any message body, or something?) will need to be passed through encode_utf8(). It looks as though Mail::Sendmail also defaults to Content-Type: 'text/plain; charset="iso-8859-1"', so it'll need a 'Content-Type' => 'text/plain; charset="utf-8"' too.

I'm disappointed to see how many of the library modules used by ikiwiki are not Unicode-clean... but then again, Mail::Sendmail was last released in 2003 so it's hardly surprising. I wonder whether Email::Sender is any better?

(If you know Python 2, the analogous situation would be "doesn't know how to send unicode objects, so you have to get a str object with a_unicode_object.encode('utf-8')".) --smcv

Shameless plug: passwordauth: sendmail interface. Though, I have no idea whether that is UTF-8-safe. --tschwinge
For some more flexibility in creating a stylesheet for ikiwiki, it would be nice if there were a few unused elements on the page that one can move around and assign content to using CSS.
For instance, something like this:
<div class='aux' id='aux1'></div>
<div class='aux' id='aux2'></div>
etc. For bonus points, the number could be configurable. To avoid empty content, style.css should have something like this:
.aux {
display: none;
}
This can then be used to move things around. For instance, I have on my website's CSS stylesheet the following:
#aux1 {
position: fixed;
width: 150px;
height: 150px;
bottom: 0px;
left: 0px;
background-image: url("wouter3.png");
background-position: top right;
background-repeat: no-repeat;
background-origin: content-box;
display: block;
}
which adds my hackergochi to the bottom left of the webpage, with some margin.
I tried looking for something like this, but I couldn't find it. Perhaps I just didn't look in the right places, though; apologies if that is the case.
This can easily be achieved by modifying templates. Simply copy the default page template to the template directory of your wiki, and modify it to add your empty divs.
-- Louis
Hunting down what was generating

    utf8 "\xEB" does not map to Unicode at /usr/share/perl5/IkiWiki.pm line 873, <$in> chunk 1.

led me to a call to utf8::valid, which led to http://perldoc.perl.org/utf8.html, which says this is an "INTERNAL" function:

    Main reason for this routine is to allow Perl's testsuite to check that operations have left strings in a consistent state. You most probably want to use utf8::is_utf8() instead.
Apparently the main point of the function is to emit the warning in unit tests. The problem is that, in the ikiwiki context, the only useful thing to warn about would be the name of the file being parsed, not the location in the source code. Alternatively, since the code does continue on with the data, not whining about it at all might be an option, but an actionable message would be better.
Put something like this in the setup file:
conversion:
- from: odt
to: pdf
command: [unoconv, -f, pdf, -o, $OUTPUTDIR, $INPUTFILE]
- from: ditaa
to: png
command: [ditaa, $INPUTFILE, $OUTPUTFILE, -s, 0.7]
However Dumper($config{conversion})
shows:
$VAR1 = [
'HASH(0x164e1a0)',
'HASH(0x164e3c8)'
];
I think it is getting mangled in sub merge
in IkiWiki/Setup.pm
and its calls to possibly_foolish_untaint
Workaround: force the array values to be strings, and then re-parse them using YAML::XS::Load:
conversion:
- |
from: [odt, odp]
to: pdf
command: [unoconv, -f, pdf, -o, $OUTPUTDIR, $INPUTFILE]
- |
from: ditaa
to: png
command: [ditaa, $INPUTFILE, $OUTPUTFILE, -s, 0.7]
...
sub checkconfig {
if (!defined $config{conversion} || ref $config{conversion} ne "ARRAY") {
error(sprintf(gettext("Must specify '%s' and it must be a list"), "conversion"));
}
for (my $i=0; $i < @{$config{conversion}}; $i++) {
$config{conversion}->[$i] = YAML::XS::Load($config{conversion}->[$i]) if
ref $config{conversion}->[$i] ne 'HASH';
}
}
getsetup defines config options to be one of: boolean, string, integer, pagespec, "internal" (non-user-visible string), ref to an array of one of those scalar types, or ref to a hash { string => one of those scalar types }. IkiWiki::Setup also appears to support regexps (qr//), although that's not documented (presumably they're treated the same as strings).

Supporting arbitrary arrays/hashes as values would require some way to untaint the values recursively.
Complex config data also can't be used with the websetup plugin, which currently supports everything that IkiWiki::Setup does, except for hashes. --smcv
This feature made it so syslog doesn't work anymore if the site being logged has non-ASCII characters in its name.
Specifically, my wiki was named "CⒶTS", and nothing was showing up in syslog. When I changed that to "C@TS", it worked again.
My guess is this sits somewhere here:
return eval {
Sys::Syslog::syslog($type, "[$config{wikiname}] %s", join(" ", @_));
};
Yet I am not sure how to fix that kind of problem in Perl... --anarcat
If I remove the "eval" above, I get:
Error: Wide character in syswrite at /usr/lib/perl/5.14/Sys/Syslog.pm line 485.
I have improved a little the error handling in log_message() so that we see something when syslog fails, see the branch documented above. I can also confirm that reverting syslog should show wiki name fixes the bug. Finally, I have a unit test that reproduces the problem in git, and a working patch for the bug, again in git.
One last note: I noticed that this problem also happens elsewhere in ikiwiki. For example, the notifyemail plugin will silently fail to send notifications if the pages contain Unicode. The ?notifychanges plugin I am working on (an option to send only the diff in notifyemail) seems to be working around the issue so far, but there's no telling what similar problems are out there.
I'd merge it. --smcv
I've merged it, but I don't feel it fixes this bug. --Joey
(I removed the patch tag to take it off the patches list.)
What else is needed? Systematic classification of outputs into those that do and don't cope with Unicode? --smcv
I just got this message trying to post to this wiki:
Error: Sorry, but that looks like spam to blogspam: No reverse DNS entry for 2001:1928:1:9::1
So yeah, it seems I have no reverse DNS for my IPv6 address, which may be quite common for emerging IPv6 deployments...
This may be related to ?blogspam options whitelist vs. IPv6?.
Given an uploaded image via: [[!img NAME.svg alt="image"]]
Viewing the generated page shows the following error:
"[[!img Error: failed to read name.svg: Exception 420: no decode delegate for this image format `/home/user/path/name.svg' @ error/svg.c/ReadSVGImage/2815]]"
The capital letters in the image name were somehow converted to lowercase, and then the image is saved as a directory. Very puzzling.
I get the same error when image names are all lowercase.
The error also occurs with PNG images.
How do I fix this?
Later investigation ... I got around the problem by creating the mark-up in a new directory. However, if I try to create a new directory with the same name as the directory containing the problem code, the problem re-emerges -- the old directory is apparently not overwritten. Perhaps this is an issue with the git storage.
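Not from the original report, but a quick way to check whether the server's ImageMagick can decode SVG at all (a missing SVG delegate is a common cause of "no decode delegate" errors):

    identify -list format | grep -i svg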
I turned on the sidebar plugin, with global_sidebars on (in the web setup page), created a sidebar page in the root, and edited the sidebar a few times.
I then noticed that all pages on the root had been updated with a sidebar, but no subpages (i.e. a/b). Only after editing a subpage did it get a sidebar. Editing sidebar itself only updated subpages with sidebars, the other subpages had not been refreshed (proven by their unchanged filesystem date)
After calling ikiwiki --setup on the command line all pages were updated. So this seems to be a difference between web-started --setup and command-line --setup. Or it just doesn't work the first time --setup is called after sidebars are enabled.
A site got stuck like this:
/home/b-fusioninventory/public_html/documentation/index.es.html independently created, not overwriting with version from documentation.es
I tried rebuilding it, and the rebuild failed like this:
    building recentchanges/change_ef4b9f92821335d96732c4b2c93ed96bc84c2f0d._change, which depends on templates/page.tmpl
    removing recentchanges/change_9ca1de878ea654566ce4a8a031d1ad8ed135ea1c/index.html, no longer built by recentchanges/change_9ca1de878ea654566ce4a8a031d1ad8ed135ea1c
    internal error: recentchanges/change_9ca1de878ea654566ce4a8a031d1ad8ed135ea1c._change cannot be found in /home/b-fusioninventory/source or underlay
This internal error seems like the root cause of the original failure. ikiwiki crashed and did not record that it wrote the index.es.html file.
Deleting the indexdb and rebuilding cleaned up the problem.
This needs more investigation. --Joey
The toc directive scrapes all headings from the page, including those in the sidebar. So, if the sidebar includes navigational headers, every page with a table of contents will display those navigational headers before the headers in that page's content.
I'd like some way to exclude the sidebar from the table of contents. As discussed via Jabber, perhaps toc could have a config option to ignore headers inside a nav tag or a tag with id="sidebar".
I accidentally made a typo, spelling "surprises" as "suprises", and changed my URL from
http://natalian.org/archives/2012/12/04/Singapore_banking_suprises/ to http://natalian.org/archives/2012/12/04/Singapore_banking_surprises/
using a meta redir. However the meta redir now appears in the index of http://natalian.org/
Any ideas how to handle this situation?
Well, you can adjust the inline's pagespec to exclude it, or even tag it with a tag that the pagespec is adjusted to exclude. --Joey
I did it by making a new tag called "redir", tagging the redir page with it and then modifying the pages attribute of my inline to exclude pages with that tag. However, there is the same problem with the archives, probably the calendar if you use that and likely some other cases that I haven't thought about. In all these places you need to explicitly exclude redir pages. I think that ideally redir pages should have some special treatment that excludes them by default in most situations, because they are not real pages in a sense. They can have a body but if the browser is working properly it will never be shown.
How about adding a new PageSpec called redir(glob) and excluding such pages from the post(glob) PageSpec? I think this behaviour makes more sense and thus should be the default, but if a user wants the old behaviour that's still available as "page(glob) or redir(glob)".
Good URL redirections are important because they allow you to move things around without breaking incoming links from external sites and people's browsing history (which you can't fix, unlike internal links). --anton, 2016-01-31
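For reference, the tag-based workaround described above looks something like this; the page names and inline parameters are illustrative, not taken from the site in question. On the old (redirecting) page:

    [[!meta redir=archives/2012/12/04/Singapore_banking_surprises]]
    [[!tag redir]]

and in the inline that builds the index:

    [[!inline pages="archives/* and !tagged(redir)" show=10]]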
For some time now, in circumstances that I've had enormous troubles trying to track, I've seen feeds getting removed by ikiwiki when apparently unrelated pages got changed, with the message:
removing somepath/somepage/somefeed, no longer built by some/unrelated/page
I've finally been able to find how and why it happens. The situation is the following:
- page A has an inline directive that (directly) generates a feed F
- page B inlines A, thus (indirectly) generating F again
- page B is rendered after page A
The feed removal happens when changes are made that prevent B from inlining A; for example, because B is a tag page and A no longer has tag B, or because B includes A through a pagespec that no longer matches A. In this case, this happens:
- page A is built, rendering F
- page B is built, not rendering F, which it used to render
- F is removed because it is not built by B anymore
Note that although this issue is triggered (for me) by the changes I proposed last year to allow feed generation from nested inlines, by coalescing it to be page-based instead of destpage-based (bb8f76a4a04686def8cc6f21bcca80cb2cc3b2c9 and 72c8f01b36c841b0e83a2ad7ad1365b9116075c5), there is potential for it popping up in other cases.
Specifically, the logic for the removal of dependent pages currently relies on the assumption that each output has a single generator. My changes caused this assumption to be violated, hence the error, but other cases may pop up for other plugins in the future.
I have a [patch] fixing this issue (for feeds specifically, i.e. only
the problem I am actually having) on top of my mystuff
branch, but
since that also has heaps of other unrelated stuff, you may want to just
pick it from my gitweb.
The patch changes the will_render()
for feeds to be based on the page
rather than on the destpage, matching the fact that for nested inlines
it's the inner page that is ultimately responsible for generating the
feed.
I've noticed that it requires at least two full rebuilds before the index is again in a sensible state. (On the first rebuild, all feeds from nested inlines are actually removed.)
While the patch is needed because there are legitimate cases in which nested feeds are wanted (for example, I have an index page that inlines index pages for subsections of my site, and I want those feeds to be visible), there are other cases when one may want to skip feed generation from nested inlines.
Say you are commenting on this report. The Navbar on top will look like
ikiwiki/ bugs/ commenting on Navbar does not link to page being commented on while commenting
while either of those two options would be better:
ikiwiki/ bugs/ commenting on Navbar does not link to page being commented on while commenting
ikiwiki/ bugs/ Navbar does not link to page being commented on while commenting / New comment
-- RichiH
Hi folks,
This is a fairly fresh wiki. I recently noticed the Links: section the the bottom looked like this:
Links: index recentchanges/change 0b2f03d3d21a3bb21f6de75d8711c73df227e17c recentchanges/change 1c5b830b15c4f2f0cc97ecc0adfd60a1f1578918 recentchanges/change 20b20b91b90b28cdf2563eb959a733c6dfebea7a recentchanges/change 3377cedd66380ed416f59076d69f546bf12ae1e4 recentchanges/change 4c53d778870ea368931e7df2a40ea67d00130202 recentchanges/change 7a9f3c441a9ec7e189c9df322851afa21fd8b00c recentchanges/change 7dcaea1be47308ee27a18f893ff232a8370e348a recentchanges/change 963245d4e127159e12da436dea30941ec371c6be recentchanges/change cd489ff4abde8dd611f7e42596b93953b38b9e1c ...
All of those "recentchanges/ change xxxxxxx" links are clickable, but all yield 404 when clicked.
When I disable the CamelCase plugin and rebuild the wiki, all the Links other than index disappear, as they should. Re-enable CamelCase, and they're back.
This is a very simple wiki. Just fresh, only one page other than index (this one), and nothing at all fancy/weird about it.
If I use the linkmap directive twice on a single page, I get the same image appearing in both locations, even though the parameters for the two directives may have been different.
-- Martin
If you look at org mode, the link to the Discussion page is not there (has a question mark), as if it didn't exist. But--through the search--I discovered that the Discussion page does exist actually: Discussion.
So, there is a bug that prevents a link to the existing Discussion page from appearing in the correct way on the corresponding main page. --Ivan Z.
Perhaps, this has something to do with the same piece of code/logic (concerning case-sensitivity) as the fixed unwanted discussion links on discussion pages? --Ivan Z.
I have heard repeated reports on http://mesh.openisp.ca/ that editing a page that has a waypoint in it will sometimes make that waypoint disappear from the main map. I have yet to understand why that happens or how, but multiple users have reported that.
A workaround is to rebuild the whole wiki, although sometimes re-editing the same page will bring the waypoint back on the map.
I have been able to reproduce this by simply creating a new node. It will not show up on the map until the wiki is rebuilt or the node is resaved. -- anarcat
The listdirectives directive doesn't register a link between the page and the subpages. This is a problem because the orphans directive then marks the directive pages as orphans... Maybe it is a bug with the orphans directive, however... A simple workaround is to exclude those files from the orphans call... --anarcat
There's a distinction between wikilinks (matched by link(), backlink() etc.) and other constructs that produce a hyperlink. Some directives count as a wikilink (like tag) but many don't (notably inline, map, listdirectives, and orphans itself). As documented in orphans, orphans will tend to list pages that are only matched by inlines/maps, too.

The rule of thumb seems to be that a link to a particular page counts as a wikilink, but a directive that lists pages matching some pattern does not; so I think listdirectives is working as intended here. orphans itself obviously shouldn't count as a wikilink, because that would defeat the point of it.

Anything that uses a pagespec to generate links, like inline and map, can't generate wikilinks, because wikilinks are gathered during the scan phase, and pagespecs can't be matched until after the scan phase has finished (otherwise, it'd be non-deterministic whether all wikilinks had been seen yet, and link() in pagespecs wouldn't work predictably).

I suggest just using something like:

    [[!orphans pages="* and !blog/* and !ikiwiki/directive/*"]]
This wiki's example of listing orphans has a more elaborate pagespec, which avoids bugs, todo items etc. as well.
--smcv
No follow-up or objection for a while, so considering this to be working as designed. --smcv
Seems I'm a bit late to butt in, but would it be possible to have two further phases after the scan phase, the first running map and inline and the second running orphans? Then map and inline could log or register their links (obviously somewhere where it won't change the result of the link function) and orphans could take them into account. This logging could be turned on by a parameter, so as not to waste time for users not needing it, and made tunable (i.e. so that the user can decide which map directives count and which don't).
For someone using map and especially autoindex the output of the orphans directive is simply wrong/useless (at least it is for me). And there is no easy workaround like for listdirectives -- holger
Hmm. I think this can be done without introducing any "phases", even, but it would require each plugin that generates links according to a pagespec to have either a conditional call into the orphans plugin, or a call to a new core function in ikiwiki that exists solely to support the orphans plugin. Something like this, maybe:
    # in map.pm, inline.pm, pagestats.pm etc., at scan time
    if (IkiWiki::Plugin::orphans->can("add_reachable")) {
        IkiWiki::Plugin::orphans::add_reachable($page, $pagespec);
    }

    # in orphans.pm (pseudocode; note that this does not *evaluate*
    # $pagespec, only stores it, so it's OK to do this at scan time)
    sub needsbuild ($pages)
        for each page in $pages
            clear $pagestate{location}{orphans}{reachable}
    sub reachable ($location, $pagespec)
        add $pagespec to @{$pagestate{location}{orphans}{reachable}}

    # in preprocess function in orphans.pm (pseudocode)
    # executed at build time, not at scan time, so pagespecs work
    for each maybe_orphan with no links to it
        for each location with a list of reachable pagespecs
            make the page with the orphans directive depend on \
                the page that is the location
            for each of those pagespecs
                if pagespec matches orphan
                    take orphan off the list
                    go to next orphan
    output list of orphans
(Maybe parentlinks should also annotate the parent/ancestors of each page as reachable from that page.)
Do other people (mainly Joey) think that'd be acceptable, or too intrusive?
Taking this off the list of resolved bugs again while we think about it.
I suspect that in the presence of autoindex, what you really want might be less "there's a link to it" and more "there's a path to it from the root of the wiki", which is why I called the proposed function "add_reachable". On the other hand, maybe that's too computationally intensive to actually do; I haven't tried it. --smcv
(I'll interpret Joey's silence as a good sign ;-). Is there a difference between "link to it" and "path to it"? If we assume autoindex produces bona fide "first class" links, there shouldn't be one!?

So far your idea sounds great, says me without any knowledge of the source. I'll try to grok it. Is there a medium for silly questions? A wiki seems not the right fit for that. -- holger
Yes, there has to be a difference between a first class wikilink and the thing to which map and inline can contribute. map and inline use a pagespec to decide what they include, and pagespecs can't be evaluated and get a correct answer until the set of links has been collected, because their results often depend on the set of links. Otherwise, suppose you had a page foo whose only contents were this:

    [[!inline pages="!backlink(foo)"]]

If inline generated links, it would inline exactly those pages that it doesn't inline. That's never going to end well. --smcv

We have to differentiate between what users of ikiwiki consider first class links and what is happening internally. For the user, any link contributing to the structured access tree is first class. The code, on the other hand, has to differentiate between static links, then generated links, then orphan links. Three "passes"; even your proposed solution could be seen as adding another pass, since the orphans plugin has to run after all the plugins generating (first class, user-visible) links. -- holger
I think the difference between your point of view, and what ikiwiki currently implements / what its design is geared towards, is this: ikiwiki says A links to B if the source code of A contains an explicit link to B. You say A links to B if the compiled HTML of A contains a link to B.
Would you agree with that characterization?
I suspect that "link in the source code" may be the more useful concept when using links for backlinks (I think the original implementation is http://c2.com/cgi/wiki?BackLink) and as pseudo-tags (http://c2.com/cgi/wiki?WikiCategories). The fact that this is what
link()
andbacklink()
mean could be better-documented: it's entirely possible that the author of their documentation (Joey?) thought it was obvious that that's what they mean, because they were coming from a compiler/source-code mindset.Also, backlinks become rather all-engulfing if their presence in the compiled output counts as a link, since after a render pass, they would all become bidirectional; and as I noted previously, if pagespecs can match by linkedness (which we want) and plugins can generate lists of links according to pagespecs (which we also want), then links in the compiled output can certainly get into Russell's paradox-like situations, such as the page that links to every page to which it does not link.
For the special case of deciding what is orphaned, sure, it's the compiled HTML that is the more relevant thing; that's why I talked about "reachability" rather than "links".
--smcv
Definition lists do not look great here...
Here is an example.
- this is a term
- and this is its definition.
(This wiki doesn't support Markdown's extended definition lists, but still, this is valid markup.)
I believe <dt>
should be made bold. I have added this to my local.css
, and I would hate to add this all the time forever:
/* definition lists look better with the term in bold */
dt
{
font-weight: bold;
}
How does that look? I can provide a patch for the base wiki if you guys really want... -- anarcat
What you dislike seems to be the default rendering of definition lists by browsers. I don't think it's ikiwiki's place to override browser defaults for standard markup in the document body, at least not in the default antitheme. --Joey
How about in the actiontab theme then?
Observed behavior:
When I create a link like [[cmd_test]] , the link appears as 'cmd test'.
Expected behavior:
I would like to be able to create links with underscores. I realize this is a feature, and I searched for ways to escape the underscore so it would appear, but I didn't find any.
As a workaround, you can use [[cmd__95__test|cmd_test]] (which will link to a page named "cmd test" at the URL location "cmd_test") or [[cmd__95__test]] (which will link to a page named "cmd_test" at the URL location "cmd__95__test"). I would, from my limited understanding of ikiwiki internals, consider the bug valid, and suggest that:

- explicit link text not be subject to de-escaping (why should it be; this would be the short-term solution)
- escaped page names never be used in user-visible parts of ikiwiki (in my opinion, a user should not need to know about those internals, especially as they are configuration-dependent (wiki_file_regexp))
Note that in wikilink, that very behavior is documented; it says that "[[foo_bar|Sandbox]]" will show as "foo bar" (although you can't easily tell that apart from "foo_bar", because it's a hyperlink).

I assume that this behavior stems from the time when wikilinks and directives were distinguished not by [[ vs [[! but by the use of whitespace in directives, so whitespace had to be avoided in wikilinks.
--chrysn
Having hacked around in the link plugin, I can confirm that the link texts are explicitly de-escaped, and that when no pipe is inside the link (i.e. links like [[cmd_test]]), the string "cmd_test" is regarded as a link (that will subsequently be converted to a readable text) rather than as a readable text (for which a suitable link target is found automatically). --chrysn
When an ikiwiki
instance is holding a lock, a web user clicking on "add comment" (for example) will have to wait for the lock to be released. However, all they are then presented with is a web form. Perhaps CGI requests that are read-only (such as generating a comment form, or perhaps certain types of edits) should ignore locks? Of course, I'd understand that the submission would need to wait for a lock. — Jon
Ikiwiki has what I think of as the Big Wiki Lock (remembering the "Big Kernel Lock"). It takes the exclusive lock before loading any state, to ensure that any changes to that state are made safely.
A few CGI actions that don't need that info loaded do avoid taking the lock.
In the case of showing the comment form, the comments plugin needs CGI session information to be loaded, so it can check if the user is logged in, and so it can add XSRF prevention tokens based on the session ID. (Actually, it might be possible to rely on CGI::Session's own locking of the sessions file, and have a hook that runs with a session but before the indexdb is loaded.)
check_canedit
, which matches a pagespec, which can need to look things up in the indexdb. (Though the pagespecs that can do that are unlikely to be relevant when posting a comment.)I've thought about trying to get rid of the Big Wiki Lock from time to time. It's difficult though; if two ikiwikis are both making changes to the stored state, it's hard to see a way to reconcile them. (There could be a daemon that all changes are fed thru using a protocol, but that's really complicated, and it'd almost be better to have a single daemon that just runs ikiwiki; a major architectural change.)
One way that almost seems like it could work is to have an entry path that loads everything read-only, without a lock. In read-only mode, saveindex would then be an error to run. However, both the commenting code and the page edit code currently use the same entry path for drawing the form as for handling the posted form, so they would need to be adapted to separate that into two code paths. --Joey
This is possibly/probably due to my weird setup, which is that I have Apache behind nginx, with the result that Apache sees the client's IPv4 address as having been mapped to IPv6, i.e. ::ffff:10.11.12.13. That being the case, I currently need to specify that (with the ::ffff: prepended) if I want to whitelist (or, more importantly, blacklist) an IPv4 address.
It strikes me that this is liable to become more of a problem as people finally start using IPv6, so it might be worth ensuring that the code that compares IP addresses be able to treat the two formats (with and without the ffff's) as equivalent. --fil
I can't seem to get the 'wiki' functions (i.e. editing) working when ikiwiki is running on a port other than the default (port 80). Somewhere in the processing it considers the base URL to exclude the port number, and the web server throws back an error finding the page.
For example, if you run on 'http://my.gear.xxx:8080/', then after clicking login (using default password auth) it will process and try to redirect you to 'http://my.gear.xxx/cgi-bin/ikiwiki.cgi'. I'm assuming that somewhere we've used the 'path' and the 'host' and dropped the remainder. I can't figure out where this is yet, but I'll post back if I get lucky.
-- fergus
NB: both the 'url' and the 'cgiurl' include the port and removing the port element provides the expected functionality.
I tried to reproduce this by making my laptop's web server use port 8080. Set up ikiwiki to use that in cgiurl and url, and had no problem with either openid or password auth login.
Ikiwiki has had some changes in this area in the past year; you don't say what version you were using. It could also be a problem with your web server, conceviably, if didn't correctly communicate the port to the cgi program. --Joey
I did think of that, so I threw in a 'printenv' script to check that the port was arriving right.
    SERVER_PORT=8181
    HTTP_HOST=zippy0.ie0.cobbled.net
[ ... ]
In apache, HTTP_HOST includes the port. This is not part of the CGI spec it seems, but perl's CGI module seems to rely on it, in virtual_port:

    my $vh = $self->http('x_forwarded_host') || $self->http('host');
    my $protocol = $self->protocol;
    if ($vh) {
        return ($vh =~ /:(\d+)$/)[0] || ($protocol eq 'https' ? 443 : 80);

The CGI module only looks at SERVER_PORT when there's no HTTP_HOST. So this is either a bug in perl's CGI or thttpd. --Joey
[ ... ]
This is interesting. If HTTP_HOST is wrong then:

- the client header must be wrong (i.e. not including the PORT)
- perl is doing something bad[tm] (or at least lazy)
- apache is adding it
- thttpd is stripping it

A quick hack shows that thttpd must be stripping the port number from the Host: header. That can be fixed.

Thanks for the assist. -- fergus
Patch for thttpd-2.25b
for posterity and completeness
diff --git a/libhttpd.c b/libhttpd.c
index 73689be..039b7e3 100644
--- a/libhttpd.c
+++ b/libhttpd.c
@@ -2074,9 +2074,6 @@ httpd_parse_request( httpd_conn* hc )
cp = &buf[5];
cp += strspn( cp, " \t" );
hc->hdrhost = cp;
- cp = strchr( hc->hdrhost, ':' );
- if ( cp != (char*) 0 )
- *cp = '\0';
if ( strchr( hc->hdrhost, '/' ) != (char*) 0 || hc->hdrhost[0] == '.' )
{
httpd_send_err( hc, 400, httpd_err400title, "", httpd_err400form, "" );
-- fergus
I've gone ahead and filed a bug on CGI.pm too: https://rt.cpan.org/Ticket/Display.html?id=72678 --Joey
That'll be an interesting discussion, as I'd suggest that HTTP headers are defined in the CGI specification as client headers, and thus what thttpd is doing is wrong (i.e. mangling the client's own representation). Whether a CGI program should trust the HTTP header over the server's own value is probably already settled by convention.
-- fergus
I originally set up ikiwiki by using the debian package, but had some odd issues, so I figured I'd try installing from git. To do that I uninstalled the debian package and then did the Makefile dance from the git dir. In that process the original dirs configured as templatedir and underlaydir in my wiki were deleted; HOWEVER, when rebuilding, the script just went ahead and did not even note the lack of those dirs. It would be nice if it threw errors if the dirs were configured but non-existent.
Hmm. This behavior was explicitly coded into ikiwiki for underlay dirs: commit. Pity I didn't say why, but presumably there are cases where one of the underlaydirs is expected to be missing, or where this robustness of not crashing is needed.
The situation with missing templatedirs is more clear: When it's looking for a given template file it just tries to open it in each directory in turn, and uses the first file found; checking that a directory exists would be extra work and there's a nice error message if a template cannot be found. --Joey
I'd agree with the thought behind that ... if it actually had thrown an error. However it did not. How about just checking the config variables when the template and/or config is set up? --Mithaldu
I just tried to clone the git repo onto a windows machine to test things out a bit and it turns out i cannot even successfully checkout the code because of those colons. Would a patch changing those to underscores be accepted? --Mithaldu
Well, this is a difficult thing. Ikiwiki has a configuration setting to prevent it writing filenames with colons, but for backwards compatibility that is not enabled by default. Also nothing would stop people from making commits that added filenames with colons even if it were disabled in ikiwiki. I don't know that trying to work around obscure limitations in OSs that I've never heard of ikiwiki being used on is worth the bother TBH, but I have not really made up my mind. --Joey
I'm not trying to run it there. Ikiwiki is way too friggin' weird to try that. I just want to be able to check out the main repo so i can work in a native editor. Right now your core repository is downright hostile to cross-platform development in any way, shape or form. (Just plain splitting the docs from the code would work too.) --Mithaldu
Does(n't) cygwin handle the filename limitation/translations? If so, can you check out via git inside a cygwin environment? — Jon
That actually allows me to check things out, but the resulting repo isn't compatible with most of the rest of my system, so it's extremely painful. --Mithaldu
I'm using the most recent release of ikiwiki (3.20110905), the Perl shipped with SuSE 11.4 (v5.12.3), and built and installed xapian 1.2.7 from source, as it seems the current stable version that's encouraged for use by xapian.
After enabling the search plugin and pointing ikiwiki to the omega program, rerunning ikiwiki --setup, and attempting a search, all searches return 0 results. No errors are reported by omindex or ikiwiki while producing the indexes in .ikiwiki/xapian/*, and the files appear to contain the indexed data. I don't think it's a problem in indexing.
When running omega by hand in the .ikiwiki/xapian directory, providing queries on the command-line, it runs correctly but again provides no results.
I found that Debian stable is currently shipping 1.2.3, and on a hunch, I built that version, and searching now works fine. This looks like the usage of xapian's query template has changed somewhere between 1.2.3 and 1.2.7. Someone more familiar with xapian's query template language should be able to figure out what needs to be changed more specifically.
Debian has 1.2.7 now, and I have it installed and searching is working fine with it. --Joey
I have this same issue. I tried xapian versions 1.2.5, 1.2.8, and 1.2.13. I will try and see if installing 1.2.3 fixes this issue. --Ramsey
1.2.3 didn't fix the issue either --Ramsey
When I create a new page and upload an attachment all is fine.
If I try to upload a second attachment (or remove the previously uploaded attachment), no upload happens. Instead the page gets created. No matter what I typed in, I just get a map showing the attachment. Now I can edit this page and everything is fine again.
Another workaround is to first save the text and then edit and upload the rest.
Is this a problem on my site or does anyone else see this?
(If it's my fault feel free to move this to forum.)
I don't see a behavior like that. I don't know what you mean when you say "I just get a map to show the attachment". A map?
What version of ikiwiki? What browser? Is javascript enabled? --Joey
I mean the map directive. It was ikiwiki 3.20110430. Tried Firefox and uzbl (webkit) with or without javascript.
Just updated to 3.20110905. Now the problem has changed. Instead of saving the page with the second upload and leading me to it, it leaves me in the editform but creates the page anyway. When saving I get informed, that someone else created the page. Obviously it was ikiwiki itself with the mentioned map: [[!map pages="path/to/page/* and ! ...
This told me that autoindex is the bad guy. Deactivating this plugin helps out. Don't know if this is worth fixing... I can live without that plugin. --bacuh
The right fix would probably be for `do=create` to allow replacing a page in the transient underlay without complaining (like the behaviour that `do=edit` normally has). ... which it turns out it already does. --smcv

That wouldn't help you unless autoindex defaulted to making transient pages (`autoindex_commit => 0`), but if we can fix removal of transient pages then maybe that default can change? --smcv

It turns out that with `autoindex_commit => 0`, the failure mode is different. The transient map is created when you attach the attachment. When you save the page, it's written into the srcdir, the map is deleted from the transientdir, and the ctime/mtime in the indexdb are those of the file in the srcdir, but for some reason the HTML output isn't re-generated (despite a refresh happening). --smcv
When the aggregate plugin was used for a feed and this is removed (or the same feed name given a different rss feed), the old entries don't automatically vanish.
I think that if it was just removed, they are never GC'd, because the expiry code works on the basis of existing feeds. And if it was replaced, old items won't go away until expirecount or expireage is met.
To fix it probably needs an explicit check for items aggregated by feeds that no longer provide them. Catching old items for feeds that were changed to a different url may be harder yet. --Joey
Wikis are great tools for collaborative content of all types, but website creators who want some level of collaboration seem to have to choose between a static website, a wiki that anyone (or all members) can edit, or an overkill customized web app.

A simple innovation that needs to propagate through wiki software is the ability to suggest edits and accept those edits. Perhaps you want a wiki where anyone can suggest an edit, but only registered users can edit freely or accept edits. Or you want anyone, including members, to only be able to suggest edits, and only have moderators able to approve edits and edit freely. Etc, etc.
Ikiwiki already has some work in this area; there is the moderatedcomments plugin and the `checkcontent` hook. The hook allows, for example, a plugin to reject changes with spam links or swear words. A plugin could also use it to save the diff for later moderation.

I think the difficulty is in the moderation interface, which would need to apply the diff and show the resulting page with the changes somehow evident (for users who can't just read diffs), and would have to deal with conflicting edits, etc. --Joey
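For reference, a minimal sketch of a `checkcontent` hook along those lines; the hook interface (return undef to accept, or a message to reject) is the documented one, but the plugin and its crude spam heuristic are made up:

    #!/usr/bin/perl
    package IkiWiki::Plugin::nospamlinks;

    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
        hook(type => "checkcontent", id => "nospamlinks", call => \&checkcontent);
    }

    sub checkcontent (@) {
        my %params=@_;
        return undef unless defined $params{content};
        # crude heuristic: reject edits that add a pile of external links
        my $links = () = $params{content} =~ m{https?://}g;
        if ($links > 5) {
            return gettext("too many external links; this looks like spam");
        }
        return undef;   # content is allowed
    }

    1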
Two examples of encoding breakage observed in the wild. In both cases the ampersand needs to be escaped. --Joey
<link href="http://www.youtube.com/watch?v=Z9hP9lhBDsI&feature=youtube_gdata"/>
<category term="vicky&alice" />
Hi, I created [[sandbox/subpage]] then I deleted it with the "remove" button. After confirmation there was a message about a xapian error (my bad, I did not write down the exact error message). Now, accessing ?sandbox/subpage makes my browser complain about a redirect loop. JeanPrivat

Uh. Now the redirect loop bug seems to have solved itself. However, I don't know if the xapian error needs to be investigated. But I found another bug. JeanPrivat
Apache will return 403 (Forbidden) instead of 404 (Not Found) if the `Indexes` option is turned off. This is because with `Indexes` turned on, it considers it something it might be able to serve in the future. With `Indexes` off, it will never serve that page in the future (unless `Indexes` is turned back on).
The 404 plugin code only checks for 404, not 403. It should check for both.
There are plenty of reasons a web server might 403. In most of those cases, trying to create a page where the forbidden content lives is not the right thing for ikiwiki to do. --Joey
See Also:
The table plugin seems to be unable to read a CSV file that uses \r\n for line delimiters. The same file with \r works fine. The error message is "Empty data". --liw
I was seeing this as well on an Ubuntu 11.04 system with Perl 5.10.1, IkiWiki 3.20110124ubuntu1, and libtext-csv-perl 1.21-1, all installed from APT. However, when I removed libtext-csv-perl from APT and installed it from CPAN, the problem went away. FWIW, what CPAN grabbed was MAKAMAKA/Text-CSV-1.21.tar.gz. --micahrl
When I add a comment to a page, its title should be a hyperlink. This would make re-opening it to re-read parts of it easier.
I.e. when adding a comment to this page, the last part should be a hyperlink, as well:
ikiwiki/ bugs/ creating Site title not clickable while adding a comment
Richard
I noticed this a few times in Google Chrome 12 (dev channel) already:
I added a comment to
http://git-annex.branchable.com/forum/performance_improvement:_git_on_ssd__44___annex_on_spindle_disk/
and left the page. Later, I revisited
http://git-annex.branchable.com/forum/
and clicked on
http://git-annex.branchable.com/forum/performance_improvement:_git_on_ssd__44___annex_on_spindle_disk/
My own comment did not appear. I pressed F5 and hey presto.
My assumption is that ikiwiki does not tell Chrome to reload the page as the cache is stale.
Richard
There is some lurking bug with certain web browsers, web servers, or a combination of the two that makes modifications to html files not always be noticed by web browsers. See firefox doesn't want to load updated pages at ikiwiki.info; see also http://bugs.debian.org/588623.
On Branchable, we work around this problem with an apache configuration: «ExpiresByType text/html "access plus 0 seconds"»
There seems to be no way to work around it in ikiwiki's generated html, aside from using the cache-control setting that is not allowed in html5.
And, which browsers/web servers have the problem, and where the bug is, seems very hard to pin down. --Joey
Similarly to po: apache config serves index.rss for index, the po apache config has another bug.
The use of "DirectoryIndex index", when combined with multiviews, is intended to serve up a localized version of the index.??.html file.
But, if the site's toplevel index page has a discussion page, that is "/index/discussion/index.html". Or, if the img plugin is used to scale an image on the index page, that will be "/index/foo.jpg". In either case, the "index" directory exists, and so apache happily displays that directory, rather than the site's index page!
--Joey
Ack, we do have a problem. Seems like ikiwiki's use of `index/` as the directory for the homepage's sub-pages and attachments makes it conflict deeply with Apache's `MultiViews`: as the MultiViews documentation says, `index.*` are considered as possible matches only if the `index/` directory does not exist. Neither type maps nor `mod_mime` config parameters seem to allow overriding this behavior. Worse even, I guess any page called `index` would have the same issues, not only the wiki homepage.

I can think of two workarounds, both kinda stink:

- Have the homepage's `targetpage` be something else than `index.html`.
- Have the directory for the homepage's sub-pages and attachments be something else than `index`.

I doubt either of those can be implemented without ugly special casing. Any other idea? --intrigeri
As I understand it, this is how you'd do it with type maps:

- turn off MultiViews
- `AddHandler type-map .var`
- `DirectoryIndex index.var`
- make `index.var` a type map (text file) pointing to `index.en.html`, `index.fr.html`, etc.

I'm not sure how well that fits into IkiWiki's structure, though; perhaps the master language could be responsible for generating the type-map on behalf of all slave languages, or something? (A rough sketch of that idea follows below.)
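A rough sketch of that last idea, using ikiwiki's real `will_render`/`writefile` helpers, but leaving the integration with the po plugin's language list entirely as an assumption:

    # hypothetical helper: emit an Apache type map for the homepage,
    # one record per language variant
    sub write_index_typemap {
        my @langs = @_;        # e.g. ("en", "fr"), from the po setup
        my $map = "";
        foreach my $lang (@langs) {
            $map .= "URI: index.$lang.html\n";
            $map .= "Content-Type: text/html\n";
            $map .= "Content-Language: $lang\n\n";
        }
        will_render("index", "index.var");
        writefile("index.var", $config{destdir}, $map);
    }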
Another possibility would be to use filenames like `index.html.en` and `index.html.fr`, and set `DirectoryIndex index.html`? This could get problematic for languages whose ISO codes conventionally mean something else as extensions (Polish, `.pl`, is the usual example, since many sites interpret `.pl` as "this is a (Perl) CGI"). --smcv

There is something to be said about "index/foo" being really ugly and perhaps it would be nice to use something else. There does not appear to even be one function that could be changed; "$page/foo" is hardwired into ikiwiki in many places as a place to dump subsidiary content -- and it's not even consistent, since there is also e.g. "$page.rss". I agree, approaching it from this direction would be a mess or a lot of work.

Type maps seem like a valid option, but also a lot of clutter. `index.html.pl` does seem to be asking for trouble, even if apache can be configured to DTRT. It would make serving actual perl scripts hard, at least. But that is some good out of the box thinking.. perhaps "index.foo.pl.html"?

However, that would mean that web servers need to be configured differently to serve translated and non-translated sites. The current apache configuration for po can be used with non-po sites and they still work. --Joey
I am vulnerable to the same problem because I use MultiViews, though I don't use the `po` module; I have to serve both Australian English and American English for my company's website (for SEO purposes; certain words that relate to our products are spelt differently in US and Australian English, and we need to be able to be googled with both spellings). I'm just fortunate that nobody has thought to add attachments to the front page yet. I raise this to point out that this is going to be a recurring problem that won't necessarily be fixed by changing the `po` module in isolation.

One could argue that "index" is already a special case, since it is the top page of the site. Things like parentlinks already use a special case for the top page (checking the variable HAS_PARENTLINKS). Likewise, when --usedirs is true, index is treated as a special case, since it generates "index.html" and not "index/index.html".
Unfortunately, I'm not sure what the best approach to solving this would be. --KathrynAndersen
For security reasons, one of the sites I'm in charge of uses a Reverse Proxy to grab the content from another machine behind our firewall. Let's call the out-facing machine Alfred and the one behind the firewall Betty.
For the static pages, everything is fine. However, when trying to use the search, all the links break. This is because, when Alfred passes the search query on to Betty, the search result has a "base" tag which points to Betty, and all the links to the "found" pages are relative. So we have
<base href="Betty.example.com"/>
...
<a href="./path/to/found/page/">path/to/found/page</a>
This breaks things for anyone on Alfred, because Betty is behind a firewall and they can't get there.
What would be better is if it were possible to have a "base" which didn't reference the hostname, and for the "found" links not to be relative. Something like this:
<base href="/"/>
...
<a href="/path/to/found/page/">path/to/found/page</a>
The workaround I've come up with is this.
- Set the "url" in the config to ' ' (a single space). It can't be empty because too many things complain if it is.
- Patch the search plugin so that it saves an absolute URL rather than a relative one.
Here's a patch:
diff --git a/IkiWiki/Plugin/search.pm b/IkiWiki/Plugin/search.pm
index 3f0b7c9..26c4d46 100644
--- a/IkiWiki/Plugin/search.pm
+++ b/IkiWiki/Plugin/search.pm
@@ -113,7 +113,7 @@ sub indexhtml (@) {
}
$sample=~s/\n/ /g;
- my $url=urlto($params{destpage}, "");
+ my $url=urlto($params{destpage}, undef);
if (defined $pagestate{$params{page}}{meta}{permalink}) {
$url=$pagestate{$params{page}}{meta}{permalink}
}
It works for me, but it has the odd side-effect of prefixing links with a space. Fortunately that doesn't seem to break browsers. And I'm sure someone else could come up with something better and more general.
The `<base href>` is required to be genuinely absolute (HTML 4.01 §12.4). Have you tried setting `url` to the public-facing URL, i.e. with `alfred` as the hostname? That seems like the cleanest solution to me; if you're one of the few behind the firewall and you access the site via `betty` directly, my HTTP vs. HTTPS cleanup in recent versions should mean that you rarely get redirected to `alfred`, because most URLs are either relative or "local" (start with '/'). --smcv

I did try setting `url` to the "Alfred" machine, but that doesn't seem clean to me at all, since it forces someone to go to Alfred when they started off on Betty. Even worse, it prevents me from setting up a test environment on, say, Cassandra, because as soon as one tries to search, one goes to Alfred, then Betty, and not back to Cassandra at all. Hardcoded solutions make me nervous.

I suppose what I would like would be to not need to use a `<base href>` in searching at all. --KathrynAndersen

`<base href>` is not required to be absolute in HTML5, so when `html5: 1` is used, I've changed it to be host-relative in most cases. I think that at least partially addresses this bug report, particularly if we generate HTML5 by default like I've suggested.

The `<base>` is there so we can avoid having to compute how to get to (the virtual directory containing) the root of the wiki from `ikiwiki.cgi`, which might well be somewhere odd like `/cgi-bin/`. I think there are probably other things that it fixes or simplifies. --smcv
On FreeBSD, perl defaults to installation in `/usr/local/bin/perl` since it is not a part of the base system. If the option to create symlinks in `/usr/bin` is not selected, building and running ikiwiki will fail because the shebang lines use `#!/usr/bin/perl [args]`. Changing this to `#!/usr/bin/env -S perl [args]` fixes the issue.

I think this should be a concern of ikiwiki's official FreeBSD port.

At any rate, even if it is decided that ikiwiki should be fixed, then it is probably better to use `$installbin/perl` from `-MConfig` and not the `env` hack.
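A rough sketch of the `-MConfig` approach; `$Config{installbin}` is real, but the shebang-rewriting script and the filename are only illustrative:

    #!/usr/bin/perl
    use warnings;
    use strict;
    use Config;

    my $perlbin = "$Config{installbin}/perl";   # e.g. /usr/local/bin/perl on FreeBSD

    # illustrative only: point a script's shebang at that interpreter
    my $file = 'some-script.pl';                # hypothetical filename
    open(my $in, '<', $file) or die "$file: $!";
    my @lines = <$in>;
    close $in;
    $lines[0] =~ s{^#!\s*\S*perl\b}{#!$perlbin} if @lines;
    open(my $out, '>', $file) or die "$file: $!";
    print $out @lines;
    close $out;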
The inline and comments plugins both generate feed links.

In both cases, the generated markup includes an element with `id="feedlink"`.
XHTML 1.0 Strict (Ikiwiki's default output type) forbids multiple elements with the same ID:
In XML, fragment identifiers are of type ID, and there can only be a single attribute of type ID per element. Therefore, in XHTML 1.0 the id attribute is defined to be of type ID. In order to ensure that XHTML 1.0 documents are well-structured XML documents, XHTML 1.0 documents MUST use the id attribute when defining fragment identifiers on the elements listed above. See the HTML Compatibility Guidelines for information on ensuring such anchors are backward compatible when serving XHTML documents as media type text/html.
As does W3C's HTML5.
Any page with both a comments feed and an inline feed will be invalid XHTML 1.0 Strict or HTML 5.
-- Jon
?version 3.2011012 suggests this is fixed for `inline`, at least; I will test to see if it is cleared up for comments too. -- Jon
At least my setup on kapsi.fi always prints 404 Not Found after adding a page with non-ascii characters in the name. But the page exists and is visible after the 404 (with url encoding), and the blog page is inlined correctly on the feed page.

Apparently ikiwiki.info does not complain with a 404. Should the character encoding be set in the wiki config?
Happens also after editing the page. Here's an example:
- page name displayed in 404: http://mcfrisk.kapsi.fi/skiing/posts/Iso-Sy%F6te%20Freeride%202011%20Teaser.html?updated
- page name in the blog feed: http://mcfrisk.kapsi.fi/skiing/posts/Iso-Sy%C3%B6te%20Freeride%202011%20Teaser.html
Difference is in the word Iso-Syöte. Perhaps the browser is also part of the game; I use Iceweasel from Debian unstable with default settings.
I remember seeing this problem twice before, and both times it was caused by a bug in the web server configuration. I think in at least one case it was due to an apache rewrite rule that did a redirect and mangled the correct encoding.
I recommend you check there. If you cannot find the problem with your web server, I recommend you get a http protocol dump while saving the page, and post it here for analysis. You could use tcpdump, or one of the browser plugins that allows examining the http protocol. --Joey
Server runs Debian 5.0.8 but I don't have access to the Apache configs. Here's the tcp stream from wireshark without cookie data; the page name is testiä.html. I guess the page name is in utf-8, but in the redirect after the post it is given to the browser as 8859-1.
POST /ikiwiki.cgi HTTP/1.1
Host: mcfrisk.kapsi.fi
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.16) Gecko/20110107 Iceweasel/3.5.16 (like Firefox/3.5.16)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://mcfrisk.kapsi.fi/ikiwiki.cgi
Cookie: XXXX
Content-Type: multipart/form-data; boundary=---------------------------138059850619952014921977844406
Content-Length: 1456
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="_submitted"
2
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="do"
edit
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="sid"
93c956725705aa0bbdff98e57efb28f4
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="from"
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="rcsinfo"
5419fbf402e685643ca965d577dff3dafdd0fde9
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="page"
testi..
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="type"
mdwn
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="editcontent"
test
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="editmessage"
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="_submit"
Save Page
-----------------------------138059850619952014921977844406
Content-Disposition: form-data; name="attachment"; filename=""
Content-Type: application/octet-stream
-----------------------------138059850619952014921977844406--
HTTP/1.1 302 Found
Date: Wed, 02 Feb 2011 19:45:49 GMT
Server: Apache/2.2
Location: /testi%E4.html?updated
Content-Length: 0
Keep-Alive: timeout=5, max=500
Connection: Keep-Alive
Content-Type: text/plain
GET /testi%E4.html?updated HTTP/1.1
Host: mcfrisk.kapsi.fi
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.16) Gecko/20110107 Iceweasel/3.5.16 (like Firefox/3.5.16)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://mcfrisk.kapsi.fi/ikiwiki.cgi
Cookie: XXXX
HTTP/1.1 404 Not Found
Date: Wed, 02 Feb 2011 19:45:55 GMT
Server: Apache/2.2
Content-Length: 279
Keep-Alive: timeout=5, max=499
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /testi..html was not found on this server.</p>
<hr>
<address>Apache/2.2 Server at mcfrisk.kapsi.fi Port 80</address>
</body></html>
Getting the pages has worked every time:
GET /testi%C3%A4.html HTTP/1.1
Host: mcfrisk.kapsi.fi
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.16) Gecko/20110107 Iceweasel/3.5.16 (like Firefox/3.5.16)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: XXXX
If-Modified-Since: Wed, 02 Feb 2011 19:45:54 GMT
If-None-Match: "1b518d-7c0-49b51e5a55c5f"
Cache-Control: max-age=0
HTTP/1.1 304 Not Modified
Date: Wed, 02 Feb 2011 20:01:43 GMT
Server: Apache/2.2
Connection: Keep-Alive
Keep-Alive: timeout=5, max=500
ETag: "1b518d-7c0-49b51e5a55c5f"
It looks like there is no way to logout of ikiwiki at present, meaning that if you edit the ikiwiki in, say, a cybercafe, the cookie remains... is there some other security mechanism in place that can check for authorization, or should I hack in a logout routine into ikiwiki.cgi?
Click on "Preferences". There is a logout button there. --liw
It would be nice if it were not buried there, but putting it on the action bar statically would be confusing. The best approach might be to use javascript. --Joey
I agree that javascript seems to be a solution, but my brain falls off the end of the world while looking at ways to manipulate the DOM. (I'd argue also in favor of the openid_provider cookie expiring in less time than it does now, and being session based)
(The `openid_provider` cookie is purely a convenience cookie to auto-select the user's openid provider the next time they log in. As such, it cannot be a session cookie. It does not provide any personally-identifying information so it should not really matter when it expires.) --Joey

It would be nice to move navigational elements to the upper right corner of the page...
I have two kinds of pages (wiki and blog), and three classes of users:

- anonymous users - display things like login, help, and recentchanges
- non-admin users - on a per subdir basis (blog and !blog) display logout, help, recentchanges, edit, comment
- admin users - logout, help, recentchanges, edit, comment, etc
I was referred to this page from a posting to the forum. I am also interested in being able to use user class and status to modify the page. I will try to put together a plugin. From what I can see there need to be a few items in it:

- It should expose a link to a dedicated login page that, once logged in, returns the user to the calling page, or at least the home page. I have started a plugin to do this: justlogin
- It needs to expose a link to a little json explaining the type of user and login status (see the rough sketch below).
- It should expose a link that logs the person out and returns to the calling page, or at least the home page.

Then there would need to be a little javascript to use these links appropriately. I have little javascript experience but I know that can be done. I am less sure if it is possible to add this functionality to a plugin so I'll start with that. If no one objects I will continue to post here if I make progress. If anyone has any suggestions on how to modify my approach to code it in an easier way I'd appreciate the input. justint
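A very rough sketch of the JSON status idea above, assuming a `sessioncgi` hook and an invented `do=loginstatus` CGI action; none of this exists yet, and the admin check is an assumption:

    #!/usr/bin/perl
    package IkiWiki::Plugin::loginstatus;

    use warnings;
    use strict;
    use IkiWiki 3.00;
    use JSON::PP;

    sub import {
        hook(type => "sessioncgi", id => "loginstatus", call => \&sessioncgi);
    }

    sub sessioncgi ($$) {
        my ($cgi, $session) = @_;
        return unless ($cgi->param('do') || '') eq 'loginstatus';

        my $name = $session->param('name');
        print $cgi->header(-type => 'application/json', -charset => 'utf-8');
        print encode_json({
            loggedin => defined $name ? 1 : 0,
            user     => $name,
            # assumes IkiWiki::is_admin; adjust to however admin status is checked
            isadmin  => (defined $name && IkiWiki::is_admin($name)) ? 1 : 0,
        });
        exit;
    }

    1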
I'd like the more plugin and RSS to play better together. In the case of the html generation of the main page of a blog, I'd like to get the first paragraph out, but keep RSS as a full feed.
Maybe there is a different plugin (I also tried toggle)?
I am not a fan of the more directive (thus the rant about it sucking embedded in its example). But I don't think that weakening it to not work in rss feeds is a good idea; if someone wants to force users to go somewhere to view their full content, they should be able to do it, even though it does suck.
The toggle directive will degrade fairly well in an rss feed to display the full text. (There is an annoying toggle link that does nothing when embedded in an rss feed). --Joey
I also note that, at least currently, more seems to break on a few pages, not being parsed at all when aggregated into the front page.
It's just a simple directive, it should work anywhere any directive will, and does as far as I can see. Details? --Joey
see also: rss feeds do not use recommended encoding of entities for some fields
Wide characters should probably be supported, or, at the very least, warned about.
Test case:
mkdir -p ikiwiki-utf-test/raw ikiwiki-utf-test/rendered
for page in txt mdwn; do
echo hello > ikiwiki-utf-test/raw/$page.$page
for text in 8 16 16BE 16LE 32 32BE 32LE; do
iconv -t UTF$text ikiwiki-utf-test/raw/$page.$page > ikiwiki-utf-test/raw/$page-utf$text.$page;
done
done
ikiwiki --verbose --plugin txt --plugin mdwn ikiwiki-utf-test/raw/ ikiwiki-utf-test/rendered/
www-browser ikiwiki-utf-test/rendered/ || x-www-browser ikiwiki-utf-test/rendered/
# rm -r ikiwiki-utf-test/ # some browsers rather stupidly daemonize themselves, so this operation can't easily be safely automated
BOMless LE and BE input is probably a lost cause.
Optimally, UTF-16 (which is ubiquitous in the Windows world) and UTF-32 should be fully supported, probably by converting to mostly-UTF-8 and using `&#xXXXX;` or `&#DDDDD;` XML escapes where necessary.
Suboptimally, UTF-16 and UTF-32 should be converted to UTF-8 where cleanly possible and a warning printed where impossible.
Reading the wikipedia pages about UTF-8 and UTF-16, all valid Unicode characters are representable in UTF-8, UTF-16 and UTF-32, and the only errors possible with UTF-16/32 -> UTF-8 translation are when there are encoding errors in the original document.
Of course, it's entirely possible that not all browsers support utf-8 correctly, and we might need to support the option of encoding into CESU-8 instead, which has the side-effect of allowing the transcription of UTF-16 or UTF-32 encoding errors into the output byte-stream, rather than pedantically removing those bytes.
An interesting question would be how to determine the character set of an arbitrary new file added to the repository, unless the repository itself handles character-encoding, in which case, we can just ask the repository to hand us a UTF-8 encoded version of the file.
-- Martin Rudat
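A rough sketch of the BOM-based detection and conversion discussed above; it only handles BOM-ful input (BOM-less LE/BE being a lost cause, as noted) and falls back to assuming UTF-8:

    use Encode qw(decode);

    # longer BOMs are checked first so UTF-32LE is not mistaken for UTF-16LE
    my %bom = (
        "\xEF\xBB\xBF"     => 'UTF-8',
        "\xFF\xFE\x00\x00" => 'UTF-32LE',
        "\x00\x00\xFE\xFF" => 'UTF-32BE',
        "\xFF\xFE"         => 'UTF-16LE',
        "\xFE\xFF"         => 'UTF-16BE',
    );

    sub to_utf8 {
        my $raw = shift;
        foreach my $b (sort { length $b <=> length $a } keys %bom) {
            if (substr($raw, 0, length $b) eq $b) {
                return decode($bom{$b}, substr($raw, length $b));
            }
        }
        return decode('UTF-8', $raw);   # no BOM: assume UTF-8
    }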
Consider this:
$ wget http://nic-nac-project.de/~schwinge/ikiwiki/cutpaste_filter.tar.bz2
$ wget http://nic-nac-project.de/~schwinge/ikiwiki/0001-cutpaste.pm-missing-filter-call.patch
$ tar -xj < cutpaste_filter.tar.bz2
$ cd cutpaste_filter/
$ ./render_locally
$ find "$PWD".rendered/ -type f -print0 | xargs -0 grep -H -E 'FOO|BAR'
[notice one FOO in there]
$ rm -rf .ikiwiki "$PWD".rendered
$ cp /usr/share/perl5/IkiWiki/Plugin/cutpaste.pm .library/IkiWiki/Plugin/
$ patch -p0 < ../cutpaste_filter.patch
$ ./render_locally
$ find "$PWD".rendered/ -type f -print0 | xargs -0 grep -H -E 'FOO|BAR'
[correct; notice no more FOO]
I guess this needs a general audit -- there are other places where `preprocess` is being done without `filter`ing first, for example the `copy` function in the same file.
So, in English, page text inside a cut directive will not be filtered. Because the cut directive takes the text during the scan pass, before filtering happens.
Commit 192ce7a238af9021b0fd6dd571f22409af81ebaf and po vs templates has to do with this. There I decided that filter hooks should only act on the complete text of a page.
I also suggested that anything that wants to reliably s/FOO/BAR/ should probably use a sanitize hook, not a filter hook. I think that would make sense in this example.
I don't see any way to make cut text be filtered while satisfying these constraints, without removing cutpaste's ability to have forward pastes of text cut later in the page. (That does seem like an increasingly bad idea..) --Joey
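For comparison, a minimal sketch of doing a reliable s/FOO/BAR/ from a `sanitize` hook, as suggested above (the plugin name is made up):

    #!/usr/bin/perl
    package IkiWiki::Plugin::foo2bar;

    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
        hook(type => "sanitize", id => "foo2bar", call => \&sanitize);
    }

    # sanitize runs on the page's html after preprocessing, so text that cut
    # stashed away and paste re-inserted is included by the time we see it
    sub sanitize (@) {
        my %params=@_;
        my $content=$params{content};
        $content=~s/FOO/BAR/g;
        return $content;
    }

    1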
OK -- so the FOO/BAR thing was only a very stripped-down example, of course, and the real thing is being observed with the getfield plugin. This one needs to run before `preprocess`ing, for its `{{$page#field}}` syntax is (a) meant to be usable inside ikiwiki directives, and (b) the field values are meant to still be `preprocess`ed before being embedded. That's why it's using the `filter` hook instead of `sanitize`.

Would adding another kind of hook be a way to fix this? My idea is that cut (and others) would then take their data not during `scan`ning, but after `filter`ing.
If `comments_allowdirectives` is set, previewing a comment can run directives that create files. (Eg, img.) Unlike editpage, it does not keep track of those files and expire them. So the files will linger in destdir forever.
Probably when the user then tries to save the comment, ikiwiki will refuse to overwrite the unknown file, and will crash. --Joey
I'd like a way to always ask the RCS (Git) to update a file's mtime in
refresh mode. This is currently only done on the first build, and later
for --gettime --rebuild
. But always rebuilding is too heavy-weight for
this use-case. My options are to either manually set the mtime before
refreshing, or to have ikiwiki do it at command. I used to do the
former, but would now like the latter, as ikiwiki now generally does this
timestamp handling.
From a quick look, the code in IkiWiki/Render.pm:find_new_files
is
relevant: if (! $pagemtime{$page}) { [...]
.
How would you like to tackle this?
This could be done via a `needsbuild` hook. The hook is passed the list of changed files, and it should be safe to call `rcs_getmtime` and update the `pagemtime` for each.

That lets the feature be done by a plugin, which seems good, since `rcs_getmtime` varies between very slow and not very fast, depending on VCS.

AFAICS, the only use case for doing this is if you commit changes and then delay pushing them to a DVCS repo. Since then the file mtime will be when the change was pushed, not when it was committed. But I've generally felt that recording when a change was published to the repo of a wiki as its mtime is good enough. --Joey
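A rough sketch of that needsbuild-hook approach; the `needsbuild` hook and `rcs_getmtime` are real interfaces, but the plugin itself is hypothetical and untested:

    #!/usr/bin/perl
    package IkiWiki::Plugin::vcsmtime;

    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
        hook(type => "needsbuild", id => "vcsmtime", call => \&needsbuild);
    }

    sub needsbuild (@) {
        my $needsbuild=shift;
        foreach my $file (@$needsbuild) {
            my $mtime=IkiWiki::rcs_getmtime($file);
            next unless defined $mtime && $mtime > 0;
            # %pagemtime is not exported, hence the fully qualified name
            $IkiWiki::pagemtime{pagename($file)}=$mtime;
        }
        return $needsbuild;
    }

    1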
I'm attempting a merge with the SVN plugin via the web interface with ikiwiki-3.20100403 and subversion 1.6.11.
The web interface says
Your changes conflict with other changes made to the page.
Conflict markers have been inserted into the page content. Reconcile the conflict and commit again to save your changes.
However there are no merge conflict markers in the page. My apache error log says:
[Fri Apr 30 16:43:57 2010] [error] [client 10.64.64.42] svn: Commit failed (details follow):, referer: https://unixwiki.ncl.ac.uk/ikiwiki.cgi
[Fri Apr 30 16:43:57 2010] [error] [client 10.64.64.42] svn: Authorization failed, referer: https://unixwiki.ncl.ac.uk/ikiwiki.cgi
-- Jon
The only way for this to be improved would be for the svn plugin to explicitly check the file for conflict markers. I guess it could change the error message then, but the actual behavior of putting the changed file back in the editor so the user can recommit is about right as far as error recovery goes. --Joey
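A sketch of what that explicit check could look like (just the detection; wiring it into the error message is left out, and the helper name is made up):

    # after a failed commit, look for conflict markers in the merged file
    sub has_conflict_markers ($) {
        my $content=shift;
        return $content =~ /^(?:<{7}|={7}|>{7})/m;
    }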
Lighttpd apparently sets REDIRECT_STATUS=200 for the server.error-handler-404 page. This breaks the 404 plugin which checks this variable for 404 before processing the URI. It also doesn't seem to set REDIRECT_URL.
For what it's worth, the first half is http://redmine.lighttpd.net/issues/1828. One workaround would be to make this script your 404 handler:
    #!/bin/sh
    REDIRECT_STATUS=404; export REDIRECT_STATUS
    REDIRECT_URL="$SERVER_NAME$REQUEST_URI"; export REDIRECT_URL
    exec /path/to/your/ikiwiki.cgi "$@"
--smcv
I was able to fix my server to check the REQUEST_URI for ikiwiki.cgi and to continue processing if it was not found, passing `$ENV{SERVER_NAME} . $ENV{REQUEST_URI}` as the first parameter to cgi_page_from_404. However, my perl is terrible and I just made it work rather than figuring out exactly what to do to get it to work on both lighttpd and apache.
This is with lighttpd 1.4.19 on Debian.
/cgi-bin/ikiwiki.cgi?do=goto also provides redirection in the same way, if that's any help? You might need to set the lighttpd 404 handler to that, then compose REDIRECT_URL from other variables if necessary.
I originally wrote the plugin for Apache; weakish contributed the lighttpd docs and might know more about how to make it work there. --smcv
As I said, I got it working for me, but somebody who knows perl should probably look at it with the aim of making it work for everyone. I considered having lighttpd construct a proper url for the 404 redirect itself, but I don't know if it can do something like that or not. For what it's worth, here's the change I made to the module:
sub cgi ($) {
my $cgi=shift;
if ($ENV{REQUEST_URI} !~ /ikiwiki\.cgi/) {
my $page = cgi_page_from_404(
Encode::decode_utf8($ENV{SERVER_NAME} . $ENV{REQUEST_URI}),
$config{url}, $config{usedirs});
IkiWiki::Plugin::goto::cgi_goto($cgi, $page);
}
# if (exists $ENV{REDIRECT_STATUS} &&
# $ENV{REDIRECT_STATUS} eq '404') {
# my $page = cgi_page_from_404(
# Encode::decode_utf8($ENV{REDIRECT_URL}),
# $config{url}, $config{usedirs});
# IkiWiki::Plugin::goto::cgi_goto($cgi, $page);
# }
}
It seems that rebuilding a wiki (`ikiwiki --rebuild`) after changing the `underlaydir` config option doesn't remove the pages coming from the previous underlaydir.
I've noticed this with the debian package version 3.20100102.3~bpo50+1.
Perhaps it is possible to improve this or mention it in the manual page?
--prosper
--rebuild causes ikiwiki to throw away all its info about what it built before, so it will never clean up pages that have been removed, by any means. Suggest you do a --refresh, possibly followed by a --rebuild if that is really necessary. --Joey
I have a page with the name "umläute". When I try to remove it, ikiwiki says:
Error: ?umläute does not exist
I'm curious about the '?' in the "?umläute" message. Suggests that the filename starts with another strange character. Can I get a copy of a git repository or tarball containing this file? --Joey
I wrote the following patch, which seems to work on my machine. I'm running on FreeBSD 6.3-RELEASE with ikiwiki-3.20100102.3 and perl-5.8.9_3.
--- remove.pm.orig 2009-12-14 23:26:20.000000000 +0100
+++ remove.pm 2010-01-18 17:49:39.000000000 +0100
@@ -193,6 +193,7 @@
# and that the user is allowed to edit(/remove) it.
my @files;
foreach my $page (@pages) {
+ $page = Encode::decode_utf8($page);
check_canremove($page, $q, $session);
# This untaint is safe because of the
The problem with this patch is that, in a recent fix to the same plugin, I made `@pages` come from `$form->field("page")`, and that, in turn, is already run through `decode_form_utf8` just above the code you patched. So I need to understand why that is apparently not working for you. (It works fine for me, even when deleting a file named "umläute".) --Joey
Update, having looked at the file in the src of the wiki that is causing trouble for remove, it is:

    uml\303\203\302\244ute.mdwn

And that is not utf-8 encoded; represented the same way, the utf-8 encoding would be:

    uml\303\244ute.mdwn

I think it's doubly-utf-8 encoded, which perhaps explains why the above patch works around the problem (since the page name gets doubly-decoded with it). The patch doesn't fix related problems when using remove, etc.
Apparently, on apoca's system, perl encodes filenames differently depending on locale settings. On mine, it does not. Ie, this perl program always creates a file named `uml\303\244ute`, no matter whether I run it with LANG="" or LANG="en_US.UTF-8":

    perl -e 'use IkiWiki; writefile("umläute", "./", "baz")'
Remains to be seen if this is due to the older version of perl used there, or perhaps FreeBSD itself. --Joey
Update: Perl 5.10 fixed the problem. --Joey
To reproduce:
- Add the backlinkbug plugin below to ikiwiki.
- Create a page named test.mdwn somewhere in the wiki.
- Refresh ikiwiki in verbose mode. Pages whose bestlink is the test.mdwn page will be printed to the terminal.
- Delete test.mdwn.
- Refresh ikiwiki in verbose mode again. The same pages will be printed to the terminal again.
- Refresh ikiwiki in verbose mode another time. Now no pages will be printed.
bestlink() checks %links (and %pagecase) to confirm the existence of the page. However, find_del_files() does not remove the deleted page from %links (and %pagecase).
Since find_del_files removes the deleted page from %pagesources and %destsources, won't it make sense for bestlink() to check %pagesources first? --harishcm
This same problem turned out to also be the root of half of ikiwiki's second-oldest bug, bestlink change update issue.
Fixing it is really a bit involved, see commit f1ddf4bd98821a597d8fa1532092f09d3d9b5483. The fix I committed fixes bestlink to not return deleted pages, but only after the needsbuild and scan hooks are called. So I was able to fix it for every case except the one you gave! Sorry for that. To fix it during needsbuild and scan, a much more involved approach would be needed. AFAICS, no existing plugin in ikiwiki uses bestlink in needsbuild or scan though.
If the other half of bestlink change update issue is fixed, maybe by keeping a copy of the old backlinks info, then that fix could be applied here too. --Joey
Cool, that was fast! Well, at least half the bug is solved. For now I'll probably try using a workaround when using bestlink within the needsbuild or scan hooks. Maybe by testing if pagemtime equals zero. --harishcm
Yeah, and bestlink could also do that. However, it feels nasty to have it need to look at pagemtime. --Joey
#!/usr/bin/perl
# Plugin to reproduce bestlink returning deleted pages.
# Run with ikiwiki in verbose mode.
package IkiWiki::Plugin::bestlinkbug;
use warnings;
use strict;
use IkiWiki 3.00;
sub import {
hook(type => "getsetup", id => "bestlinkbug", call => \&getsetup);
hook(type => "needsbuild", id => "bestlinkbug", call => \&needsbuild);
}
sub getsetup () {
return
plugin => {
safe => 1,
rebuild => 0,
},
}
sub needsbuild (@) {
my $needsbuild=shift;
foreach my $page (keys %pagestate) {
my $testpage=bestlink($page, "test") || next;
debug("$page");
}
}
1
The map directive sorts by pagename. That looks kind of odd when used together with show=title. I would expect it to sort by title then.
This would be quite hard to fix. Map sorts the pages it displays by page name, which has the happy effect of making "foo/bar" come after "foo"; which it has to do, so that it can be displayed as a child of the page it's located in. If sorting by title, that wouldn't hold. So, map would have to be effectively totally rewritten, to build up each group of child pages, and then re-sort those. --Joey
Ok, you are right, that would break the tree. This made me think that I do not need to generate a tree for my particular use case, just a list, so I thought I could use inline instead. This created two new issues:
inline also does sort by pagename even when explicitly told to sort by title.
I cannot get inline to create a list when the htmltidy plugin is switched on. I have a template which is enclosed in an li tag, and I put the ul tag around the inline manually, but htmltidy breaks this. --martin
You might want to check if the report plugin solves your problem. It can sort by title, among other things. --KathrynAndersen
I realise OP posted this 10 years ago, but here's how I do it: I generate https://jmtd.net/log/all/ using `inline` with `archive=yes` and a custom template which defines the LI element content for each post. I then include these inlines (one per calendar year) via another template, which has the wrapping UL elements in it. These templates are `.tmpl` files (and live in my custom `templatedir`, although that might not matter), which means they avoid the htmlscrubber. — Jon, 2020-04-27

See also: sort parameter for map plugin and directive --smcv
I'm trying to make a pretty theme for ikiwiki and I'm making progress (or at least I think I am :-). However I've noticed an issue when it comes to theming. On the front page the wiki name is put inside the "title" span and on all the other pages, it's put in the "parentlinks" span. See here:
From my dev home page:
<div class="header">
<span>
<span class="parentlinks">
</span>
<span class="title">
adam.shand.net/iki-dev
</span>
</span><!--.header-->
</div>
From a sub-page of my dev home page:
<div class="header">
<span>
<span class="parentlinks">
<a href="../">adam.shand.net/iki-dev/
</span>
<span class="title">
recipes
</span>
</span><!--.header-->
</div>
I understand the logic behind doing this (on the front page it is the title as well as the name of the wiki) however if you want to do something different with the title of a page vs. the name of the wiki it makes things pretty tricky.
I'll just modify the templates for my own site but I thought I'd report it as a bug in the hopes that it will be useful to others.
Cheers,
AdamShand.
I just noticed that it's also different on the comments, preferences and edit pages. I'll come up with a diff and see what you guys think. -- Adam.
Example:
[[`\[[!taglink TAG\]\]`|plugins/tag]]
gives:
[[\[[!taglink TAG\]\]|plugins/tag]]
Expected: there is a wikilink with the complex text as the displayed text. --Ivan Z.
aggregate takes a name parameter that specifies a global name for a feed. This causes some problems:
- If a site has multiple pages that aggregate, and they use the same name, one will win and get the global name, the other will claim it's working, but it's really showing what the other aggregated.
- If an aggregate directive is moved from page A to page B, and the wiki refreshed, aggregate does not realize the feed moved, and so it will keep aggregated pages under `A/feed_name/*`. To work around this bug, you have to delete A, refresh (maybe with --aggregate?), and then add B.
Need to find a way to not make the name be global. Perhaps it needs to include the name of the page that contains the directive?
The remove plugin does not report an error if git rm fails. (It probably doesn't when other VCS backends fail either.) This can happen for example if a page in your source directory is not a tracked file for whatever reason (in my case, due to renaming the files and forgetting to commit that change).
-- Jon
Error received when clicking on the "edit" link:
Error: [CGI::FormBuilder::AUTOLOAD] Fatal: Attempt to address non-existent field 'text' by name at /home/tealart/bin/share/perl/5.8.4/IkiWiki/CGI.pm line 112
Error received when following a "Create New Page" (eg. ?) link:
Error: [CGI::FormBuilder::AUTOLOAD] Fatal: Attempt to address non-existent field 'param' by name at /home/tealart/bin/share/perl/5.8.4/IkiWiki/Plugin/editpage.pm line 122
I could probably find several other flavors of this error if I went looking, but I trust you get the idea.
The CGI starts to render (this isn't the "you forgot to set the permissions/turn on the CGI" error) and then fails.
Further details:
Running on shared hosting (dreamhost; but everything compiles, dependencies installed, the site generates perfectly, other CGIs work, the file permissions work).
It's running perl 5.8.4, but I did upgrade gettext to 0.17
the server is running gcc v3.3.5 (at this point, this is the main difference between the working system and my box.)
I've removed the locale declarations from both the config file and the environment variable.
I've also modified the page template and have my templates in a non standard location. The wiki compiles fine, with the template, but might this be an issue? The CGI script doesn't (seem) to load under the new template, but I'm not sure how to address this issue.
All of the required/suggested module dependencies are installed (finally) to the latest version including (relevantly) CGI::FormBuilder 3.0501.
I'm running ikiwiki v3.08. Did I mention that it works perfectly in nearly every other way that I've managed to test thusfar?
I suspect that your perl is too old and is incompatible with the version of CGI::FormBuilder you have installed.
If so, it seems likely that the same error message can be reproduced by running a simple command like this at the command line:
perl -e 'use warnings; use strict; use CGI::FormBuilder; my $form=CGI::FormBuilder->new; $form->text("boo")'
--Joey
nope, that command produces no output.
I considered downgrading CGI::FormBuilder but I saw evidence of previous versions being incompatible with ikiwiki so I decided against that.
-- tychoish
In ikiwiki 2.66, SVG images are not recognized as images. In ikiwiki.pm, the hardcoded list of image file extensions does not include ".svg", which it probably should unless there's some other issue about rendering SVGs?
The 'img' plugin also seems to not support SVGs.
SVG images can only be included via an `<object>`, `<embed>`, or `<iframe>` tag. Or, perhaps as inline SVG. The htmlscrubber strips all three tags since they can easily be used maliciously. If doing inline SVG, I'd worry that the svg file could be malformed and mess up the html, or even inject javascript. So, the only options seem to be only supporting svgs on wikis that do not sanitize their html, or assuming that svgs are trusted content and embedding them inline. None of which seem particularly palatable.

I suppose the other option would be converting the svg file to a static image (png). The img plugin could probably do that fairly simply. --Joey
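A rough sketch of the svg-to-png idea with PerlMagick, which the img plugin already uses for other formats; whether ImageMagick's svg delegate is available depends on the local build, and the filenames are made up:

    use Image::Magick;

    my $im = Image::Magick->new;
    my $err = $im->Read("diagram.svg");            # rasterize the svg
    $err ||= $im->Resize(geometry => "200x200");   # scale like img's size parameter
    $err ||= $im->Write("diagram.png");
    die "imagemagick failed: $err" if $err;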
This seems to have improved since; at least chromium can display svg images from `<img>` tags. Firefox 3.5.19 did not in my testing.

So, svgs can now be included on pages by linking to them, or by using the img directive. The most portable thing is to use the img directive plus some size, which forces them to be resized and a png to actually be displayed.
I have not yet tried to do anything with sanitizing them. --Joey
I'm working on inline SVG and MathML support in ikiwiki and I've modified my htmlscrubber to sanitize SVG and MathML using the whitelists from html5lib. Here's a patch. I've also made some notes about this here: svg.
I suspect that this bug may have caught the eye of anyone interested in this sort of thing. I'll elaborate a bit on my user page to avoid getting off-topic here. --JasonBlevins, October 21, 2008
A lot of strings in ikiwiki are hardcoded and not taken from locale resources through gettext. This is bad because it makes ikiwiki difficult to spread among non-english users.

I mean that, for instance in CGI.pm, a line like:
my @buttons=("Save Page", "Preview", "Cancel");
should be written as
my @buttons=(gettext("Save Page"), gettext("Preview"), gettext("Cancel"));
Yes, these need to be fixed. But note that the localised texts come back into ikiwiki and are used in various places, including plugins. Including, possibly, third-party plugins. So localising the buttons would seem to require converting from the translations back into the C locale when the form is posted. --Joey
Wouldn't it be easier to change all calls to the correct ones (including in plugins)? For instance in the same file (CGI.pm):

    elsif ($form->submitted eq gettext("Save Page")) {

That way no conversion to the C locale is needed. gettext use should just be publicized in documentation (at least in write). --bbb

It would be easy, but it could break third-party plugins that hardcode the english strings. It's also probably less efficient to run gettext over and over. --Joey
In the standard templates things seem wrongly written too. For instance in page.tmpl a line like:
<li><a href="<TMPL_VAR EDITURL>" rel="nofollow">Edit</a></li>
should be written as
<li><a href="<TMPL_VAR EDITURL>" rel="nofollow"><TMPL_VAR EDITURL_TEXT></a></li>
with EDITURL_TEXT variable initialized in Render.pm through a gettext call.
Am I wrong ?
No, that's not a sane way to localise the templates. The templates can be translated by making a copy and modifying it, or by using a tool to generate .mo files from the templates, and generate translated templates from .po files. (See l10n for one attempt.) But pushing the localisation of random strings in the templates through the ikiwiki program defeats the purpose of having templates at all. --Joey
If not I can spend some time preparing patches for such corrections if it can help.
-- bbb
A ?PageSpec that consists entirely of negated terminals, such as "!foo and !bar", matches all other pages, including all internal pages. This can lead to unexpected results, since it will match a bunch of recentchanges pages, etc.
Recall that internal-use pages are not matched by a glob. So "*" doesn't match them. So if the pagespec is "* and !foo and !bar", it won't match them. This is the much more common style.
There's an odd inconsistency with entirely negated pagespecs. If "!foo" matches page bar, shouldn't "" also match bar? But, the empty pagespec is actually special-cased to not match anything.
Indeed, it seems what would be best would be for "!foo" to not match any pages, unless it's combined with a terminal that positively matches pages ("* and !foo"). Although this would be a behavior change, with transition issues.
Another approach would be to try to detect the case of an entirely negated pagespec, and implicitly add "and !internal()" to it.
Either approach would require fully parsing the pagespec. And consider cases like "!(foo and !bar)". Doesn't seem at all easy to solve. --Joey
It occurs to me that at least one place in ikiwiki optimizes by assuming that pagespecs not mentioning the word "internal" never match internal pages. I wonder whether this bug could be solved by making that part of the definition of a pagespec, rather than a risky optimization like it is now? That seems strange, though - having this special case would make pagespecs significantly harder to understand. --smcv
The Atom and RSS templates use `ESCAPE=HTML` in the title elements. However, HTML-escaped characters aren't valid according to http://feedvalidator.org/.

Removing `ESCAPE=HTML` works fine, but I haven't checked to see if there are any characters it won't work for.

For Atom, at least, I believe adding `type="xhtml"` to the title element will work. I don't think there's an equivalent for RSS.
Removing the ESCAPE=HTML will not work; the feed validator hates that just as much. It wants rss feeds to use a specific style of escaping that happens to work in some large percentage of all rss consumers (most of which are broken). http://www.rssboard.org/rss-profile#data-types-characterdata There's also no actual spec about how this should work.
This will be a total beast to fix. The current design is very clean in that all (well, nearly all) xml/html escaping is pushed back to the templates. This allows plugins to substitute fields in the templates without worrying about getting escaping right in the plugins -- and a plugin doesn't even know what kind of template is being filled out when it changes a field's value, so it can't do different types of escaping for different templates.
The only reasonable approach seems to be extending HTML::Template with an ESCAPE=RSS and using that. Unfortunately its design does not allow doing so without hacking its code in several places. I've contacted its author to see if he'd accept such a patch.
(A secondary bug is that using meta title currently results in unnecessary escaping of the title value before it reaches the template. This makes the escaping issues show up much more than they need to, since lots more characters are currently being double-escaped in the rss.)
--Joey
Update: Ok, I've fixed this for titles, as a special case, but the underlying problem remains for other fields in rss feeds (such as author), so I'm leaving this bug report open. --Joey
I'm curious if there has been any progress on better RSS output? I've been prototyping a new blog, and getting good RSS out of it seems important, as the bulk of my current readers use RSS. I note, in passing, that the "more" plugin doesn't quite do what I want either - I'd like to pass a full RSS feed of a post and only have "more" apply to the front page of the blog. Is there a way to do that? -- ?dtaht
To be clear, the RSS spec sucks to such an extent that, as far as I know, there is no sort of title escaping that will work in all RSS consumers. Titles are currently escaped in the way that tends to break the fewest according to what I've read. If you're unlucky enough to have a "&" or "<" in your name, then you may still run into problems with how that is escaped in rss feeds. --Joey
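For reference, as I read that profile, the escaping style it recommends for character data roughly boils down to turning only "&" and "<" into entities. A minimal sketch of that idea (not what the templates actually do):

    # a sketch of RSS-profile style escaping for character data:
    # only "&" and "<" become entities, and the order matters
    sub escape_rss_characterdata {
        my $text = shift;
        $text =~ s/&/&amp;/g;   # must run before other entities are added
        $text =~ s/</&lt;/g;
        return $text;
    }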
When committing a page like this one, with an escaped toc directive in it:
[[!toc ]]
The recentchangesdiff comes back with it unescaped. Which can be confusing.
It would be nice if the aggregate plugin would try to extract the m/ctime out of each post and touch the files on the filesystem appropriately, so that ikiwiki reflects the actual time of the post via the inline plugin, rather than the time when the aggregation ran to pull the post in. --madduck
Like this? (Existing code in aggregate.pm...) --Joey
# Set the mtime, this lets the build process get the right creation
# time on record for the new page.
utime $mtime, $mtime, pagefile($guid->{page})
if defined $mtime && $mtime <= time;
I'll have to debug this; it's not working here... and this is an ikiwiki aggregator scraping another ikiwiki site.
Any news about this? --Joey
That would be useful to avoid "flooding" with old content when something new is added with aggregate and then listed with the inline directive. -- hugo
I am using mercurial as RCS backend and ikiwiki 2.40.
It seems that, when adding a blog post, it is not immediately committed to the mercurial repo. I have a page with this directive:
[[!inline pages="journal/blog2008/* and !*/Discussion" show="0" feeds="no" actions="yes" rootpage="journal/blog2008"]]
When I add a blog post, I see it on the wiki but it doesn't appear on History or RecentChanges. If I run `hg status` on the wiki source dir, I see the new file has been marked as `A` (ie, a new file that has not been committed).

If I then edit the blog post, then the file gets committed and I can see the edit on History and RecentChanges. The creation of the file remains unrecorded. --?buo
Ikiwiki calls `rcs_add()` if the page is new, followed by `rcs_commit()`. For mercurial, these run respectively `hg add` and `hg commit`. If the add or commit fails, it will print a warning to stderr; you might check apache's error.log to see if there's anything there. --Joey

The problem was using accented characters (é, í) in the change comments. I didn't have a UTF-8 locale enabled in my setup file. By coincidence this happened for the first time in a couple of consecutive blog posts, so I was mistaken about the root of the problem. I don't know if you will consider this behavior a bug, since it's strictly speaking a misconfiguration, but it still causes ikiwiki's mercurial backend to fail. A quick note in the docs might be a good idea. For my part, please close this bug, and thanks for the help. --?buo
So, in a non-utf8 locale, mercurial fails to commit if the commit message contains utf8? --Joey
(Sorry for the delay, I was AFK for a while.) What I am seeing is this: in a non-utf8 locale, using mercurial "stand-alone" (no ikiwiki involved), mercurial fails to commit if the commit message has characters such as á. If the locale is utf8, mercurial works fine (this is with mercurial 1.0).
However, the part that seems a bit wrong to me is this: even if my locale is utf8, I have to explicitly set a utf8 locale in the wiki's setup file, or the commit fails. It looks like ikiwiki is not using this machine's default locale, which is utf8. Also, I'm not getting any errors in apache's error log.
Wouldn't it make sense to use the machine's default locale if 'locale' is commented out in the setup file?
Ikiwiki wrappers only allow whitelisted environment variables through, and the locale environment variables are not included currently.
But that's not the whole story, because "machine's default locale" is not very well defined. For example, my laptop is a Debian system. It has a locale setting in /etc/environment (`LANG="en_US.UTF-8"`). But even if I start apache, making sure that LANG is set and exported in the environment, CGI scripts apache runs do not see LANG in their environment. (I notice that `/etc/init.d/apache` explicitly forces LANG=C. But CGI scripts don't see the C value either.) Apache simply does not propagate its runtime environment to CGI scripts, and this is probably to comply with the CGI specification (although it doesn't seem to completely rule out CGIs being passed other variables).

If mercurial needs a utf-8 locale, I guess the mercurial plugin needs to check if it's not in one, and do something sane (either fail earlier, or complain, or strip utf-8 out of comments). --Joey
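In the meantime, the practical workaround is to set the locale explicitly in the setup file, since the wrappers won't pass one through from the environment. A minimal sketch, assuming an en_US.UTF-8 locale is installed on the system:

    # in ikiwiki.setup: force a UTF-8 locale for the wrappers and CGI,
    # so mercurial commits with non-ASCII messages don't fail
    locale => 'en_US.UTF-8',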
As far as I can tell, ikiwiki is not checking the SSL certificate of the remote host when using openid authentication. If so, this would allow for man-in-the-middle type attacks. Alternatively, maybe I am getting myself confused.
Test #1: Enter a URL for an openid server that cannot be verified (either because the certificate is self-signed or signed by an unknown CA). I get no SSL errors.
Test #2: Download net_ssl_test from a dodgy source (it uses the same SSL perl library), and test again. It seems to complain (on the same site ikiwiki worked with) when it can't verify the signature. Although there is other breakage with the version I managed to download (eg. argument parsing is broken; also, if I try to connect to a proxy server, it instructs the proxy server to connect to itself for some weird reason).
For now, I want to try and resolve the issues with net_ssl_test, and run more tests. However, in the meantime, I thought I would document the issue here.
-- Brian May
Openid's security model does not rely on the openid consumer (ie, ikiwiki) performing any sanity checking of the openid server. All of the authentication goes on between your web browser and the openid server. This may involve ssl, or not.
Note that I'm not an openid expert, and the above may need to be taken with a grain of salt. I also can make no general statements about openid being secure. --Joey
For example, my openid is "http://joey.kitenet.net/". If I log in with this openid, ikiwiki connects to that http url to determine what openid server it uses, and then redirects my browser to the server (https://www.myopenid.com/server), which validates the user and redirects the browser back to ikiwiki with a flag set indicating that the openid was validated. At no point does ikiwiki need to verify that the https url is good. --Joey
Ok, so I guess the worst that could happen when ikiwiki talks to the http address is that it gets intercepted, and ikiwiki gets the wrong address. ikiwiki will then redirect the browser to the wrong address. An attacker could trick ikiwiki into redirecting to their site, which always validates the user and then redirects back to ikiwiki. The legitimate user may not even notice. That doesn't seem so secure to me...
All the attacker needs is access to the network somewhere between ikiwiki and http://joey.kitenet.net/, or the ability to inject false DNS host names for use by ikiwiki, and the rest is simple.
-- Brian May
I guess that the place to add SSL cert checking would be in either LWPx::ParanoidAgent or Net::OpenID::Consumer. Adding it to ikiwiki itself, which is just a user of those libraries, doesn't seem right.
It's not particularly clear to me how an SSL cert can usefully be checked at this level, where there is no way to do anything but succeed or fail; and where the extent of the check that can be done is that the SSL cert is issued by a trusted party and matches the domain name of the site being connected to. I also don't personally think that SSL certs are the right fix for DNS poisoning issues. --Joey
I was a bit vague myself on the details on openid. So I looked up the standard. I was surprised to note that they have already considered these issues, in section 15.1.2, http://openid.net/specs/openid-authentication-2_0.html#anchor41.
It says:
"Using SSL with certificates signed by a trusted authority prevents these kinds of attacks by verifying the results of the DNS look-up against the certificate. Once the validity of the certificate has been established, tampering is not possible. Impersonating an SSL server requires forging or stealing a certificate, which is significantly harder than the network based attacks."
With regards to implementation, I am surprised that the libraries don't seem to do this checking, already, and by default. Unfortunately, I am not sure how to test this adequately, see Debian bug #466055. -- Brian May
I think Crypt::SSLeay already supports checking the certificate. The trick is to get LWP::UserAgent, which is used by LWPx::ParanoidAgent, to enable this checking.
I think the trick is to set one of the following environment variables before retrieving the data:
$ENV{HTTPS_CA_DIR} = "/etc/ssl/certs/";
$ENV{HTTPS_CA_FILE} = "/etc/ssl/certs/file.pem";
Unfortunately I get weird results if the certificate verification fails; see Debian bug #503440. It still seems to work regardless, though.
-- Brian May
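A minimal standalone test of that behaviour, assuming Crypt::SSLeay is the SSL backend LWP ends up using and that it honours these variables as suggested above (with other backends the knobs differ):

    #!/usr/bin/perl
    # quick check: does an HTTPS fetch fail when the cert can't be verified?
    use LWP::UserAgent;
    $ENV{HTTPS_CA_DIR} = "/etc/ssl/certs/";   # directory of trusted CA certs
    my $ua  = LWP::UserAgent->new;
    my $res = $ua->get("https://www.myopenid.com/server");
    print $res->is_success
        ? "fetched ok\n"
        : "failed: " . $res->status_line . "\n";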
If I try to authenticate using openid to my site, it tries to create a http or https connection to the openid server. This doesn't work, because the direct connection is blocked by the firewall.
It would be good if ikiwiki supported setting up a proxy server to solve this.
I have found if I add:
newenviron[i++]="HTTPS_PROXY=http://host.domain.com:3128";
to IkiWiki/Wrapper.pm, it solves the problem for https requests; however, it would obviously be preferred if the proxy name were not hard-coded.
Also, the ability to set HTTPS_CA_FILE and HTTPS_CA_DIR might benefit some people. Then again, I can't see any evidence that the SSL certificate of the server is being checked. See the bug report I filed on this separate issue.
Unfortunately, HTTP_PROXY doesn't work for http:// requests; it looks like that library is different.
Update 2008-10-26:
Better solution, one that works for both http and https, and uses config options. It appears to work...
Note that using $ua->proxy(['https'], ...); won't work, you get a "Not Implemented" error, see http://community.activestate.com/forum-topic/lwp-https-requests-proxy. Also see Debian bug #129528.
Also note that the proxy won't work with liblwpx-paranoidagent-perl, I had to remove liblwpx-paranoidagent-perl first.
    louie:/usr/share/perl5/IkiWiki/Plugin# diff -u openid.pm.old openid.pm
    --- openid.pm.old   2008-10-26 12:18:58.094489360 +1100
    +++ openid.pm       2008-10-26 12:40:05.763429880 +1100
    @@ -165,6 +165,14 @@
                    $ua=LWP::UserAgent->new;
            }
    
    +       if (defined($config{"http_proxy"})) {
    +               $ua->proxy(['http'], $config{"http_proxy"});
    +       }
    +
    +       if (defined($config{"https_proxy"})) {
    +               $ENV{HTTPS_PROXY} = $config{"https_proxy"};
    +       }
    +
            # Store the secret in the session.
            my $secret=$session->param("openid_secret");
            if (! defined $secret) {
Brian May
Rather than adding config file settings for every useful environment variable, there is an ENV config file setting that can be used to set any environment variables you like. So, no change needed. --Joey
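A sketch of what that looks like in the setup file (the proxy host here is just a placeholder):

    # in ikiwiki.setup: pass proxy settings through to the wrappers/CGI
    ENV => {
        http_proxy  => 'http://host.domain.com:3128',
        https_proxy => 'http://host.domain.com:3128',
    },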
One thing I don't like about using ikiwiki for tracking bugs is I don't get notified when changes are made :-(.
Anyway, if you look at the code I pasted above, the environment variables do not work for http:// - you have to use $ua->proxy(...) for them. This is significant, because all openid servers in my version appear to have been defined with http://, not https://, in /usr/share/ikiwiki/openid-selector/ikiwiki/openid/openid-jquery.js
Use $ua->env_proxy() to get it to read the environment variables. Then http:// does work.
Unfortunately this breaks https:// even more - but nothing I do seems to make https:// work anymore.
LWP::UserAgent defaults to not caring about proxy settings in the environment. (To give control over the result, I guess?) To get it to care, pass `env_proxy => 1` to the constructor. Affected plugins: aggregate, openid, pinger. This probably wants to be on by default, and might not need to be configurable. --schmonz

Okay, in a real-world scenario it does need to be configurable. A working implementation (tested with aggregate, not tested with the other two plugins) is in my git, commit 91c46819dee237a281909b0c7e65718eb43f4119. --schmonz
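For the record, the constructor option looks like this (plain LWP::UserAgent; note that LWPx::ParanoidAgent strips proxy support, as mentioned below):

    use LWP::UserAgent;
    # honour http_proxy/https_proxy/no_proxy from the environment
    my $ua = LWP::UserAgent->new(env_proxy => 1);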
Oh, and according to the LWPx::ParanoidAgent docs, "proxy support is explicitly removed", so if ikiwiki can preferentially find that installed, even with the above commit, `openid` won't be able to traverse a proxy. --schmonz
I've redone this from scratch, much more simply, on a new branch. --schmonz.
After installing IkiWiki 2.16 on Mac OS X 10.4 server, I attempted to use "/Library/Application\ Support/IkiWiki/Working\ Copies" as the parent of my $SRCPATH and got "skipping bad filename" errors for any .mdwn file in that directory:
skipping bad filename /Library/Application Support/IkiWiki/Working Copies/ikiwikinewt/index.mdwn
The .ikiwiki directory is correctly created in that directory. I switched to using a path with no spaces and it works correctly.
The brokenlinks plugin falsely complains that formatting has a broken link to smileys, if the smiley plugin is disabled. While the page links to it inside a conditional, and so doesn't show the link in this case, ikiwiki scans for links without looking at conditionals and so still thinks the page contains the link.
This bug is described here:
If sandbox/page.mdwn has been generated and sandbox/sidebar.mdwn is created, the sidebar is only added to sandbox and none of the subpages. --TaylorKillian
Yes, a known bug. As noted in the code: --Joey
# FIXME: This isn't quite right; it won't take into account
# adding a new sidebar page. So adding such a page
# currently requires a wiki rebuild.
add_depends($page, $sidebar_page);
In markdown syntax, none of the other special characters get processed inside a code block. However, in ikiwiki, wiki links and preprocessor directives still get processed inside a code block, requiring additional escaping. For example, `[links don't work](#here)`, but a `[[wikilink]]` becomes HTML. --JoshTriplett
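As a workaround for individual cases (not a fix for the underlying bug), ikiwiki lets you escape a wikilink or directive with a leading backslash, so the text comes through literally even where the code-block rule fails:

    \[[wikilink]]   <- renders as the literal text [[wikilink]]
    \[[!toc ]]      <- same idea for directives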
Indented lines provide a good way to escape a block of text containing markdown syntax, but ikiwiki links like [[this]] are still interpreted within such a block. I think that interpretation should not be happening. That is, I should be able to write:

    [[this]]
and have it render like:
[[this]]
--?cworth
Has there been any progress or any ideas on this bug recently? I use an expanded CamelCase regexp, and without a lot of escaping in freelink text, url links, or code blocks, I get IkiWiki's attempt at creating a "link within a link".
I have no ideas, other than perhaps: once IkiWiki encounters [[ (or the position is reset with a backreference from a CamelCased word), further processing of wikilinks is disabled until the position is reset and a "do not make links" flag or variable is cleared.
I've come up with some really ugly workarounds to handle case-specific stuff like code blocks, but the problem creeps up again and again in unexpected places. I'd be happy to come up with a patch if anyone has a bright idea on a nice clean way (in theory) to fix this. I'm out of ideas.
--CharlesMauch
I've moved the above comment here because it seems to be talking about this bug, not the similar Smileys bug.
In the case of either bug, no, I don't have an idea of a solution yet. --Joey
I've now solved a similar bug involving the smiley plugin. The code used there should give some strong hints how to fix this bug, though I haven't tried to apply the method yet. --Joey
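Roughly, the smiley fix works by doing its substitution on the htmlized text and skipping anything between code/pre tags. A much simplified sketch of that idea (not the actual plugin code, and it ignores nesting):

    # simplified sketch: substitute only outside <code>/<pre> regions
    my $inside = 0;
    $content =~ s{(<(/?)(?:pre|code)[^>]*>)|(\[\[[^\]]+\]\])}{
        if (defined $1) {
            $inside = ($2 eq '') ? 1 : 0;  # opening tag sets, closing clears
            $1;                            # keep the tag itself
        }
        elsif ($inside) {
            $3;                            # leave text inside code/pre alone
        }
        else {
            handle_link($3);               # hypothetical handler, outside code/pre
        }
    }ge;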
As far as I can see, the smileys bug is solved by checking for code/pre. In this case, however, that is not applicable. WikiLinks/directives should be expanded before passing text to the formatter, as their expansion may contain markup. Directives should be processed beforehand, as they may provide partial markup (eg `template` ones) that makes no sense except in the page context. Links should be processed beforehand because at least multimarkdown may try to expand them as anchor-links.

For now, my partial solution is to restrict links to not have a space at the start; this way, in many cases, escaping in code can be done in a natural way without breaking copy-pastability. For example, shell `if [[ condition ]];` will work fine with this.
Maybe directives can also be restricted to only be allowed on a line by themselves (not separated by blank lines, however), or something similar.
--?isbear
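A sketch of that restriction as a link regexp tweak (hypothetical, not ikiwiki's actual regexp): require a non-space character right after the opening brackets, so shell-style tests are ignored:

    # hypothetical link regexp: "[[ condition ]]" (with a space) is
    # no longer treated as a wikilink, but "[[link]]" still is
    my $link_regexp = qr{
        \[\[        # opening brackets
        (?=\S)      # next character must not be whitespace
        ([^\s\]]+)  # link target: no spaces, no "]"
        \]\]        # closing brackets
    }x;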
I found a bug where footnotes can't be added inside tables (with both the table plugin and markdown) because the markup isn't processed in the page context. What you describe regarding the processing order should fix it. Never mind: it generally seems that footnotes aren't allowed in tables, and footnotes can't be referred to more than once in Markdown. --awesomeadam
The header of subpages always links to its "superpage", even if it doesn't exist. I'm not sure if this is a feature or a bug, but I would certainly prefer that superpages weren't mandatory.
For example, if you are in 'example/page.html', the header will be something like 'wiki / example / page'. Now, if 'example.html' doesn't exist, you'll have a dead link for every subpage.
This is a bug, but fixing it is very tricky. Consider what would happen if example.mdwn were created: example/page.html and the rest of example/ would need to be updated to change the parentlink from a bare word to a link to the new page. Now if example.mdwn were removed again, they'd need to be updated again. So example/ depends on example. But it's even more tricky, because if example.mdwn is modified, we don't want to rebuild example/*!
ikiwiki doesn't have a way to represent this dependency and can't get one without a lot of new complex code being added.
Note that this code has now been added. In new terms, example/* has a presence dependency on example. So this bug is theoretically fixable now. --Joey
For now the best thing to do is to make sure that you always create example if you create example/foo. Which is probably a good idea anyway.
Note that this bug does not exist if the wiki is built with the "usedirs" option, since in that case, the parent link will link to a subdirectory, that will just be missing the index.html file, but still nicely usable. --Joey
http://www.gnu.org/software/hurd/hurd/translator/writing.html does not exist. Then, on http://www.gnu.org/software/hurd/hurd/translator/writing/example.html, in the parentlinks line, writing links to the top-level index file. It should rather not link anywhere at all. --tschwinge
So, the bug has changed behavior a bit. Rather than a broken link, we get a link to the toplevel page. This, FWIW, is because the template now uses this for each parentlink:
<a href="<TMPL_VAR URL>"><TMPL_VAR PAGE></a>/
Best workaround is still to enable usedirs. --Joey
Has bugs updating things if the bestlink of a page changes due to adding/removing a page. For example, if Foo/Bar links to "Baz", which is Foo/Baz, and Foo/Bar/Baz gets added, it will update the links in Foo/Bar to point to it, but will forget to update the backlinks in Foo/Baz.
The buggy code is in `refresh()`, when it determines what links, on what pages, have changed. It only looks at changed/added/deleted pages when doing this. But when Foo/Bar/Baz is added, Foo/Bar is not changed -- so the change in its backlinks is not noticed.

To fix this, it needs to consider, when rebuilding Foo/Bar for the changed links, what oldlinks Foo/Bar had. If one of the oldlinks linked to Foo/Baz, and now links to Foo/Bar/Baz, it could then rebuild Foo/Baz.
Problem is that in order to do that, it needs to be able to tell that the oldlinks linked to Foo/Baz. Which would mean either calculating all links before the scan phase, or keeping a copy of the backlinks from the last build and using that. The first option would be a lot of work for this minor issue; it might be less expensive to just rebuild all pages that Foo/Bar links to.
Keeping a copy of the backlinks has some merit. It could also be incrementally updated.
This old bug still exists as of 031d1bf5046ab77c796477a19967e7c0c512c417.
And if Foo/Bar/Baz is then removed, Foo/Bar gets a broken link, instead of changing back to linking to Foo/Baz.
This part was finally fixed by commit f1ddf4bd98821a597d8fa1532092f09d3d9b5483.
If a file in the srcdir is removed, exposing a file in the underlaydir, ikiwiki will not notice the removal, and the page from the underlay will not be built. (However, it will be if the wiki gets rebuilt.)
This problem is caused by ikiwiki storing only filenames relative to the srcdir or underlay, and mtime comparison not handling this case.
A related problem occurs if changing a site's theme with the theme plugin. The style.css of the old and new theme often has the same mtime, so ikiwiki does not update it w/o a rebuild. This is worked around in theme.pm with a special-purpose needsbuild hook. --Joey
Web browsers don't word-wrap lines in submitted text, which makes editing a page that someone wrote in a web browser annoying (`gqip` is the vim user's friend here). Is there any way to improve this?
See "using the web interface with a real text editor" on the tips page. --JoshTriplett
Would it be useful to allow a "max width" plugin, which would force long lines to be split on commit?
Please, no. That would wreak havoc on code blocks and arguments to preprocessor directives, and it would make bulleted lists and quoted blocks look bogus (because the subsequent lines would not match), among other problems. On the other hand, if you want to propose a piece of client-side JavaScript that looks at the active selection in a text area and word-wraps it, and have a plugin that adds a "Word-Wrap Selection" button to the editor, that seems fine. --JoshTriplett