There are around 20 different functions that take a UTF-8 sequence of
bytes and try to find the ordinal code point represented by them. It was
becoming clear that the existing tests in our suite were inadequate,
failing to catch glaring bugs. And UTF-8 handling matters: failures in
it have been exploited by attackers in various products over the years
for various nefarious purposes.
I set out to improve the tests, spending far too much time before
realizing that adding band-aids to the current scheme was not going to
work out. So I undertook rewriting the tests. This turned out to be far
harder and more time-consuming than I expected. And it still isn't ready to
go into blead. But along the way, I discovered that it was finding
corner case bugs that I would never have anticipated. This series of
commits fixes those, while simplifying the code and reducing redundancy.
The new test file needs clean-up, and probably ways to make it faster,
but it is finally far enough along that I believe it has caught most of
the bugs out there. So I'm submitting these now to get into v5.42. The
deadline for the test file is later in the development process.
This asserts against the flags to the call of this function being
contradictory, in that they ask both:
1) to warn and/or die if anything goes wrong; and
2) not to warn under any circumstances, but instead to return to the
caller objects describing what it would otherwise have warned about.
In a non-DEBUGGING build, the warn/die flags are ignored.
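As a rough, compilable sketch of the idea (the two flag names below are
invented stand-ins, not the spellings in the Perl source):

    #include <assert.h>

    /* Hypothetical flag bits standing in for the real ones. */
    #define FLAG_WARN_OR_DIE_ON_PROBLEMS   0x01
    #define FLAG_RETURN_MSG_OBJECTS        0x02

    static void
    check_flag_sanity(unsigned int flags)
    {
    #ifdef DEBUGGING
        /* Asking both to warn/die on problems and to stay silent while
         * handing back message objects is contradictory; trap it in
         * debugging builds. */
        assert(! (   (flags & FLAG_WARN_OR_DIE_ON_PROBLEMS)
                  && (flags & FLAG_RETURN_MSG_OBJECTS)));
    #else
        (void) flags;   /* the warn/die flags are simply ignored otherwise */
    #endif
    }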
Previous commits have allowed the beginning of several of the case
statements in this switch() to have the same code. This commit creates
a macro encapsulating that code and changes the cases to use it.
The macro continues the enclosing loop if no message needs to be
generated. This allows the removal of various conditional blocks. And
it means that these conditions don't break to the bottom of the switch()
if no message is needed.
Braces are needed in one case: so as not to run afoul of C++'s rules
against jumping across a variable initialization.
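A compilable sketch of the pattern follows; every name in it
(SKIP_IF_NO_MESSAGE_NEEDED, PROBLEM_A, and so on) is made up, and the
real macro in the Perl source is spelled and parameterized differently.

    #include <stdio.h>

    /* Hypothetical macro; because it expands to a bare 'continue', it can
     * only be used directly inside the loop that walks the problems. */
    #define SKIP_IF_NO_MESSAGE_NEEDED(categories)   \
        if (! (categories)) {                       \
            continue;   /* nothing to report */     \
        }

    enum problem { PROBLEM_A, PROBLEM_B };

    static void
    report_problems(const enum problem *problems, const int *categories,
                    int count)
    {
        for (int i = 0; i < count; i++) {
            switch (problems[i]) {
              case PROBLEM_A: {   /* braces so the declaration below doesn't
                                     run afoul of C++ rules on jumping across
                                     an initialization */
                const char * const what = "problem A";
                SKIP_IF_NO_MESSAGE_NEEDED(categories[i]);
                printf("message about %s\n", what);
                break;
              }
              case PROBLEM_B:
                SKIP_IF_NO_MESSAGE_NEEDED(categories[i]);
                printf("message about problem B\n");
                break;
            }
        }
    }

    int main(void)
    {
        enum problem p[] = { PROBLEM_A, PROBLEM_B };
        int cat[]        = { 0, 1 };   /* only PROBLEM_B gets a message */
        report_problems(p, cat, 2);
        return 0;
    }

Note that such a macro has to avoid the usual do { ... } while (0)
wrapper: inside one of those, the continue would terminate the wrapper's
own loop instead of advancing the enclosing one.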
This macro is used to hide the details of determining whether an
abnormal condition should raise a warning or not. But I found it more
convenient to expand the macro to return the packed warning
category(ies) when a warning should be raised. That information is
known inside the macro but was being discarded, and then had to be
recalculated. The new name, PACK_WARN, reflects its expanded purpose.
0 is returned if no warnings need be raised; and, importantly fixing a
bug in the old code, it returns < 0 if no warning should be raised
directly but an entry needs to be added to the AV returned by the
function (if the parameter requesting that has been passed in).
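Sketched as a consumer of that convention (the function name and its
parameter are made up for illustration; the real PACK_WARN macro and its
arguments differ):

    /* How a caller interprets the expanded PACK_WARN result:
     *   > 0   packed warning category(ies): raise the warning directly
     *     0   nothing needs to be done at all
     *   < 0   don't warn directly, but add an entry to the AV handed back
     *         to the caller (when the caller asked for one)            */
    static void
    act_on_pack_warn_result(int result)
    {
        if (result > 0) {
            /* warn in the packed category(ies) contained in 'result' */
        }
        else if (result < 0) {
            /* push a message entry onto the AV to be returned */
        }
        /* result == 0: stay silent and carry on */
    }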
But Encode, for which this form of the translation function was created
(and which may be its only user), depends on not getting a zero return.
So this has an override until Encode can be fixed.
I introduced the DIE_IF_MALFORMED flag in the previous development
release, making it subservient to the CHECK_ONLY flag. I have since
realized that the precedence should be reversed. If a developer
inadvertently passes both flags, it is better to honor the one saying
to quit than the one saying to ignore any problems.
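Roughly, and with stand-in flag names rather than the real spellings:

    /* Stand-in names; the real flags are spelled differently. */
    #define FLAG_DIE_IF_MALFORMED   0x01
    #define FLAG_CHECK_ONLY         0x02

    /* Dying now takes precedence over staying quiet. */
    static const char *
    malformation_disposition(unsigned int flags)
    {
        if (flags & FLAG_DIE_IF_MALFORMED)
            return "die";          /* even if FLAG_CHECK_ONLY was also passed */
        if (flags & FLAG_CHECK_ONLY)
            return "stay quiet";   /* caller just wants the failure return    */
        return "warn";
    }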
At this point in the code we know that the input sequence is shorter
than a full character and that it is the legal beginning of a sequence that
could evaluate to a code point that is of interest to the caller of this
function. It turns out that in some cases any filling out of the input
to a full character must lead to a code point that the caller is
interested in. That interest has been signalled by flags passed to this
function.
In the past, we filled out the sequence with the minimum legal
continuation byte, but that is wrong for some cases. This commit fixes
that.
Certain start bytes require the second byte to be higher than the
minimum, or else it is an overlong. Prior to this commit, we could
generate overlongs. This commit avoids that pitfall.
It also moves the complex analysis out of the comments in the code and
into this commit message, adding even more analysis.
There are four classes of code points that the caller can have signalled
to this function that it is interested in.
The noncharacter code point class always needs a full sequence to
determine, and the conditionals prevent the code this analysis is about
from being executed.
Use of Perl extended-UTF-8 is determinable from the first byte in the
input sequence, and that has already been determined.
Neither of the other two classes needs the sequence to be fully filled
out in order to determine whether a partial sequence could lead to it.
Consider first the sequences that evaluate to an above-Unicode code
point, charmingly named "supers" by Perl's poetic coders.
                 ASCII platforms       EBCDIC I8

    U+10FFFF:    \xF4\x8F\xBF\xBF      \xF9\xA1\xBF\xBF\xBF
    0x110000:    \xF4\x90\x80\x80      \xF9\xA2\xA0\xA0\xA0

    (Continuation byte range):
                 \x80 to \xbf          \xa0 to \xbf
On ASCII platforms, any start byte \xf3 or below can't be for a super,
and any non-overlong sequence whose start byte is \xf5 or above has to
be for a super. If the start byte is \xf4, we need the second byte to
resolve the ambiguity. But it takes just one, or possibly two, bytes to
make the determination. It's similar on EBCDIC, but with different
values.
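A minimal, runnable illustration of that point, assuming an ASCII
platform; this is not the Perl core code, just the arithmetic from the
table above:

    #include <stdio.h>

    /* Whether a partial sequence must lead to an above-Unicode ("super")
     * code point is decidable from at most the first two bytes.  Overlongs
     * (possible for start bytes \xF8 and up) are ignored here; they are
     * handled separately. */

    enum super_class { NOT_SUPER, IS_SUPER, NEED_SECOND_BYTE };

    static enum super_class
    classify_start_byte(unsigned char b0)
    {
        if (b0 <= 0xF3) return NOT_SUPER;        /* \xF3\xBF\xBF\xBF is only U+FFFFF */
        if (b0 == 0xF4) return NEED_SECOND_BYTE; /* straddles the U+10FFFF boundary  */
        return IS_SUPER;                         /* \xF5 and up (non-overlong)       */
    }

    static enum super_class
    classify_two_bytes(unsigned char b0, unsigned char b1)
    {
        enum super_class c = classify_start_byte(b0);
        if (c != NEED_SECOND_BYTE) return c;
        /* \xF4\x8F... tops out at U+10FFFF; \xF4\x90... starts at 0x110000 */
        return (b1 >= 0x90) ? IS_SUPER : NOT_SUPER;
    }

    int main(void)
    {
        printf("%d\n", classify_two_bytes(0xF4, 0x8F)); /* 0: still within Unicode */
        printf("%d\n", classify_two_bytes(0xF4, 0x90)); /* 1: above Unicode        */
        printf("%d\n", classify_start_byte(0xF5));      /* 1: always above         */
        return 0;
    }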
And a similar situation exists for the surrogates. The range of
non-overlong surrogates is:
                 ASCII platforms       EBCDIC I8

                 "\xed\xa0\x80"        "\xf1\xb6\xa0\xa0"
        to       "\xed\xbf\xbf"        "\xf1\xb7\xbf\xbf"
On both platforms, if we have the first two bytes, we can tell whether
it is a surrogate or not, as all legal continuations in the remaining
byte positions are for surrogates. If we have only one byte, we can't
tell, so we have to assume it isn't a surrogate.
Overlongs don't meaningfully change things. The shortest ASCII overlong
for the first surrogate is "\xf0\x8d\xa0\x80"
and for the highest surrogate it is "\xf0\x8d\xbf\xbf".
Note that only the first byte has been changed, into two bytes. All but
the first byte is the same for any overlong of any code point in either
ASCII or EBCDIC.
This means the algorithm for filling things out works for these two
classes in all cases. Note also that the upper end of the range
conveniently works out without any extra effort needed. The highest
surrogate corresponds to the highest continuation bytes. And the
highest super that fits in the platform will also use the highest
continuation bytes.
The start bytes that need the fix in this commit are the ones that
could be the start of overlongs, minus the lower ones which can
represent only code points smaller than any the caller can flag as
being "interesting" (U+D800 is the smallest such value), and minus
0xFF.
Hence 0xE0 can have overlongs, but it and its overlongs can only
represent code points lower than 0xD800. So we don't have to worry
about it or any smaller start byte.
But the reason 0xFF doesn't have to be considered is more complex.
It isn't the second byte in a sequence beginning with FF that needs to
be higher than the minimum continuation, but one further in. This
would make things harder except that any sequence beginning with 0xFF is
Perl-extended UTF-8, and has already been considered earlier in this
function. This code is only executed when 'must_be_super' is false.
'must_be_super' is set true if the sequence overflows or there is no
detectable overlong. By De Morgan's laws, this means that to get here,
it doesn't overflow and must be overlong. To know that it is overlong, we
must have seen enough bytes to get past the point where we need a higher
continuation byte to legally fill it out. So we can just fill the rest
with the minimum continuation.
(Note that the same reasoning would apply to 0xFE on ASCII platforms.
That is also used only by Perl-extended UTF-8, so would have been
considered earlier, and to get here we know it has to be overlong, and
so we've already seen enough bytes to not need to handle it specially.
But it fits into the same paradigm as the lower start bytes with just
the second byte needing to be higher, and there is no extra code
required to handle it besides including a case: for it in the switch().
This works in both ASCII and EBCDIC.)
This begins the process of fixing the currently problematic behavior in
handling UTF-8 that is for code points above the Unicode maximum.
The lowest of these are considered SUPERs, but if you go high enough, it
takes Perl's extended UTF-8 to represent them. Higher still, and the
extended UTF-8 can represent code points that don't fit in the current
platform's word size.
A complication is overlongs, where the representation for a seemingly
large code point can reduce down to something much smaller; even 0.
Such sequences are considered invalid by fiat from Unicode due to
successful hacker attacks using them. But Perl has traditionally
allowed XS code to accept them, via flags passed to the translation
functions. So it is important to get this right.
A sequence that overflows is, by necessity, using Perl's extended
UTF-8, as that kicks in below what fits in a 32-bit word. This commit
reverses the prior order of testing for overflow and for extended
UTF-8. Steps can be saved because we now test for Perl-extended first,
which is a lot more likely to occur than overflow.
The next few commits will fail these tests. I could squash them all
together, but that would hide the step-by-step progress of the changes.
This should allow future bisecting to not fail in this commit window.
This hoists the code duplicated in many of the case statements of a
switch() to a single copy before the switch(), with a single conditional
controlling it.
By checking before we go to the trouble to do something, rather than in
the middle of it, we can save some work.
The new test looks at the source UTF-8; the previous one looked at the
code point calculated from it.
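The commit doesn't spell out which condition is involved, so purely as
an illustration of the difference between the two styles, here is a
surrogate check done both ways on an ASCII platform:

    /* Old style: decide from the code point already calculated from the
     * input. */
    static int
    is_surrogate_from_uv(unsigned long uv)
    {
        return uv >= 0xD800 && uv <= 0xDFFF;
    }

    /* New style: decide from the source UTF-8 bytes themselves.
     * (Non-overlong surrogates are \xED\xA0\x80 .. \xED\xBF\xBF.) */
    static int
    is_surrogate_from_bytes(const unsigned char *s)
    {
        return s[0] == 0xED && s[1] >= 0xA0 && s[1] <= 0xBF;
    }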
The overlong cases more logically belong with the other conditions that
are rejected by default.
Future commits will simplify this to look much more like those other
conditions.
Prior to this commit, there were two different methods for doing this
check: one if no malformations had been found so far, and the other if
some had been. The latter method is valid in both cases, and is just as
fast as or faster than the first method. So change to always use it.
This sets the accumulated code point to UV_MAX when overflow is
detected. Much further below, the REPLACEMENT CHARACTER is returned
instead; but this makes sure that the code in between doesn't get
confused by an intermediate value.
Processing the overlong malformation needed to be last because it likely
would overwrite the calculated UV. Other cases also overwrote that.
This is unnecessarily brittle, as we can simply store the UV before
processing any cases, and then refer to that copy.
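Sketched (the variable names are stand-ins):

    const UV original_uv = uv;   /* copy made before any case runs */

    switch (malformation) {
        /* ... individual cases may clobber 'uv' while building their
         * messages, but every case can still refer to 'original_uv' ... */
    }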
This case: has two occurrences of the same statement, within two
different conditionals. But the case: doesn't get executed unless at
least one of those conditionals is known to be true. Therefore the
statement is guaranteed to be executed at least once; no need to have
two copies.
This function returns values in an AV instead of raising warnings. It
turns out that this test file gets some of it wrong. And this test file
turns out to be inadequate in other ways. I have rewritten the test
file, but there isn't time to get it in before the code-complete
deadline. Fixes here will end up being discarded.
In order to get the code that is actually part of the perl interpreter
into this release, I've skipped the test that would fail here, and made
sure it all passes the rewritten test.
- recognize the short form() as well as Perl_form()
- accept/ignore spaces between `Perl_croak(` and `aTHX_`
With this change, diag.t now recognizes several diagnostic messages that
went undetected previously (note the space before `aTHX_`):
- perlio.c

      Perl_croak( aTHX_
          "%s (%" UVuf ") does not match %s (%" UVuf ")",

      Perl_croak( aTHX_
          "%s (%" UVuf ") smaller than %s (%" UVuf ")",

- regcomp_trie.c

      Perl_croak( aTHX_ "error creating/fetching widecharmap entry for 0x%" UVXf, uvc );
      default: Perl_croak( aTHX_ "panic! In trie construction, unknown node type %u %s", (unsigned) flags, REGNODE_NAME(flags) );
      Perl_croak( aTHX_ "panic! In trie construction, no char mapping for %" IVdf, uvc );
This PR partially overlaps with #23017. Merging either will cause
conflicts in the other that will have to be resolved manually.
(In particular, if this PR is merged first, the diag.t changes from
#23017 can be dropped, as can some of the perldiag.pod additions. But
that PR also modifies the perlio.c messages, so their old forms added
here ("%s (%d) does not match %s (%d)", "%s (%d) smaller than %s (%d)")
will have to be deleted.)
There is no point in using a separate format string if the whole error
message is written right next to it. Not only does this change lead to
simpler code (passing one argument instead of two), it also exposes more
error messages to t/porting/diag.t, which relies on croak's first
argument to supply the message template.
Also make an equivalent change to S_open_script, which passes a constant
string via an `err` variable that is not used anywhere else. (It used
to be, but that code was deleted in commit 5bc7d00e3e.)
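For example (the message text here is invented; only the shape of the
change reflects what the commit does):

    /* Before: the template lives in a separate variable, so diag.t
     * cannot see it as croak's first argument. */
    const char * const err = "Can't frobnicate %s";
    /* ... */
    Perl_croak(aTHX_ err, name);

    /* After: the literal template is croak's first argument, which is
     * both simpler and visible to t/porting/diag.t. */
    Perl_croak(aTHX_ "Can't frobnicate %s", name);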
- Remove 'require 5.000'. In theory, this would give a nice runtime
error message when run under perl4; in practice, this file doesn't
even parse as perl4 due to 'use strict', 'our', and '->' method calls.
- Use numeric comparison with $], not string comparison. (In practice,
this would probably only start failing once we reach perl 10, but
still.)
- Don't repeatedly check $fc_available at runtime. Just define a
fallback fc() in terms of lc() if CORE::fc is not available.
- Add missing $key argument to sample code in SYNOPSIS. This fixes
<https://rt.cpan.org/Ticket/Display.html?id=97189>.