GHA: detect and warn for more English contractions

As we try to avoid them in curl documentation.

Closes #13940

parent 3841569ec8
commit ea12afd5ea

.github/scripts/badwords.txt (vendored): 14 lines changed

@@ -12,13 +12,24 @@ wild-card:wildcard
wild card:wildcard
i'm:I am
you've:You have
we've:we have
we're:we are
we'll:we will
we'd:we would
they've:They have
they're:They are
they'll:They will
they'd:They would
you've:you have
you'd:you would
you'll:you will
you're:you are
should've:should have
don't:do not
could've:could have
doesn't:does not
isn't:is not
aren't:are not
a html: an html
a http: an http
a ftp: an ftp
@@ -26,14 +37,13 @@ isn't:is not
internet\b=Internet
isation:ization
it's:it is
it'd:it would
there's:there is
[^.]\. And: Rewrite it somehow?
^(And|So|But) = Rewrite it somehow?
\. But: Rewrite it somehow?
\. So : Rewrite without "so" ?
dir :directory
you'd:you would
you'll:you will
can't:cannot
that's:that is
web page:webpage

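Most entries in badwords.txt pair a flagged phrase with a suggested replacement as `pattern:replacement`; a few use `=` for regex-style rules. The actual checking is done by `.github/scripts/badwords.pl`, which reads this file on stdin. The loop below is only a rough, hypothetical manual spot-check of the plain `pattern:replacement` entries, not a re-implementation of that script:

```
# hypothetical manual spot-check, not the real badwords.pl
grep ':' .github/scripts/badwords.txt | while IFS=: read -r bad good; do
  if grep -riqE -- "$bad" docs/*.md; then
    printf 'found "%s" -> suggested:%s\n' "$bad" "$good"
  fi
done
```
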
.github/workflows/badwords.yml (vendored): 2 lines changed

@@ -26,4 +26,4 @@ jobs:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4

- name: check
run: ./.github/scripts/badwords.pl < .github/scripts/badwords.txt docs/*.md docs/libcurl/*.md docs/libcurl/opts/*.md docs/cmdline-opts/*.md
run: ./.github/scripts/badwords.pl < .github/scripts/badwords.txt docs/*.md docs/libcurl/*.md docs/libcurl/opts/*.md docs/cmdline-opts/*.md docs/TODO docs/KNOWN_BUGS

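The only change here is that docs/TODO and docs/KNOWN_BUGS are now scanned as well. The same check can be run locally from the repository root (assuming a Perl interpreter is on the PATH), using the command from the workflow:

```
./.github/scripts/badwords.pl < .github/scripts/badwords.txt \
  docs/*.md docs/libcurl/*.md docs/libcurl/opts/*.md \
  docs/cmdline-opts/*.md docs/TODO docs/KNOWN_BUGS
```
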
docs/ECH.md: 70 lines changed

@@ -6,16 +6,15 @@ SPDX-License-Identifier: curl

# Building curl with HTTPS-RR and ECH support

We've added support for ECH to in this curl build. That can use HTTPS RRs
published in the DNS, if curl is using DoH, or else can accept the relevant
ECHConfigList values from the command line. That works with OpenSSL,
WolfSSL or boringssl as the TLS provider, depending on how you build curl.
We have added support for ECH to curl. It can use HTTPS RRs published in the
DNS if curl uses DoH, or else can accept the relevant ECHConfigList values
from the command line. This works with OpenSSL, WolfSSL or boringssl as the
TLS provider.

This feature is EXPERIMENTAL. DO NOT USE IN PRODUCTION.

This should however provide enough of a proof-of-concept to prompt an informed
discussion about a good path forward for ECH support in curl, when using
OpenSSL, or other TLS libraries, as those add ECH support.
discussion about a good path forward for ECH support in curl.

## OpenSSL Build

@@ -42,21 +41,21 @@ To build curl ECH-enabled, making use of the above:
autoreconf -fi
LDFLAGS="-Wl,-rpath,$HOME/code/openssl-local-inst/lib/" ./configure --with-ssl=$HOME/code/openssl-local-inst --enable-ech --enable-httpsrr
...lots of output...
WARNING: ech ECH HTTPSRR enabled but marked EXPERIMENTAL...
WARNING: ECH HTTPSRR enabled but marked EXPERIMENTAL...
make
...lots more output...
```

If you do not get that WARNING at the end of the ``configure`` command, then ECH
is not enabled, so go back some steps and re-do whatever needs re-doing:-) If you
want to debug curl then you should add ``--enable-debug`` to the ``configure``
command.
If you do not get that WARNING at the end of the ``configure`` command, then
ECH is not enabled, so go back some steps and re-do whatever needs re-doing:-)
If you want to debug curl then you should add ``--enable-debug`` to the
``configure`` command.

In a recent (2024-05-20) build on one machine, configure failed to find the
ECH-enabled SSL library, apparently due to the existence of
``$HOME/code/openssl-local-inst/lib/pkgconfig`` as a directory containing
various settings. Deleting that directory worked around the problem but may not
be the best solution.
various settings. Deleting that directory worked around the problem but may
not be the best solution.

## Using ECH and DoH

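The body of this section is not part of the diff, but as the text above explains, an ECH-enabled build can either fetch ECHConfigList values itself via DoH or take them on the command line. A sketch of the DoH-based variant might look like the following; the exact ``--ech`` value and the defo.ie test URL are assumptions based on the test servers mentioned later in this file, not something this commit defines:

```
# hypothetical test run, assuming a curl built with --enable-ech --enable-httpsrr
curl --ech true \
     --doh-url https://one.one.one.one/dns-query \
     https://defo.ie/ech-check.php
```
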
@@ -216,7 +215,7 @@ or IP address hints.
- ``USE_ECH`` protects ECH specific code.

There are various obvious code blocks for handling the new command line
arguments which aren't described here, but should be fairly clear.
arguments which are not described here, but should be fairly clear.

As shown in the ``configure`` usage above, there are ``configure.ac`` changes
that allow separately dis/enabling ``USE_HTTPSRR`` and ``USE_ECH``. If ``USE_ECH``
@@ -270,7 +269,7 @@ curl might handle those values when present in the DNS.
("aliasMode") - the current code takes no account of that at all. One could
envisage implementing the equivalent of following CNAMEs in such cases, but
it is not clear if that'd be a good plan. (As of now, chrome browsers do not seem
to have any support for that "aliasMode" and we've not checked Firefox for that
to have any support for that "aliasMode" and we have not checked Firefox for that
recently.)

- We have not investigated what related changes or additions might be needed
@@ -282,7 +281,7 @@ doing so would seem to require re-implementing an ECH-enabled server as part
of the curl test harness. For now, we have a ``./tests/ech_test.sh`` script
that attempts ECH with various test servers and with many combinations of the
allowed command line options. While that is a useful test and has found issues,
it is not comprehensive and we're not (as yet) sure what would be the right
it is not comprehensive and we are not (as yet) sure what would be the right
level of coverage. When running that script you should not have a
``$HOME/.curlrc`` file that affects ECH or some of the negative tests could
produce spurious failures.
@@ -331,7 +330,7 @@ Then:
autoreconf -fi
LDFLAGS="-Wl,-rpath,$HOME/code/boringssl/inst/lib" ./configure --with-ssl=$HOME/code/boringssl/inst --enable-ech --enable-httpsrr
...lots of output...
WARNING: ech ECH HTTPSRR enabled but marked EXPERIMENTAL. Use with caution!
WARNING: ECH HTTPSRR enabled but marked EXPERIMENTAL. Use with caution!
make
```

@@ -384,13 +383,12 @@ There are some known issues with the ECH implementation in WolfSSL:

There are what seem like oddball differences:

- The DoH URL in ``$HOME/.curlrc`` can use "1.1.1.1" for OpenSSL but has to be
"one.one.one.one" for WolfSSL. The latter works for both, so OK, we'll change
to that.
- The DoH URL in ``$HOME/.curlrc`` can use `1.1.1.1` for OpenSSL but has to be
`one.one.one.one` for WolfSSL. The latter works for both, so OK, we use that.
- There seems to be some difference in CA databases too - the WolfSSL version
does not like ``defo.ie``, whereas the system and OpenSSL ones do. We can ignore
that for our purposes via ``--insecure``/``-k`` but would need to fix for a
real setup. (Browsers do like those certificates though.)
does not like ``defo.ie``, whereas the system and OpenSSL ones do. We can
ignore that for our purposes via ``--insecure``/``-k`` but would need to fix
for a real setup. (Browsers do like those certificates though.)

Then there are some functional code changes:

@@ -418,22 +416,22 @@ on localhost:53, so would fit this use-case. That said, it is unclear if
this is a niche that is worth trying to address. (The author is just as happy to
let curl use DoH to talk to the same public recursive that stubby might use:-)

Assuming for the moment this is a use-case we'd like to support, then
if DoH is not being used by curl, it is not clear at this time how to provide
Assuming for the moment this is a use-case we would like to support, then if
DoH is not being used by curl, it is not clear at this time how to provide
support for ECH. One option would seem to be to extend the ``c-ares`` library
to support HTTPS RRs, but in that case it is not now clear whether such changes
would be attractive to the ``c-ares`` maintainers, nor whether the "tag=value"
extensibility inherent in the HTTPS/SVCB specification is a good match for the
``c-ares`` approach of defining structures specific to decoded answers for each
supported RRtype. We're also not sure how many downstream curl deployments
actually make use of the ``c-ares`` library, which would affect the utility of
such changes. Another option might be to consider using some other generic DNS
library that does support HTTPS RRs, but it is unclear if such a library could
or would be used by all or almost all curl builds and downstream releases of
curl.
to support HTTPS RRs, but in that case it is not now clear whether such
changes would be attractive to the ``c-ares`` maintainers, nor whether the
"tag=value" extensibility inherent in the HTTPS/SVCB specification is a good
match for the ``c-ares`` approach of defining structures specific to decoded
answers for each supported RRtype. We are also not sure how many downstream
curl deployments actually make use of the ``c-ares`` library, which would
affect the utility of such changes. Another option might be to consider using
some other generic DNS library that does support HTTPS RRs, but it is unclear
if such a library could or would be used by all or almost all curl builds and
downstream releases of curl.

Our current conclusion is that doing the above is likely best left until we
have some experience with the "using DoH" approach, so we're going to punt on
have some experience with the "using DoH" approach, so we are going to punt on
this for now.

### Debugging

docs/KNOWN_BUGS

@@ -18,7 +18,7 @@ problems may have been fixed or changed somewhat since this was written.
2. TLS
2.1 IMAPS connection fails with rustls error
2.3 Unable to use PKCS12 certificate with Secure Transport
2.4 Secure Transport will not import PKCS#12 client certificates without a password
2.4 Secure Transport does not import PKCS#12 client certificates without a password
2.5 Client cert handling with Issuer DN differs between backends
2.7 Client cert (MTLS) issues with Schannel
2.11 Schannel TLS 1.2 handshake bug in old Windows versions
@@ -60,7 +60,7 @@ problems may have been fixed or changed somewhat since this was written.
6.13 Negotiate against Hadoop HDFS

7. FTP
7.1 FTP upload fails if remembered dir is deleted
7.1 FTP upload fails if remembered directory is deleted
7.2 Implicit FTPS upload timeout
7.3 FTP with NOBODY and FAILONERROR
7.4 FTP with ACCT
@@ -145,7 +145,7 @@ problems may have been fixed or changed somewhat since this was written.

See https://github.com/curl/curl/issues/5403

2.4 Secure Transport will not import PKCS#12 client certificates without a password
2.4 Secure Transport does not import PKCS#12 client certificates without a password

libcurl calls SecPKCS12Import with the PKCS#12 client certificate, but that
function rejects certificates that do not have a password.
@@ -226,9 +226,9 @@ problems may have been fixed or changed somewhat since this was written.

5.2 curl-config --libs contains private details

"curl-config --libs" will include details set in LDFLAGS when configure is
run that might be needed only for building libcurl. Further, curl-config
--cflags suffers from the same effects with CFLAGS/CPPFLAGS.
"curl-config --libs" includes details set in LDFLAGS when configure is run
that might be needed only for building libcurl. Further, curl-config --cflags
suffers from the same effects with CFLAGS/CPPFLAGS.

5.3 building for old macOS fails with gcc

@@ -243,8 +243,8 @@ problems may have been fixed or changed somewhat since this was written.
it can only be encoded properly in the Unicode character set. Windows uses
UTF-16 encoding for Unicode and stores it in wide characters, however curl
and libcurl are not equipped for that at the moment except when built with
_UNICODE and UNICODE defined. And, except for Cygwin, Windows cannot use UTF-8
as a locale.
_UNICODE and UNICODE defined. Except for Cygwin, Windows cannot use UTF-8 as
a locale.

https://curl.se/bug/?i=345
https://curl.se/bug/?i=731
@@ -303,7 +303,7 @@ problems may have been fixed or changed somewhat since this was written.

6.1 NTLM authentication and unicode

NTLM authentication involving unicode user name or password only works
NTLM authentication involving unicode username or password only works
properly if built with UNICODE defined together with the Schannel
backend. The original problem was mentioned in:
https://curl.se/mail/lib-2009-10/0024.html
@@ -321,8 +321,8 @@ problems may have been fixed or changed somewhat since this was written.
6.3 NTLM in system context uses wrong name

NTLM authentication using SSPI (on Windows) when (lib)curl is running in
"system context" will make it use wrong(?) user name - at least when compared
to what winhttp does. See https://curl.se/bug/view.cgi?id=535
"system context" makes it use wrong(?) username - at least when compared to
what winhttp does. See https://curl.se/bug/view.cgi?id=535

6.5 NTLM does not support password with § character

@@ -331,11 +331,11 @@ problems may have been fixed or changed somewhat since this was written.
6.6 libcurl can fail to try alternatives with --proxy-any

When connecting via a proxy using --proxy-any, a failure to establish an
authentication will cause libcurl to abort trying other options if the
failed method has a higher preference than the alternatives. As an example,
authentication causes libcurl to abort trying other options if the failed
method has a higher preference than the alternatives. As an example,
--proxy-any against a proxy which advertises Negotiate and NTLM, but which
fails to set up Kerberos authentication will not proceed to try authentication
using NTLM.
fails to set up Kerberos authentication does not proceed to try
authentication using NTLM.

https://github.com/curl/curl/issues/876

@@ -378,7 +378,7 @@ problems may have been fixed or changed somewhat since this was written.

7. FTP

7.1 FTP upload fails if remembered dir is deleted
7.1 FTP upload fails if remembered directory is deleted

curl's FTP code assumes that the directory it entered in a previous transfer
still exists when it comes back to do a second transfer, and does not respond
@@ -399,17 +399,16 @@ problems may have been fixed or changed somewhat since this was written.
7.4 FTP with ACCT

When doing an operation over FTP that requires the ACCT command (but not when
logging in), the operation will fail since libcurl does not detect this and
thus fails to issue the correct command:
https://curl.se/bug/view.cgi?id=635
logging in), the operation fails since libcurl does not detect this and thus
fails to issue the correct command: https://curl.se/bug/view.cgi?id=635

7.12 FTPS server compatibility on Windows with Schannel

FTPS is not widely used with the Schannel TLS backend and so there may be more
bugs compared to other TLS backends such as OpenSSL. In the past users have
reported hanging and failed connections. It's very likely some changes to curl
since then fixed the issues. None of the reported issues can be reproduced any
longer.
FTPS is not widely used with the Schannel TLS backend and so there may be
more bugs compared to other TLS backends such as OpenSSL. In the past users
have reported hanging and failed connections. It is likely some changes to
curl since then fixed the issues. None of the reported issues can be
reproduced any longer.

If you encounter an issue connecting to your server via FTPS with the latest
curl and Schannel then please search for open issues or file a new issue.
@@ -444,7 +443,7 @@ problems may have been fixed or changed somewhat since this was written.

In the SSH_SFTP_INIT state for libssh, the ssh session working mode is set to
blocking mode. If the network is suddenly disconnected during sftp
transmission, curl will be stuck, even if curl is configured with a timeout.
transmission, curl is stuck, even if curl is configured with a timeout.

https://github.com/curl/curl/issues/8632

@@ -472,8 +471,8 @@ problems may have been fixed or changed somewhat since this was written.
11.2 error buffer not set if connection to multiple addresses fails

If you ask libcurl to resolve a hostname like example.com to IPv6 addresses
only. But you only have IPv4 connectivity. libcurl will correctly fail with
CURLE_COULDNT_CONNECT. But the error buffer set by CURLOPT_ERRORBUFFER
when you only have IPv4 connectivity. libcurl fails with
CURLE_COULDNT_CONNECT, but the error buffer set by CURLOPT_ERRORBUFFER
remains empty. Issue: https://github.com/curl/curl/issues/544

11.4 HTTP test server 'connection-monitor' problems
@@ -532,8 +531,8 @@ problems may have been fixed or changed somewhat since this was written.
13.2 Trying local ports fails on Windows

This makes '--local-port [range]' to not work since curl cannot properly
detect if a port is already in use, so it will try the first port, use that and
then subsequently fail anyway if that was actually in use.
detect if a port is already in use, so it tries the first port, uses that and
then subsequently fails anyway if that was actually in use.

https://github.com/curl/curl/issues/8112

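For reference, `--local-port` accepts a single port or a range, so the option that entry refers to is used along these lines (the URL is only a placeholder):

```
# ask curl to pick its local (source) port from this range;
# the bug above concerns how the range is probed on Windows
curl --local-port 8000-8010 https://example.com/
```
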
@@ -607,9 +606,9 @@ problems may have been fixed or changed somewhat since this was written.
17.2 HTTP/2 frames while in the connection pool kill reuse

If the server sends HTTP/2 frames (like for example an HTTP/2 PING frame) to
curl while the connection is held in curl's connection pool, the socket will
be found readable when considered for reuse and that makes curl think it is
dead and then it will be closed and a new connection gets created instead.
curl while the connection is held in curl's connection pool, the socket is
found readable when considered for reuse and that makes curl think it is dead
and then it is closed and a new connection gets created instead.

This is *best* fixed by adding monitoring to connections while they are kept
in the pool so that pings can be responded to appropriately.
@@ -638,7 +637,7 @@ problems may have been fixed or changed somewhat since this was written.
19.1 Some methods do not support response bodies

The RTSP implementation is written to assume that a number of RTSP methods
will always get responses without bodies, even though there seems to be no
always get responses without bodies, even though there seems to be no
indication in the RFC that this is always the case.

https://github.com/curl/curl/issues/12414

docs/TODO: 116 lines changed

@@ -62,7 +62,7 @@
4. FTP
4.1 HOST
4.2 Alter passive/active on failure and retry
4.4 Support CURLOPT_PREQUOTE for dir listings too
4.4 Support CURLOPT_PREQUOTE for directory listings
4.5 ASCII support
4.6 GSSAPI via Windows SSPI
4.7 STAT for LIST without data connection
@@ -159,16 +159,16 @@
18.14 --dry-run
18.15 --retry should resume
18.16 send only part of --data
18.17 consider file name from the redirected URL with -O ?
18.17 consider filename from the redirected URL with -O ?
18.18 retry on network is unreachable
18.19 expand ~/ in config files
18.20 host name sections in config files
18.20 hostname sections in config files
18.21 retry on the redirected-to URL
18.23 Set the modification date on an uploaded file
18.24 Use multiple parallel transfers for a single download
18.25 Prevent terminal injection when writing to terminal
18.26 Custom progress meter update interval
18.27 -J and -O with %-encoded file names
18.27 -J and -O with %-encoded filenames
18.28 -J with -C -
18.29 --retry and transfer timeouts

@@ -192,7 +192,7 @@
21.2 Support MQTTS

22. TFTP
22.1 TFTP doesn't convert LF to CRLF for mode=netascii
22.1 TFTP does not convert LF to CRLF for mode=netascii

==============================================================================

@@ -250,7 +250,7 @@

This option allows applications to set a replacement IP address for a given
host + port pair. Consider making support for providing a replacement address
for the host name on all port numbers.
for the hostname on all port numbers.

See https://github.com/curl/curl/issues/1264

@@ -291,11 +291,12 @@

1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION

curl will create most sockets via the CURLOPT_OPENSOCKETFUNCTION callback and
curl creates most sockets via the CURLOPT_OPENSOCKETFUNCTION callback and
close them with the CURLOPT_CLOSESOCKETFUNCTION callback. However, c-ares
does not use those functions and instead opens and closes the sockets
itself. This means that when curl passes the c-ares socket to the
CURLMOPT_SOCKETFUNCTION it is not owned by the application like other sockets.
does not use those functions and instead opens and closes the sockets itself.
This means that when curl passes the c-ares socket to the
CURLMOPT_SOCKETFUNCTION it is not owned by the application like other
sockets.

See https://github.com/curl/curl/issues/2734

@@ -481,8 +482,8 @@
2.4 Split connect and authentication process

The multi interface treats the authentication process as part of the connect
phase. As such any failures during authentication will not trigger the relevant
QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.
phase. As such any failures during authentication do not trigger the
relevant QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.

2.5 Edge-triggered sockets should work

@@ -532,7 +533,7 @@

4.1 HOST

HOST is a command for a client to tell which host name to use, to offer FTP
HOST is a command for a client to tell which hostname to use, to offer FTP
servers name-based virtual hosting:

https://datatracker.ietf.org/doc/html/rfc7151
@@ -544,7 +545,7 @@
connection. There could be a way to fallback to an active connection (and
vice versa). https://curl.se/bug/feature.cgi?id=1754793

4.4 Support CURLOPT_PREQUOTE for dir listings too
4.4 Support CURLOPT_PREQUOTE for directory listings

The lack of support is mostly an oversight and requires the FTP state machine
to get updated to get fixed.
@@ -585,10 +586,10 @@

5.1 Provide the error body from a CONNECT response

When curl receives a body response from a CONNECT request to a proxy, it will
always just read and ignore it. It would make some users happy if curl
instead optionally would be able to make that responsible available. Via a new
callback? Through some other means?
When curl receives a body response from a CONNECT request to a proxy, it
always just reads and ignores it. It would make some users happy if curl
instead optionally would be able to make that response available. Via a
new callback? Through some other means?

See https://github.com/curl/curl/issues/9513

@@ -615,7 +616,7 @@
5.4 Allow SAN names in HTTP/2 server push

curl only allows HTTP/2 push promise if the provided :authority header value
exactly matches the host name given in the URL. It could be extended to allow
exactly matches the hostname given in the URL. It could be extended to allow
any name that would match the Subject Alternative Names in the server's TLS
certificate.

@@ -660,7 +661,7 @@
6.2 ditch telnet-specific select

Make the telnet support's network select() loop go away and merge the code
into the main transfer loop. Until this is done, the multi interface will not
into the main transfer loop. Until this is done, the multi interface does not
work for telnet.

6.3 feature negotiation debug data
@@ -919,10 +920,10 @@

15.4 Add option to allow abrupt server closure

libcurl w/schannel will error without a known termination point from the
server (such as length of transfer, or SSL "close notify" alert) to prevent
against a truncation attack. Really old servers may neglect to send any
termination point. An option could be added to ignore such abrupt closures.
libcurl w/schannel errors without a known termination point from the server
(such as length of transfer, or SSL "close notify" alert) to prevent against
a truncation attack. Really old servers may neglect to send any termination
point. An option could be added to ignore such abrupt closures.

https://github.com/curl/curl/issues/4427

@@ -948,7 +949,7 @@
SSH is a perfectly fine multiplexed protocol which would allow libcurl to do
multiple parallel transfers from the same host using the same connection,
much in the same spirit as HTTP/2 does. libcurl however does not take
advantage of that ability but will instead always create a new connection for
advantage of that ability but does instead always create a new connection for
new transfers even if an existing connection already exists to the host.

To fix this, libcurl would have to detect an existing connection and "attach"
@@ -958,7 +959,7 @@

The SFTP code in libcurl checks the file size *before* a transfer starts and
then proceeds to transfer exactly that amount of data. If the remote file
grows while the transfer is in progress libcurl will not notice and will not
grows while the transfer is in progress libcurl does not notice and does not
adapt. The OpenSSH SFTP command line tool does and libcurl could also just
attempt to download more to see if there is more to get...

@@ -1026,7 +1027,7 @@
18.6 Option to make -Z merge lined based outputs on stdout

When a user requests multiple lined based files using -Z and sends them to
stdout, curl will not "merge" and send complete lines fine but may send
stdout, curl does not "merge" and send complete lines fine but may send
partial lines from several sources.

https://github.com/curl/curl/issues/5175
@@ -1055,7 +1056,7 @@
backed up from those that are either not ready or have not changed.

Downloads in progress are neither ready to be backed up, nor should they be
opened by a different process. Only after a download has been completed it's
opened by a different process. Only after a download has been completed it is
sensible to include it in any integer snapshot or backup of the system.

See https://github.com/curl/curl/issues/3354
@@ -1101,22 +1102,22 @@

See https://github.com/curl/curl/issues/1200

18.17 consider file name from the redirected URL with -O ?
18.17 consider filename from the redirected URL with -O ?

When a user gives a URL and uses -O, and curl follows a redirect to a new
URL, the file name is not extracted and used from the newly redirected-to URL
even if the new URL may have a much more sensible file name.
URL, the filename is not extracted and used from the newly redirected-to URL
even if the new URL may have a much more sensible filename.

This is clearly documented and helps for security since there is no surprise
to users which file name that might get overwritten. But maybe a new option
to users which filename that might get overwritten, but maybe a new option
could allow for this or maybe -J should imply such a treatment as well as -J
already allows for the server to decide what file name to use so it already
already allows for the server to decide what filename to use so it already
provides the "may overwrite any file" risk.

This is extra tricky if the original URL has no file name part at all since
then the current code path will error out with an error message, and we cannot
*know* already at that point if curl will be redirected to a URL that has a
file name...
This is extra tricky if the original URL has no filename part at all since
then the current code path does error out with an error message, and we
cannot *know* already at that point if curl is redirected to a URL that has a
filename...

See https://github.com/curl/curl/issues/1241

@@ -1138,10 +1139,10 @@

See https://github.com/curl/curl/issues/2317

18.20 host name sections in config files
18.20 hostname sections in config files

config files would be more powerful if they could set different
configurations depending on used URLs, host name or possibly origin. Then a
configurations depending on used URLs, hostname or possibly origin. Then a
default .curlrc could use a specific user-agent only when doing requests against
a certain site.

@@ -1179,7 +1180,7 @@
- If splitting up the work improves the transfer rate, it could then be done
again. Then again, etc up to a limit.

This way, if transfer B fails (because Range: is not supported) it will let
This way, if transfer B fails (because Range: is not supported) it lets
transfer A remain the single one. N and M could be set to some sensible
defaults.

@@ -1200,9 +1201,9 @@
progressing and has not stuck, but they may not appreciate the
many-times-a-second frequency curl can end up doing it with now.

18.27 -J and -O with %-encoded file names
18.27 -J and -O with %-encoded filenames

-J/--remote-header-name does not decode %-encoded file names. RFC 6266 details
-J/--remote-header-name does not decode %-encoded filenames. RFC 6266 details
how it should be done. The can of worms is basically that we have no charset
handling in curl and ascii >=128 is a challenge for us. Not to mention that
decoding also means that we need to check for nastiness that is attempted,
@@ -1213,15 +1214,15 @@
-O also does not decode %-encoded names, and while it has even less
information about the charset involved the process is similar to the -J case.

Note that we will not add decoding to -O without the user asking for it with
some other means as well, since -O has always been documented to use the name
exactly as specified in the URL.
Note that we do not decode -O without the user asking for it with some other
means, since -O has always been documented to use the name exactly as
specified in the URL.

18.28 -J with -C -

When using -J (with -O), automatically resumed downloading together with "-C
-" fails. Without -J the same command line works. This happens because the
resume logic is worked out before the target file name (and thus its
resume logic is worked out before the target filename (and thus its
pre-transfer size) has been figured out. This can be improved.

https://curl.se/bug/view.cgi?id=1169
@@ -1230,8 +1231,8 @@

If using --retry and the transfer times out (possibly due to using -m or
-y/-Y) the next attempt does not resume the transfer properly from what was
downloaded in the previous attempt but will truncate and restart at the
original position where it was at before the previous failed attempt. See
downloaded in the previous attempt but truncates and restarts at the original
position where it was at before the previous failed attempt. See
https://curl.se/mail/lib-2008-01/0080.html and Mandriva bug report
https://qa.mandriva.com/show_bug.cgi?id=22565

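To make the scenario in 18.29 concrete, the combination in question is a per-attempt timeout together with retries, roughly like the command below (the URL is a placeholder); per the entry above, a retried attempt currently restarts from the original position instead of resuming:

```
# -m sets a transfer timeout, --retry asks curl to try again after a failure
curl --retry 3 -m 30 -O https://example.com/large-file.bin
```
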
@@ -1251,12 +1252,13 @@
curl.

19.3 Do not use GNU libtool on OpenBSD
When compiling curl on OpenBSD with "--enable-debug" it will give linking
errors when you use GNU libtool. This can be fixed by using the libtool
provided by OpenBSD itself. However for this the user always needs to invoke
make with "LIBTOOL=/usr/bin/libtool". It would be nice if the script could
have some magic to detect if this system is an OpenBSD host and then use the
OpenBSD libtool instead.

When compiling curl on OpenBSD with "--enable-debug" it gives linking errors
when you use GNU libtool. This can be fixed by using the libtool provided by
OpenBSD itself. However for this the user always needs to invoke make with
"LIBTOOL=/usr/bin/libtool". It would be nice if the script could have some
magic to detect if this system is an OpenBSD host and then use the OpenBSD
libtool instead.

See https://github.com/curl/curl/issues/5862

@@ -1306,8 +1308,8 @@
A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
https://github.com/abarth/http-state/tree/master/tests

It'd be really awesome if someone would write a script/setup that would run
curl with that test suite and detect deviances. Ideally, that would even be
It would be good if someone would write a script/setup that would run curl
with that test suite and detect deviances. Ideally, that would even be
incorporated into our regular test suite.

20.8 Run web-platform-tests URL tests
@@ -1330,7 +1332,7 @@

22. TFTP

22.1 TFTP doesn't convert LF to CRLF for mode=netascii
22.1 TFTP does not convert LF to CRLF for mode=netascii

RFC 3617 defines that a TFTP transfer can be done using "netascii"
mode. curl does not support extracting that mode from the URL nor does it treat