<!--
Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.

SPDX-License-Identifier: curl
-->

# TODO intro

Things to do in project curl. Please tell us what you think, contribute and
send us patches that improve things.

Be aware that these are things that we could do, or have once been considered
things we could do. If you want to work on any of these areas, please consider
bringing it up for discussion first on the mailing list so that we all agree
it is still a good idea for the project.

All bugs documented in the [known_bugs
document](https://curl.se/docs/knownbugs.html) are subject for fixing.

# libcurl

## Consult `%APPDATA%` also for `.netrc`

`%APPDATA%\.netrc` is not considered when running on Windows. Should it not
be?

See [curl issue 4016](https://github.com/curl/curl/issues/4016)

## `struct lifreq`

Use `struct lifreq` and `SIOCGLIFADDR` instead of `struct ifreq` and
`SIOCGIFADDR` on newer Solaris versions as they claim the latter is obsolete,
in order to properly support IPv6 addresses for network interfaces.

## alt-svc sharing

The share interface could benefit from allowing the alt-svc cache to be
shared between easy handles.

See [curl issue 4476](https://github.com/curl/curl/issues/4476)

The share interface offers CURL_LOCK_DATA_CONNECT to have multiple easy
handles share a connection cache, but due to how connections are used they
are still not thread-safe when used shared.

See [curl issue 4915](https://github.com/curl/curl/issues/4915) and lib1541.c

The share interface offers CURL_LOCK_DATA_HSTS to have multiple easy handles
share an HSTS cache, but this is not thread-safe.

## thread-safe sharing

Using the share interface users can share some data between easy handles, but
several of the sharing options are documented as neither safe nor supported
to share between multiple concurrent threads. Fixing this would enable more
users to share data in more powerful ways.

## updated DNS server while running

If `/etc/resolv.conf` gets updated while a program using libcurl is running,
it may cause name resolves to fail unless `res_init()` is called. We should
consider calling `res_init()` + retry once unconditionally on all name
resolve failures to mitigate against this. Firefox works like that. Note that
Windows does not have `res_init()` or an alternative.

[curl issue 2251](https://github.com/curl/curl/issues/2251)
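
The `res_init()` + retry-once idea could be sketched like this outside of
libcurl (POSIX only; on some platforms `res_init()` lives in `libresolv`, so
linking may need `-lresolv`):

```c
#include <stdio.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netdb.h>
#include <resolv.h>
#include <string.h>

static int resolve(const char *host, struct addrinfo **res)
{
  struct addrinfo hints;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;
  hints.ai_socktype = SOCK_STREAM;
  return getaddrinfo(host, NULL, &hints, res);
}

int main(void)
{
  struct addrinfo *res = NULL;
  int rc = resolve("localhost", &res);
  if(rc != 0) {
    /* re-read /etc/resolv.conf, then retry exactly once */
    res_init();
    rc = resolve("localhost", &res);
  }
  if(rc == 0) {
    printf("resolved\n");
    freeaddrinfo(res);
  }
  else
    printf("failed: %s\n", gai_strerror(rc));
  return 0;
}
```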

## c-ares and CURLOPT_OPENSOCKETFUNCTION

curl creates most sockets via the CURLOPT_OPENSOCKETFUNCTION callback and
closes them with the CURLOPT_CLOSESOCKETFUNCTION callback. However, c-ares
does not use those functions and instead opens and closes the sockets itself.
This means that when curl passes the c-ares socket to the
CURLMOPT_SOCKETFUNCTION it is not owned by the application like other
sockets.

See [curl issue 2734](https://github.com/curl/curl/issues/2734)

## Monitor connections in the connection pool

libcurl's connection cache or pool holds a number of open connections for the
purpose of possible subsequent connection reuse. It may contain anywhere from
a few to a significant number of connections. Currently, libcurl leaves all
connections as they are, and only when a connection is iterated over for
matching or reuse purposes is it verified that it is still alive.

Those connections may get closed by the server side for idleness, or they may
get an HTTP/2 ping from the peer to verify that they are still alive. By
adding monitoring of the connections while in the pool, libcurl can detect
dead connections (and close them) better and earlier, and it can handle
HTTP/2 pings to keep such connections alive even when not actively doing
transfers on them.

## Try to URL encode given URL

Given a URL that for example contains spaces, libcurl could have an option
that would try somewhat harder than it does now and convert spaces to %20,
and perhaps URL encode byte values over 128, etc (basically do what the
redirect following code already does).

[curl issue 514](https://github.com/curl/curl/issues/514)

## Add support for IRIs

IRIs (RFC 3987) allow localized, non-ASCII, names in the URL. To properly
support this, curl/libcurl would need to translate/encode the given input
from the input string encoding into percent encoded output "over the wire".

To make that work smoothly for curl users even on Windows, curl would
probably need to be able to convert from several input encodings.

## try next proxy if one does not work

Allow an application to specify a list of proxies to try, and on failing to
connect to the first, go on and try the next instead until the list is
exhausted. Browsers support this feature at least when they specify proxies
using `PAC`.

[curl issue 896](https://github.com/curl/curl/issues/896)

## provide timing info for each redirect

curl and libcurl provide timing information via a set of different
time-stamps (CURLINFO_*_TIME). When curl is following redirects, those
returned time values are the accumulated sums. An improvement could be to
offer separate timings for each redirect.

[curl issue 6743](https://github.com/curl/curl/issues/6743)

## CURLINFO_PAUSE_STATE

Return information about the transfer's current pause state, in both
directions. See [curl issue 2588](https://github.com/curl/curl/issues/2588)

## Expose tried IP addresses that failed

When libcurl fails to connect to a host, it could offer the application the
addresses that were used in the attempt. Source + destination IP, source +
destination port and protocol (UDP or TCP) for each failure. Possibly as a
callback. Perhaps also provide the failure reason.

[curl issue 2126](https://github.com/curl/curl/issues/2126)

## erase secrets from heap/stack after use

Introducing a concept and system to erase secrets from memory after use
could help mitigate and lessen the impact of (future) security problems etc.
However: most secrets are passed to libcurl as clear text from the
application and then clearing them within the library adds nothing...

[curl issue 7268](https://github.com/curl/curl/issues/7268)

## make DoH inherit more transfer properties

Some options are not inherited because they are not relevant for the DoH SSL
connections, or inheriting the option may result in unexpected behavior. For
example the user's debug function callback is not inherited because it would
be unexpected for internal handles (i.e. DoH handles) to be passed to that
callback.

If an option is not inherited then it is not possible to set it separately
for DoH without a DoH-specific option. For example:
`CURLOPT_DOH_SSL_VERIFYHOST`, `CURLOPT_DOH_SSL_VERIFYPEER` and
`CURLOPT_DOH_SSL_VERIFYSTATUS`.

See [curl issue 6605](https://github.com/curl/curl/issues/6605)

# libcurl - multi interface

## More non-blocking

Make sure we do not ever loop because of non-blocking sockets returning
`EWOULDBLOCK` or similar. Blocking cases include:

- Name resolves on non-Windows unless c-ares or the threaded resolver is used.
- The threaded resolver may block on cleanup:
  [curl issue 4852](https://github.com/curl/curl/issues/4852)
- `file://` transfers
- TELNET transfers
- GSSAPI authentication for FTP transfers
- The "DONE" operation (post transfer protocol-specific actions) for the
  protocols SFTP, SMTP, FTP. Fixing `multi_done()` for this is a worthy task.
- `curl_multi_remove_handle()` for any of the above.
- Calling `curl_ws_send()` from a callback

## Better support for same name resolves

If a name resolve has been initiated for a given name and a second easy
handle wants to resolve that same name as well, make it wait for the first
resolve to end up in the cache instead of doing a second separate resolve.
This is especially needed when adding many simultaneous handles using the
same hostname, when the DNS resolver can get flooded.

## Non-blocking `curl_multi_remove_handle()`

The multi interface has a few API calls that assume a blocking behavior, like
`add_handle()` and `remove_handle()`, which limits what we can do internally.
The multi API needs to be moved even more into a single function that
"drives" everything in a non-blocking manner and signals when something is
done. A remove or add would then only ask for the action to get started, and
then `multi_perform()` etc would still be called until the add/remove is
completed.

## Split connect and authentication process

The multi interface treats the authentication process as part of the connect
phase. As such, any failures during authentication do not trigger the
relevant QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.

## Edge-triggered sockets should work

The multi_socket API should work with edge-triggered socket events. One of
the internal actions that needs to be improved for this to work perfectly is
the `maxloops` handling in `transfer.c:readwrite_data()`.

## multi upkeep

In libcurl 7.62.0 we introduced `curl_easy_upkeep`. It unfortunately only
works on easy handles. We should introduce a version of that for the multi
handle, and also consider doing `upkeep` automatically on connections in the
connection pool when the multi handle is in use.

See [curl issue 3199](https://github.com/curl/curl/issues/3199)

## Virtual external sockets

libcurl performs operations on the given file descriptor that presume it is
a socket, and an application cannot replace them at the moment. Allowing an
application to fully replace those would allow a larger degree of freedom
and flexibility.

See [curl issue 5835](https://github.com/curl/curl/issues/5835)

## dynamically decide to use socketpair

For users who do not use `curl_multi_wait()` or do not care for
`curl_multi_wakeup()`, we could introduce a way to make libcurl NOT create a
socketpair in the multi handle.

See [curl issue 4829](https://github.com/curl/curl/issues/4829)

# Documentation

## Improve documentation about fork safety

See [curl issue 6968](https://github.com/curl/curl/issues/6968)

# FTP

## A fixed directory listing format

Since listing the contents of a remote directory with FTP returns the list
in whatever format and style the server likes, without any established or
even de facto standard existing, it would be a useful feature if curl could
parse the directory listing and output a general curl format that is fixed
and the same, independent of the server's choice. This would allow users to
better and more reliably extract information about remote content via FTP
directory listings.

## GSSAPI via Windows SSPI

In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
via third-party GSS-API libraries, such as MIT Kerberos, also add support for
GSSAPI authentication via Windows SSPI.

## STAT for LIST without data connection

Some FTP servers allow STAT for listing directories instead of using LIST,
and the response is then sent over the control connection instead of over
the otherwise used data connection.

This is not detailed in any FTP specification.

## Passive transfer could try other IP addresses

When doing FTP operations through a proxy at localhost, the reporter spotted
that curl only tried to connect once to the proxy, while it had multiple
addresses and a failed connect on one address should make it try the next.

After switching to passive mode (EPSV), curl could try all IP addresses for
`localhost`. Currently it tries `::1`, but it should also try `127.0.0.1`.

See [curl issue 1508](https://github.com/curl/curl/issues/1508)

# HTTP

## Provide the error body from a CONNECT response

When curl receives a body response from a CONNECT request to a proxy, it
always just reads and ignores it. It would make some users happy if curl
instead optionally could make that response body available. Via a new
callback? Through some other means?

See [curl issue 9513](https://github.com/curl/curl/issues/9513)

## Obey `Retry-After` in redirects

The `Retry-After` response header is said to dictate "the minimum time that
the user agent is asked to wait before issuing the redirected request" and
libcurl does not obey this.

See [curl issue 11447](https://github.com/curl/curl/issues/11447)

## Rearrange request header order

Server implementers often make an effort to detect browsers and to reject
clients they can detect to not match. One of the last details we cannot yet
control in libcurl's HTTP requests, which also can be exploited to detect
that libcurl is in fact used even when it tries to impersonate a browser, is
the order of the request headers. I propose that we introduce a new option in
which you give headers a value, and then when the HTTP request is built it
sorts the headers based on that number. We could then have internally created
headers use a default value so only headers that need to be moved have to be
specified.

## Allow SAN names in HTTP/2 server push

curl only allows an HTTP/2 push promise if the provided :authority header
value exactly matches the hostname given in the URL. It could be extended to
allow any name that would match the Subject Alternative Names in the
server's TLS certificate.

See [curl pull request 3581](https://github.com/curl/curl/pull/3581)

## `auth=` in URLs

Add the ability to specify the preferred authentication mechanism to use by
using `;auth=<mech>` in the login part of the URL.

For example:

`http://test:pass;auth=NTLM@example.com` would be equivalent to specifying
`--user test:pass;auth=NTLM` or `--user test:pass --ntlm` from the command
line.

Additionally this should be implemented for proxy base URLs as well.

## alt-svc should fallback if alt-svc does not work

The `alt-svc:` header provides a set of alternative services for curl to use
instead of the original. If the first attempted one fails, it should try the
next etc, and if all alternatives fail, go back to the original.

See [curl issue 4908](https://github.com/curl/curl/issues/4908)

## Require HTTP version X or higher

curl and libcurl provide options for trying higher HTTP versions (for
example HTTP/2) but then still allow the server to pick version 1.1. We
could consider adding a way to require a minimum version.

See [curl issue 7980](https://github.com/curl/curl/issues/7980)

# TELNET

## ditch stdin

Reading input (to send to the remote server) on stdin is a crappy solution
for library purposes. We need to invent a good way for the application to be
able to provide the data to send.

## ditch telnet-specific select

Make the telnet support's network `select()` loop go away and merge the code
into the main transfer loop. Until this is done, the multi interface does
not work for telnet.

## feature negotiation debug data

Add telnet feature negotiation data to the debug callback as header data.

## exit immediately upon connection if stdin is /dev/null

If it did, curl could be used to probe if there is a server listening on a
specific port. That is, the following command would exit immediately after
the connection is established, with exit code 0:

    curl -s --connect-timeout 2 telnet://example.com:80 </dev/null

# SMTP

## Pass NOTIFY option to CURLOPT_MAIL_RCPT

Is there a way to pass the NOTIFY option to the CURLOPT_MAIL_RCPT option?
Say by setting a string that already contains a bracket, for instance
something like this:
`curl_slist_append(recipients, "<foo@bar> NOTIFY=SUCCESS,FAILURE");`.

[curl issue 8232](https://github.com/curl/curl/issues/8232)

## Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the EHLO command.

## Add `CURLOPT_MAIL_CLIENT` option

Rather than use the URL to specify the mail client string to present in the
`HELO` and `EHLO` commands, libcurl should support a new `CURLOPT`
specifically for specifying this data, as the URL is non-standard and, to be
honest, a bit of a hack.

Please see the following thread for more information:
https://curl.se/mail/lib-2012-05/0178.html

# POP3

## Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the CAPA command.

# IMAP

## Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the CAPABILITY command.

# LDAP

## SASL based authentication mechanisms

Currently the LDAP module only supports `ldap_simple_bind_s()` in order to
bind to an LDAP server. However, this function sends username and password
details using the simple authentication mechanism (as clear text). It should
instead be possible to use `ldap_bind_s()`, specifying the security context
information ourselves.

## `CURLOPT_SSL_CTX_FUNCTION` for LDAPS

`CURLOPT_SSL_CTX_FUNCTION` works perfectly for HTTPS and email protocols,
but it has no effect for LDAPS connections.

[curl issue 4108](https://github.com/curl/curl/issues/4108)

## Paged searches on LDAP server

[curl issue 4452](https://github.com/curl/curl/issues/4452)

## Certificate-Based Authentication

LDAPS is not possible on macOS and Windows with Certificate-Based
Authentication.

[curl issue 9641](https://github.com/curl/curl/issues/9641)

# SMB

## Support modern versions

curl only supports version 1, which barely anyone is using anymore.

## File listing support

Add support for listing the contents of an SMB share. The output should
probably be the same as/similar to FTP.

## Honor file timestamps

The timestamp of the transferred file should reflect that of the original
file.

## Use NTLMv2

Currently the SMB authentication uses NTLMv1.

## Create remote directories

Support for creating remote directories when uploading a file to a directory
that does not exist on the server, just like `--ftp-create-dirs`.

# FILE

## Directory listing on non-POSIX

Listing the contents of a directory accessed with FILE only works on
platforms with `opendir()`. Support could be added for more systems, like
Windows.

# TLS

## `TLS-PSK` with OpenSSL

Transport Layer Security pre-shared key cipher suites (`TLS-PSK`) is a set
of cryptographic protocols that provide secure communication based on
pre-shared keys (`PSK`). These pre-shared keys are symmetric keys shared in
advance among the communicating parties.

[curl issue 5081](https://github.com/curl/curl/issues/5081)

## TLS channel binding

TLS 1.2 and 1.3 provide the ability to extract some secret data from the TLS
connection and use it in the client request (usually in some sort of
authentication) to ensure that the data sent is bound to the specific TLS
connection and cannot be successfully intercepted by a proxy. This
functionality can be used in a standard authentication mechanism such as
GSS-API or SCRAM, or in custom approaches like custom HTTP Authentication
headers.

For TLS 1.2, the binding type is usually `tls-unique`, and for TLS 1.3 it is
`tls-exporter`.

- https://datatracker.ietf.org/doc/html/rfc5929
- https://datatracker.ietf.org/doc/html/rfc9266
- [curl issue 9226](https://github.com/curl/curl/issues/9226)

## Defeat TLS fingerprinting

By changing the order of TLS extensions provided in the TLS handshake, it is
sometimes possible to circumvent TLS fingerprinting by servers. The TLS
extension order is of course not the only way to fingerprint a client.

## Consider OCSP stapling by default

Treat a negative response as a reason for aborting the connection. Since
OCSP stapling is presumed to get used much less in the future when Let's
Encrypt drops its OCSP support, the benefit of this might however be
limited.

[curl issue 15483](https://github.com/curl/curl/issues/15483)

## Provide callback for cert verification

OpenSSL supports a callback for customized verification of the peer
certificate, but this does not seem to be exposed in the libcurl APIs. Could
it be? There is so much that could be done if it were.

## Less memory massaging with Schannel

The Schannel backend does a lot of custom memory management we would rather
avoid: the repeated allocation + free in sends, and the custom memory +
realloc system for encrypted and decrypted data. That should be avoided and
reduced, for 1) efficiency and 2) safety.

## Support DANE

[DNS-Based Authentication of Named Entities
(DANE)](https://datatracker.ietf.org/doc/html/rfc6698) is a way to provide
SSL keys and certs over DNS using DNSSEC as an alternative to the CA model.

A patch was posted on March 7 2013
(https://curl.se/mail/lib-2013-03/0075.html) but it was a too simple
approach. See Daniel's comments: https://curl.se/mail/lib-2013-03/0103.html

Björn Stenberg once wrote a separate initial take on DANE that was never
completed.

## TLS record padding

TLS (1.3) offers optional record padding and OpenSSL provides an API for it.
It could make sense for libcurl to offer this ability to applications to
make traffic patterns harder to figure out by network traffic observers.

See [curl issue 5398](https://github.com/curl/curl/issues/5398)

## Support Authority Information Access certificate extension (AIA)

AIA can provide various things like certificate revocation lists, but more
importantly information about intermediate CA certificates that can allow
the validation path to be fulfilled when the HTTPS server does not itself
provide them.

Since AIA is about downloading certs on demand to complete a TLS handshake,
it is probably a bit tricky to get done right, and a serious privacy leak.

See [curl issue 2793](https://github.com/curl/curl/issues/2793)

## Some TLS options are not offered for HTTPS proxies

Some TLS related options to the command line tool and libcurl are only
provided for the server and not for HTTPS proxies: `--proxy-tls-max`,
`--proxy-tlsv1.3`, `--proxy-curves` and a few more. For more documentation
on this see: https://curl.se/libcurl/c/tls-options.html

[curl issue 12286](https://github.com/curl/curl/issues/12286)

## Make sure we forbid TLS 1.3 post-handshake authentication

RFC 8740 explains how using HTTP/2 must forbid the use of TLS 1.3
post-handshake authentication. We should make sure to live up to that.

See [curl issue 5396](https://github.com/curl/curl/issues/5396)

## Support the `clienthello` extension

Certain stupid networks and middle boxes have a problem with SSL handshake
packets that are within a certain size range, because of how that sets some
bits that previously (in older TLS versions) were not set. The `clienthello`
extension adds padding to avoid that size range.

- https://datatracker.ietf.org/doc/html/rfc7685
- [curl issue 2299](https://github.com/curl/curl/issues/2299)

## Share the CA cache

For TLS backends that support CA caching, it makes sense to allow the share
object to be used to store the CA cache as well via the share API. This
would allow multiple easy handles to reuse the CA cache and save themselves
from a lot of extra processing overhead.

## Add missing features to TLS backends

The feature matrix at https://curl.se/libcurl/c/tls-options.html shows which
features are supported by which TLS backends, and thus also where there are
feature gaps.

# Proxy

## Retry SOCKS handshake on address type not supported

When curl resolves a hostname, it might get a mix of IPv6 and IPv4 addresses
returned. curl might then use an IPv6 address with a SOCKS5 proxy, which -
if it does not support IPv6 - returns "Address type not supported" and curl
exits with that error.

Perhaps curl should in this situation instead retry the SOCKS handshake and
then use one of the IPv4 addresses for the target host.

See [curl issue 17222](https://github.com/curl/curl/issues/17222)

# Schannel

## Extend support for client certificate authentication

The existing support for the `-E`/`--cert` and `--key` options could be
extended by supplying a custom certificate and key in PEM format, see:
[Getting a Certificate for
Schannel](https://learn.microsoft.com/windows/win32/secauthn/getting-a-certificate-for-schannel)

## Extend support for the `--ciphers` option

The existing support for the `--ciphers` option could be extended by mapping
the OpenSSL/GnuTLS cipher suites to the Schannel APIs, see [Specifying
Schannel Ciphers and Cipher
Strengths](https://learn.microsoft.com/windows/win32/secauthn/specifying-schannel-ciphers-and-cipher-strengths).

## Add option to allow abrupt server closure

libcurl with Schannel errors out when a transfer ends without a known
termination point from the server (such as the length of the transfer, or an
SSL "close notify" alert), to protect against truncation attacks. Really old
servers may neglect to send any termination point. An option could be added
to ignore such abrupt closures.

[curl issue 4427](https://github.com/curl/curl/issues/4427)

# SASL

## Other authentication mechanisms

Add support for other authentication mechanisms such as `OLP`, `GSS-SPNEGO`
and others.

## Add `QOP` support to GSSAPI authentication

Currently the GSSAPI authentication only supports the default `QOP` of
`auth` (Authentication), whilst Kerberos V5 supports both `auth-int`
(Authentication with integrity protection) and `auth-conf` (Authentication
with integrity and privacy protection).

# SSH protocols

## Multiplexing

SSH is a perfectly fine multiplexed protocol which would allow libcurl to do
multiple parallel transfers from the same host using the same connection,
much in the same spirit as HTTP/2 does. libcurl however does not take
advantage of that ability but instead always creates a new connection for
new transfers even if an existing connection already exists to the host.

To fix this, libcurl would have to detect an existing connection and
"attach" the new transfer to the existing one.

## Handle growing SFTP files

The SFTP code in libcurl checks the file size *before* a transfer starts and
then proceeds to transfer exactly that amount of data. If the remote file
grows while the transfer is in progress, libcurl does not notice and does
not adapt. The OpenSSH SFTP command line tool does, and libcurl could also
just attempt to download more to see if there is more to get...

[curl issue 4344](https://github.com/curl/curl/issues/4344)

## Read keys from `~/.ssh/id_ecdsa`, `id_ed25519`

The libssh2 backend in curl is limited to only reading keys from `id_rsa`
and `id_dsa`, which makes it fail connecting to servers that use more modern
key types.

[curl issue 8586](https://github.com/curl/curl/issues/8586)

## Support `CURLOPT_PREQUOTE`

The two other `QUOTE` options are supported for SFTP, but this one was left
out for unknown reasons.

## SSH over HTTPS proxy for libssh

The SSH based protocols SFTP and SCP did not work over HTTPS proxy at all
until [curl pull request 6021](https://github.com/curl/curl/pull/6021)
brought the functionality with the libssh2 backend. Presumably, this support
can/could be added for the libssh backend as well.

## SFTP with `SCP://`

OpenSSH 9 switched their `scp` tool to speak SFTP under the hood. Going
forward it might be worth having curl or libcurl attempt SFTP if SCP fails,
to follow suit.

# Command line tool

## multi-threading

When asked to do transfers in parallel, the curl tool could be extended to
use a number of independent worker threads. This would allow faster
transfers in situations where curl becomes CPU bound.

Ideally, curl would (with permission) fire up new threads on demand when it
deems that it might be helpful. Perhaps, if it has more transfers to add and
the existing transfers make the CPU busy enough and there are more cores
available.

## sync

`curl --sync http://example.com/feed[1-100].rss` or
`curl --sync http://example.net/{index,calendar,history}.html`

Downloads a range or set of URLs using the remote name, but only if the
remote file is newer than the local file. A `Last-Modified` HTTP date header
should also be used to set the mod date on the downloaded file.

## glob posts

Globbing support for `-d` and `-F`, as in `curl -d "name=foo[0-9]" URL`.
This is easily scripted though.

## `--proxycommand`

Allow the user to make curl run a command and use its stdio to make requests
and not do any network connection by itself. Example:

    curl --proxycommand 'ssh pi@raspberrypi.local -W 10.1.1.75 80' \
      http://some/otherwise/unavailable/service.php

See [curl issue 4941](https://github.com/curl/curl/issues/4941)

## UTF-8 filenames in Content-Disposition

RFC 6266 documents how UTF-8 names can be passed to a client in the
`Content-Disposition` header, and curl does not support this.

[curl issue 1888](https://github.com/curl/curl/issues/1888)
|
|
|
|
## Option to make `-Z` merge lined based outputs on stdout
|
|
|
|
When a user requests multiple lined based files using `-Z` and sends them to
|
|
stdout, curl does not *merge* and send complete lines fine but may send
|
|
partial lines from several sources.
|
|
|
|
[curl issue 5175](https://github.com/curl/curl/issues/5175)
## specify which response codes make `-f`/`--fail` return error

Allow a user to specify exactly which response code(s) are fine and which are
errors for their specific use cases.
## Choose the name of file in braces for complex URLs

When using braces to download a list of URLs and you use complicated names
in the list of alternatives, it could be handy to allow curl to use other
names when saving.

Consider a way to offer that. Possibly like
`{partURL1:name1,partURL2:name2,partURL3:name3}` where the name following the
colon is the output name.

See [curl issue 221](https://github.com/curl/curl/issues/221)
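
Assuming the proposed (not implemented) `{part:name}` syntax, the parsing
step could look like this Python sketch; a real version would additionally
have to cope with colons inside URL parts:

```python
def parse_named_alternatives(glob):
    # Map each URL part inside the braces to its output name, or None
    # when no name is given. The syntax is only the suggestion from the
    # issue, not something curl supports.
    pairs = []
    for item in glob.strip("{}").split(","):
        part, _sep, name = item.partition(":")
        pairs.append((part, name or None))
    return pairs

pairs = parse_named_alternatives("{index:home,calendar:cal,history}")
```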
## improve how curl works in a Windows console window

If you pull the scroll bar when transferring with curl in a Windows console
window, the transfer is interrupted and can get disconnected. This can
probably be improved. See [curl issue 322](https://github.com/curl/curl/issues/322)
## Windows: set attribute 'archive' for completed downloads

The archive bit (`FILE_ATTRIBUTE_ARCHIVE`, `0x20`) separates files that shall
be backed up from those that are either not ready or have not changed.

Downloads in progress are neither ready to be backed up, nor should they be
opened by a different process. Only after a download has completed is it
sensible to include it in any integral snapshot or backup of the system.

See [curl issue 3354](https://github.com/curl/curl/issues/3354)
## keep running, read instructions from pipe/socket

Provide an option that makes curl not exit after the last URL (or even work
without a given URL), and instead read further instructions over a pipe or a
socket, so that a second, subsequent curl invocation can talk to the still
running instance and ask for transfers to get done, and thus reuse its
connection pool, DNS cache and more.
## Acknowledge `Ratelimit` headers

Consider a command line option that can make curl do multiple serial requests
while acknowledging server-specified [rate
limits](https://datatracker.ietf.org/doc/draft-ietf-httpapi-ratelimit-headers/).

See [curl issue 5406](https://github.com/curl/curl/issues/5406)
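
The draft defines fields such as `RateLimit-Remaining` and `RateLimit-Reset`;
between serial requests the logic could be as simple as this sketch (header
names follow the IETF httpapi draft and may change between draft versions):

```python
def ratelimit_delay(headers):
    # Return how many seconds to wait before the next request: when the
    # server says no requests remain in the current window, wait out the
    # advertised reset time; otherwise proceed immediately.
    remaining = int(headers.get("RateLimit-Remaining", "1"))
    reset = int(headers.get("RateLimit-Reset", "0"))
    return reset if remaining == 0 else 0

delay = ratelimit_delay({"RateLimit-Remaining": "0", "RateLimit-Reset": "12"})
```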
## `--dry-run`

A command line option that makes curl show exactly what it would do and send
if it would run for real.

See [curl issue 5426](https://github.com/curl/curl/issues/5426)
## `--retry` should resume

When `--retry` is used and curl actually retries a transfer, it should use
the already transferred data and do a resumed transfer for the rest (when
possible), so that it does not have to transfer again the data that was
already transferred before the retry.

See [curl issue 1084](https://github.com/curl/curl/issues/1084)
## retry on network is unreachable

The `--retry` option retries transfers on *transient failures*. We later added
`--retry-connrefused` to also retry for *connection refused* errors.

Suggestions have been brought up to also allow retry on *network is
unreachable* errors and, while that is totally reasonable, maybe we should
consider a way to make this more configurable than adding a new option for
every new error people want to retry for.

See [curl issue 1603](https://github.com/curl/curl/issues/1603)
## hostname sections in config files

Config files would be more powerful if they could set different configurations
depending on the used URL, hostname or possibly origin. A default `.curlrc`
could then set a specific user-agent only when doing requests against a
certain site.
## retry on the redirected-to URL

When curl is told to `--retry` a failed transfer and follows redirects, it
might get an HTTP 429 response from the redirected-to URL and not the original
one, which then could make curl decide to rather retry the transfer on that
URL only instead of redoing the original operation against the original URL.

Perhaps extra emphasized if the original transfer is a large POST that
redirects to a separate GET, and that GET is what gets the 429.

See [curl issue 5462](https://github.com/curl/curl/issues/5462)
## Set the modification date on an uploaded file

For SFTP and possibly FTP, curl could offer an option to set the modification
time for the uploaded file.

See [curl issue 5768](https://github.com/curl/curl/issues/5768)
## Use multiple parallel transfers for a single download

To enhance transfer speed, downloading a single URL can be split up into
multiple separate range downloads that get combined into a single final
result.

An ideal implementation would not use a specified number of parallel
transfers, but curl could:

- First start getting the full file as transfer A
- If after N seconds have passed and the transfer is expected to continue for
  M seconds or more, add a new transfer (B) that asks for the second half of
  A's content (and stop A at the middle).
- If splitting up the work improves the transfer rate, it could then be done
  again. Then again, etc up to a limit.

This way, if transfer B fails (because `Range:` is not supported) it lets
transfer A remain the single one. N and M could be set to some sensible
defaults.

See [curl issue 5774](https://github.com/curl/curl/issues/5774)
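
The splitting step above amounts to dividing the not-yet-downloaded bytes
into `Range:` intervals. A sketch of that arithmetic (function name and
policy are illustrative only):

```python
def split_remaining(total, done, splits):
    # Divide the undownloaded part [done, total) of a `total`-byte file
    # into `splits` byte ranges suitable for Range: requests. The last
    # range absorbs any rounding remainder.
    remaining = total - done
    size = remaining // splits
    ranges = []
    start = done
    for i in range(splits):
        end = total - 1 if i == splits - 1 else start + size - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

r = split_remaining(total=1000, done=200, splits=2)
```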
## Prevent terminal injection when writing to terminal

curl could offer an option to make escape sequences either non-functional or
avoid cursor moves or similar, to reduce the risk of a user getting tricked
by clever tricks.

See [curl issue 6150](https://github.com/curl/curl/issues/6150)
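
One conceivable filter is to neutralize the control characters that escape
sequences start with before data reaches the terminal. A Python sketch of
such a policy (what curl would actually keep or drop is an open question):

```python
def strip_terminal_controls(text):
    # Replace C0 controls (except newline and tab), DEL and C1 controls,
    # which is where escape sequences and cursor moves begin.
    return "".join(
        ch if ch in "\n\t" or 0x20 <= ord(ch) < 0x7f or ord(ch) > 0x9f
        else "?"
        for ch in text
    )

safe = strip_terminal_controls("ok\x1b[2Jgone\n")
```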
## `-J` and `-O` with %-encoded filenames

`-J`/`--remote-header-name` does not decode %-encoded filenames. RFC 6266
details how it should be done. The can of worms is basically that we have no
charset handling in curl and ASCII >=128 is a challenge for us. Not to mention
that decoding also means that we need to check for nastiness that is
attempted, like `../` sequences and the like. Probably everything to the left
of any embedded slashes should be cut off. See
https://curl.se/bug/view.cgi?id=1294

`-O` also does not decode %-encoded names, and while it has even less
information about the charset involved, the process is similar to the `-J`
case.

Note that we do not decode `-O` without the user asking for it with some other
means, since `-O` has always been documented to use the name exactly as
specified in the URL.
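
The cut-off rule suggested above (decode, then keep only what follows the
last embedded slash, which also defeats `../` tricks) can be sketched like
this in Python; charset issues are deliberately ignored here:

```python
from urllib.parse import unquote

def safe_decoded_filename(raw):
    # Percent-decode the name, then cut off everything to the left of any
    # embedded slash (treating backslash as a slash too), so traversal
    # attempts like ..%2F.. collapse to a plain basename.
    decoded = unquote(raw)
    return decoded.replace("\\", "/").rsplit("/", 1)[-1]

name = safe_decoded_filename("..%2F..%2Fetc%2Fpasswd")
```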
## `-J` with `-C -`

When using `-J` (with `-O`), automatically resumed downloading together with
`-C -` fails. Without `-J` the same command line works. This happens because
the resume logic is worked out before the target filename (and thus its
pre-transfer size) has been figured out. This can be improved.

https://curl.se/bug/view.cgi?id=1169
## `--retry` and transfer timeouts

If using `--retry` and the transfer times out (possibly due to using `-m` or
`-y`/`-Y`), the next attempt does not resume the transfer properly from what
was downloaded in the previous attempt, but truncates and restarts at the
original position where it was before the previous failed attempt. See
https://curl.se/mail/lib-2008-01/0080.html
# Build

## Enable `PIE` and `RELRO` by default

Especially for programs that execute curl via the command line, `PIE` renders
the exploitation of memory corruption vulnerabilities a lot more difficult.
This can be attributed to the additional information leaks being required to
conduct a successful attack. `RELRO`, on the other hand, marks different
binary sections like the `GOT` as read-only and thus kills a handful of
techniques that come in handy when attackers are able to arbitrarily
overwrite memory. A few tests showed that enabling these features had close
to no impact, neither on the performance nor on the general functionality of
curl.
## Do not use GNU libtool on OpenBSD

When compiling curl on OpenBSD with `--enable-debug`, it gives linking errors
when you use GNU libtool. This can be fixed by using the libtool provided by
OpenBSD itself. However, for this the user always needs to invoke make with
`LIBTOOL=/usr/bin/libtool`. It would be nice if the script could have some
magic to detect if this system is an OpenBSD host and then use the OpenBSD
libtool instead.

See [curl issue 5862](https://github.com/curl/curl/issues/5862)
## Package curl for Windows in a signed installer

See [curl issue 5424](https://github.com/curl/curl/issues/5424)

## make configure use `--cache-file` more and better

The configure script can be improved to cache more values so that repeated
invocations run much faster.

See [curl issue 7753](https://github.com/curl/curl/issues/7753)
# Test suite

## SSL tunnel

Make our own version of stunnel for simple port forwarding to enable HTTPS
and FTP-SSL tests without the stunnel dependency. It could also allow us to
provide test tools built with either OpenSSL or GnuTLS.

## more protocols supported

Extend the test suite to include more protocols. The telnet tests could just
do FTP or HTTP operations (for which we have test servers).

## more platforms supported

Make the test suite work on more platforms, such as OpenBSD and macOS. Remove
fork()s and it should become even more portable.
## write an SMB test server to replace impacket

This would allow us to run SMB tests on more platforms and do better and more
covering tests.

See [curl issue 15697](https://github.com/curl/curl/issues/15697)

## Use the RFC 6265 test suite

A test suite made for HTTP cookies (RFC 6265) by Adam Barth [is
available](https://github.com/abarth/http-state/tree/master/tests).

It would be good if someone would write a script/setup that would run curl
with that test suite and detect deviance. Ideally, that would even be
incorporated into our regular test suite.
## Run web-platform-tests URL tests

Run web-platform-tests URL tests and compare results with browsers on
`wpt.fyi`.

It would help us find issues to fix and help us document where our parser
differs from the WHATWG URL spec parsers.

See [curl issue 4477](https://github.com/curl/curl/issues/4477)
# MQTT

## Support rate-limiting

The rate-limiting logic is done in the PERFORMING state in multi.c but MQTT is
not (yet) implemented to use that.

## Support MQTTS

## Handle network blocks

Running the test suite with `CURL_DBG_SOCK_WBLOCK=90 ./runtests.pl -a mqtt`
makes several MQTT test cases fail where they should not.

## large payloads

libcurl unnecessarily allocates heap memory to hold the entire payload to be
sent when `CURLOPT_POSTFIELDS` is used, even though the data is already
perfectly accessible where it is. This is highly inefficient for larger
payloads. Additionally, libcurl does not support using the read callback for
sending MQTT, which is yet another way to avoid having to hold a large
payload in memory.
# TFTP

## TFTP does not convert LF to CRLF for `mode=netascii`

RFC 3617 defines that a TFTP transfer can be done using `netascii` mode. curl
does not support extracting that mode from the URL, nor does it treat such
transfers specially. It should probably do LF to CRLF translations for them.

See [curl issue 12655](https://github.com/curl/curl/issues/12655)
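
The translation itself is simple; per the netascii rules, LF becomes CR LF
and a bare CR becomes CR NUL. A Python sketch of the outbound direction:

```python
def to_netascii(data):
    # LF -> CR LF, bare CR -> CR NUL, everything else passes through.
    out = bytearray()
    for b in data:
        if b == 0x0A:
            out += b"\r\n"
        elif b == 0x0D:
            out += b"\r\x00"
        else:
            out.append(b)
    return bytes(out)

wire = to_netascii(b"hi\nthere\r")
```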
# Gopher

## Handle network blocks

Running the test suite with `CURL_DBG_SOCK_WBLOCK=90 ./runtests.pl -a 1200 to
1300` makes several Gopher test cases fail where they should not.
# Signals

## SIGPIPE

Since we control the IO functions for most protocols and disable SIGPIPE on
sends, libcurl could skip the special SIGPIPE ignore handling for those
transfers.