The actual algorithm is largely unchanged; it is just allowed to use
single-byte checks for common encodings.
It could certainly be optimized much further, as here again it often
scans from the front of the string when we're interested in the back of
it. But the algorithm has many Windows-only corner cases, so I'd rather
ship a good improvement now and eventually come back to it later.
Most of the improvement here comes from the reduced setup cost (avoiding
double null checks, avoiding duping the argument, etc.) and from skipping
the multi-byte checks.
```
compare-ruby: ruby 4.1.0dev (2026-01-19T03:51:30Z master 631bf19b37) +PRISM [arm64-darwin25]
built-ruby: ruby 4.1.0dev (2026-01-21T08:21:05Z opt-basename 7eb11745b2) +PRISM [arm64-darwin25]
```
| |compare-ruby|built-ruby|
|:----------|-----------:|---------:|
|long | 3.412M| 18.158M|
| | -| 5.32x|
|long_name | 1.981M| 8.580M|
| | -| 4.33x|
|withext | 3.200M| 12.986M|
| | -| 4.06x|
- `str_null_check` was performed twice, once by `FilePathStringValue`
and a second time by `StringValueCStr`.
- `StringValueCStr` was checking for the terminator presence, but we
don't care about that.
- `FilePathStringValue` calls `rb_str_new_frozen` to ensure `fname`
isn't mutated, but that's costly for such a check. Instead we
can do it in debug mode only.
- `rb_enc_get` is slow because it accepts arbitrary objects, even immediates,
so it has to do numerous type checks. Add a much faster `rb_str_enc_get`
for when we know we're dealing with a string.
- `rb_enc_copy` is slow for the same reasons; since we already have the
encoding, we can use `rb_enc_str_new` instead (see the sketch after this list).
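For illustration, a minimal sketch of the resulting pattern (the surrounding function is hypothetical, and `rb_str_enc_get` is assumed to have the same signature as `rb_enc_get`):
```c
#include "ruby.h"
#include "ruby/encoding.h"

/* Sketch only: `fname` is known to be a T_STRING, so the string-only
 * accessor added by this patch can skip rb_enc_get's type checks, and
 * rb_enc_str_new attaches the encoding at allocation time, replacing
 * the rb_str_new + rb_enc_copy pair. */
static VALUE
basename_part(VALUE fname, const char *ptr, long len)
{
    rb_encoding *enc = rb_str_enc_get(fname); /* no immediate/type checks */
    return rb_enc_str_new(ptr, len, enc);     /* one step, no rb_enc_copy */
}
```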
`chompdirsep` searches from the start of the string each time, which
may be necessary for certain encodings (I'm not even sure it is), but for
the common encodings it's very wasteful. Instead we can start from the
back of the string and only compare one or two characters in most cases.
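For the common single-byte-safe encodings, the back-scan can be as simple as this sketch (a hypothetical helper, not the actual `chompdirsep`, and ignoring the Windows-only separators):
```c
/* Trim trailing directory separators by walking backwards; valid only
 * for encodings where '/' cannot appear inside a multi-byte character.
 * Keeps a lone "/" so the root path survives. */
static long
chomp_dirsep_len(const char *ptr, long len)
{
    while (len > 1 && ptr[len - 1] == '/') len--;
    return len;
}
```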
Also replace `StringValueCStr` with the simpler `rb_str_null_check`,
as we only care about whether the string contains `NULL` bytes, not
whether it is `NULL`-terminated.
We also only check the final string for NULLs.
```
compare-ruby: ruby 4.1.0dev (2026-01-17T14:40:03Z master 00a3b71eaf) +PRISM [arm64-darwin25]
built-ruby: ruby 4.1.0dev (2026-01-18T12:55:15Z spedup-file-join 5948e92e03) +PRISM [arm64-darwin25]
warming up....
```
| |compare-ruby|built-ruby|
|:-------------|-----------:|---------:|
|two_strings | 2.477M| 19.317M|
| | -| 7.80x|
|many_strings | 547.577k| 10.298M|
| | -| 18.81x|
|array | 515.280k| 523.291k|
| | -| 1.02x|
|mixed | 621.840k| 635.422k|
| | -| 1.02x|
`File.join` is a hotspot for common libraries such as Zeitwerk
and Bootsnap. It has a fairly flexible signature, but 99% of
the time it's called with just two (or a small number of) UTF-8 strings.
If we optimistically optimize for that use case, we can cut down a large
number of type and encoding checks, significantly speeding up the method.
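A sketch of what such an optimistic guard could look like (a hypothetical helper; the actual patch may structure this differently):
```c
#include <stdbool.h>
#include "ruby.h"
#include "ruby/encoding.h"

/* True if every argument is a plain String that is ASCII-only or UTF-8,
 * i.e. safe for the single-byte fast path; otherwise fall back to the
 * generic slow path. */
static bool
join_fast_path_p(int argc, const VALUE *argv)
{
    for (int i = 0; i < argc; i++) {
        if (!RB_TYPE_P(argv[i], T_STRING)) return false;
        if (rb_enc_str_asciionly_p(argv[i])) continue; /* always safe */
        if (rb_enc_get_index(argv[i]) != rb_utf8_encindex()) return false;
    }
    return true;
}
```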
The one remaining expensive check we could try to optimize is `str_null_check`.
Given it's common to use the same base string for joining, we could memoize it.
Also we could precompute it for literal strings.
```
compare-ruby: ruby 4.1.0dev (2026-01-17T14:40:03Z master 00a3b71eaf) +PRISM [arm64-darwin25]
built-ruby: ruby 4.1.0dev (2026-01-18T12:10:38Z spedup-file-join 069bab58d4) +PRISM [arm64-darwin25]
warming up....
```
| |compare-ruby|built-ruby|
|:-------------|-----------:|---------:|
|two_strings | 2.475M| 9.444M|
| | -| 3.82x|
|many_strings | 551.975k| 2.346M|
| | -| 4.25x|
|array | 514.946k| 522.034k|
| | -| 1.01x|
|mixed | 621.236k| 633.189k|
| | -| 1.02x|
`RSTRUCT_LEN` / `RSTRUCT_GET` / `RSTRUCT_SET` all exist in two
versions: a public one that does type and frozen checks,
and a private one that doesn't.
This is error-prone because the public version
is always accessible, but the private one requires including
`internal/struct.h`. So you may have some code that relies on the
public version, and later on the private header is included and
silently changes the behavior.
This already led to a bug in YJIT & ZJIT:
https://github.com/ruby/ruby/pull/15835
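To illustrate the hazard, a sketch where the macro bodies are assumptions, not the real definitions:
```c
/* The public header provides a checked accessor: */
#define RSTRUCT_GET(st, idx) rb_struct_aref((st), INT2FIX(idx))

/* internal/struct.h redefines the same name without the checks: */
#undef RSTRUCT_GET
#define RSTRUCT_GET(st, idx) (RSTRUCT_CONST_PTR(st)[idx])

/* The same spelling now compiles to different code depending on which
 * headers a file (transitively) includes. */
```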
[Feature #21084]
# Summary
The current way of marking weak references uses `rb_gc_mark_weak(VALUE *ptr)`.
This presents challenges because Ruby's GC is incremental, meaning that if the
memory behind `ptr` changes (e.g. it is realloc'd or free'd), then we could have
an invalid memory access. This also overwrites `*ptr = Qundef` if `*ptr` is dead,
which prevents any cleanup from being run (e.g. freeing memory or deleting
entries from hash tables). This ticket proposes `rb_gc_declare_weak_references`,
which declares that an object has weak references and calls a cleanup function
after marking, allowing the object to clean up any memory for dead objects.
# Introduction
In [[Feature #19783]](https://bugs.ruby-lang.org/issues/19783), I introduced an
API allowing objects to mark weak references; the function signature looks like
this:
```c
void rb_gc_mark_weak(VALUE *ptr);
```
`rb_gc_mark_weak` is called during the marking phase of the GC to specify that
the memory at `ptr` holds a pointer to a Ruby object that is weakly referenced.
`rb_gc_mark_weak` appends this pointer to a list that is processed after the
marking phase of the GC. If the object at `*ptr` is no longer alive, then it
overwrites the object reference with a special value (`*ptr = Qundef`).
However, this API resulted in two challenges:
1. Ruby's default GC is incremental, which means that the GC is not run in one
phase, but rather split into chunks of work that interleave with Ruby
execution. The `ptr` passed into `rb_gc_mark_weak` could be on the malloc
heap, and that memory could be realloc'd or even free'd. We had to use
workarounds such as `rb_gc_remove_weak` to ensure that there were no illegal
memory accesses (see the sketch after this list). This made `rb_gc_mark_weak`
difficult to use, impacted runtime performance, and increased memory usage.
2. When an object dies, `rb_gc_mark_weak` only overwrites the reference with
`Qundef`. This means that if we want to do any cleanup (e.g. free a piece of
memory or delete a hash table entry), we could not do that and had to defer
this process elsewhere (e.g. during marking or runtime).
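To illustrate the first challenge, a sketch of the old pattern (the struct is hypothetical; `rb_gc_remove_weak` is assumed to take the owner and the slot address):
```c
struct weak_holder {
    VALUE self;   /* the owning object */
    VALUE entry;  /* weakly referenced object; GC may write Qundef here */
};

static void
holder_mark(void *data)
{
    struct weak_holder *h = data;
    rb_gc_mark_weak(&h->entry); /* registers &h->entry with the GC */
}

static void
holder_free(void *data)
{
    struct weak_holder *h = data;
    /* Required workaround: withdraw the registration before freeing,
     * or an in-flight incremental GC would write through a dangling
     * pointer. */
    rb_gc_remove_weak(h->self, &h->entry);
    ruby_xfree(h);
}
```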
In this ticket, I'm proposing a new API for weak references. Instead of an
object marking its weak references during the marking phase, the object declares
that it has weak references using the `rb_gc_declare_weak_references` function.
This declaration occurs during runtime (e.g. after the object has been created)
rather than during GC.
After an object declares that it has weak references, it will have its callback
function called after marking, as long as that object is alive. This callback
function can then call a special function, `rb_gc_handle_weak_references_alive_p`,
to determine whether its references are alive. This lets the callback
function do whatever it wants with the object, including any
cleanup work it needs.
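A sketch of the proposed flow; the two function names come from this ticket, but their exact signatures, the struct, and the callback wiring are assumptions:
```c
struct weak_holder {
    VALUE *entries;
    long len;
};

/* Called by the GC after the marking phase, as long as the owning
 * object is still alive. */
static void
holder_after_mark(struct weak_holder *h)
{
    long kept = 0;
    for (long i = 0; i < h->len; i++) {
        if (rb_gc_handle_weak_references_alive_p(h->entries[i])) {
            h->entries[kept++] = h->entries[i]; /* keep live references */
        }
        /* dead entries are dropped, and any per-entry memory or hash
         * table slots can be freed right here */
    }
    h->len = kept;
}

/* At creation time, the owner declares that it holds weak references:
 *   rb_gc_declare_weak_references(owner); */
```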
This significantly simplifies the code for `ObjectSpace::WeakMap` and
`ObjectSpace::WeakKeyMap` because it no longer needs to have the workarounds for
the limitations of `rb_gc_mark_weak`.
# Performance
The performance results below demonstrate that `ObjectSpace::WeakMap#[]=` is now
about 60% faster because the implementation has been simplified and the number
of allocations has been reduced. We can see that there is not a significant
impact on the performance of `ObjectSpace::WeakMap#[]`.
Base:
```
ObjectSpace::WeakMap#[]=
4.620M (± 6.4%) i/s (216.44 ns/i) - 23.342M in 5.072149s
ObjectSpace::WeakMap#[]
30.967M (± 1.9%) i/s (32.29 ns/i) - 154.998M in 5.007157s
```
Branch:
```
ObjectSpace::WeakMap#[]=
7.336M (± 2.8%) i/s (136.31 ns/i) - 36.755M in 5.013983s
ObjectSpace::WeakMap#[]
30.902M (± 5.4%) i/s (32.36 ns/i) - 155.901M in 5.064060s
```
Code:
```ruby
require "bundler/inline"

gemfile do
  source "https://rubygems.org"
  gem "benchmark-ips"
end

wmap = ObjectSpace::WeakMap.new
key = Object.new
val = Object.new
wmap[key] = val

Benchmark.ips do |x|
  x.report("ObjectSpace::WeakMap#[]=") do |times|
    i = 0
    while i < times
      wmap[Object.new] = Object.new
      i += 1
    end
  end

  x.report("ObjectSpace::WeakMap#[]") do |times|
    i = 0
    while i < times
      wmap[key]
      wmap[val] # does not exist
      i += 1
    end
  end
end
```
# Alternative designs
Currently, `rb_gc_declare_weak_references` is designed to be an internal-only
API. This allows us to assume the object types that call
`rb_gc_declare_weak_references`. In the future, if we want to open up this API
to third parties, we may want to change this function to something like:
```c
void rb_gc_add_cleaner(VALUE obj, void (*callback)(VALUE obj));
```
This will allow the third party to implement a custom `callback` that gets
called after the marking phase of GC to clean up any dead references. I chose
not to implement this design because it is less efficient, as we would need to
store a mapping from `obj` to `callback`, which requires extra memory.
Mutexes spend a significant amount of time in `rb_fiber_serial`
because it can't be inlined (except with LTO).
The fiber struct is opaque, so the function can't be defined as inlineable.
Ideally the whole fiber struct would not be opaque to the rest of
Ruby core, but that's tricky to do.
Instead we can store the fiber serial in the execution context
itself and make its access cheaper:
```
$ hyperfine './miniruby-baseline --yjit /tmp/mut.rb' './miniruby-inline-serial --yjit /tmp/mut.rb'
Benchmark 1: ./miniruby-baseline --yjit /tmp/mut.rb
Time (mean ± σ): 4.011 s ± 0.084 s [User: 3.977 s, System: 0.011 s]
Range (min … max): 3.950 s … 4.245 s 10 runs
Benchmark 2: ./miniruby-inline-serial --yjit /tmp/mut.rb
Time (mean ± σ): 3.495 s ± 0.150 s [User: 3.448 s, System: 0.009 s]
Range (min … max): 3.340 s … 3.869 s 10 runs
Summary
./miniruby-inline-serial --yjit /tmp/mut.rb ran
1.15 ± 0.05 times faster than ./miniruby-baseline --yjit /tmp/mut.rb
```
```ruby
i = 10_000_000
mut = Mutex.new

while i > 0
  i -= 1
  mut.synchronize { }
  mut.synchronize { }
  mut.synchronize { }
  mut.synchronize { }
  mut.synchronize { }
  mut.synchronize { }
  mut.synchronize { }
  mut.synchronize { }
  mut.synchronize { }
  mut.synchronize { }
end
```
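As a standalone illustration (plain C, not CRuby's actual structs) of why exposing the serial helps:
```c
#include <stdint.h>
#include <stdio.h>

/* Before: the type is opaque, so the accessor must live in another
 * translation unit, and every read is an out-of-line call (unless LTO
 * inlines it). */
struct fiber;                                  /* opaque */
uint64_t fiber_serial(const struct fiber *f);  /* cannot be inlined */

/* After: the serial is mirrored on a struct whose layout is visible,
 * so the accessor inlines to a single load. */
struct execution_context {
    uint64_t fiber_serial; /* assigned when the fiber is created */
};

static inline uint64_t
ec_fiber_serial(const struct execution_context *ec)
{
    return ec->fiber_serial;
}

int
main(void)
{
    struct execution_context ec = { .fiber_serial = 42 };
    printf("%llu\n", (unsigned long long)ec_fiber_serial(&ec));
    return 0;
}
```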
The changes are to `io.c` and `thread.c`.
I changed the API of two exported thread functions from `internal/thread.h` that
didn't look like they had any use in C extensions:
* `rb_thread_wait_for_single_fd`
* `rb_thread_io_wait`
I didn't change the following exported internal function because it's
used in C extensions:
* `rb_thread_fd_select`
I added a comment to note that this function, although internal, is used
in C extensions.
This rewrites the class allocator search to be faster. Instead of using
`RCLASS_SUPER`, which is now even slower due to Box, we can scan the
superclasses list to find a class where the allocator is defined.
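A generic illustration of the search (the names and layout are hypothetical, not the patch):
```c
#include <stddef.h>

typedef void *(*alloc_func_t)(void);

struct klass {
    alloc_func_t allocator;      /* NULL if not defined on this class */
    struct klass **superclasses; /* flat array: [root, ..., direct super] */
    size_t depth;                /* number of entries in superclasses */
};

/* Scan from the class itself toward the root for the nearest defined
 * allocator, instead of pointer-chasing super links one at a time. */
static alloc_func_t
search_allocator(const struct klass *k)
{
    if (k->allocator) return k->allocator;
    for (size_t i = k->depth; i-- > 0;) {
        if (k->superclasses[i]->allocator) return k->superclasses[i]->allocator;
    }
    return NULL;
}
```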
This also disallows allocating from an ICLASS. Previously I believe that
was only done for FrozenCore, and that was changed in
e596cf6e93dbf121e197cccfec8a69902e00eda3.
If we don't have uint128, then `rb_int128_to_numeric` emits a strict
aliasing warning:
```
numeric.c:3641:39: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
 3641 |         return rb_uint128_to_numeric(*(rb_uint128_t*)&n);
      |                                      ^~~~~~~~~~~~~~~~~
```
The `#ifdef` is currently not taken because include/ruby/backward.h is
not included at this point. The attribute is unnecessary in an internal
header, so remove it.
[Bug #21710]
- struct.c: `struct_alloc`
It is possible for a `NEWOBJ` tracepoint callback to write fields
into a newly allocated object before `struct_alloc` has had time
to set the `RSTRUCT_GEN_FIELDS` flags and such.
Hence we can't blindly initialize the `fields_obj` reference to `0`;
we first need to check that no fields were added yet (see the sketch
after this list).
- object.c: `rb_class_allocate_instance`
Similarly, if a `NEWOBJ` tracepoint tries to set fields on the object,
the `shape_id` must already be set, as it's required on `T_OBJECT` to
know where to write fields.
`NEWOBJ_OF` had to be refactored to accept a `shape_id`.
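A generic sketch of the ordering hazard from the first point (the helpers are hypothetical, not the actual CRuby code):
```c
/* The NEWOBJ tracepoint fires inside the allocation call, before the
 * allocator has finished initializing the object. */
VALUE obj = rb_newobj_of(klass, T_STRUCT); /* tracepoint may run here */

/* The callback may already have written fields, so the allocator must
 * check before initializing rather than blindly zeroing: */
if (!obj_has_fields_p(obj)) {  /* hypothetical check */
    obj_init_fields(obj, 0);   /* hypothetical initializer */
}
```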
Previously this held a pointer to the Fiber itself, which required
marking it (only implemented recently; prior to that it was
buggy). Using a monotonically increasing integer instead allows us to
avoid having a free function and keeps everything simpler.
My main motivations for making this change are that the root fiber lazily
allocates `self`, which makes a correct write barrier implementation
challenging, and that I want to avoid sending Mutexes to the remembered
set when they are locked by a short-lived Fiber.
This reverts commit 2998c8d6b99ec49925ebea42198b29c3e27b34a7.
We need to find a better way to fix this bug. Even with this refcount
change, errors were still being seen in CI. For now we need to remove
this failing test.
Calling the usual `rb_iclass_classext_free()` causes a SEGV because
duplicating a newer classext of an iclass had set the reference from the
superclass to the newer classext, and calling `rb_iclass_classext_free()`
deletes it.
Adopt a strict shareable rule:
* (basically) shareable objects only refer to shareable objects
* (exception) shareable objects can refer to unshareable objects,
but should not leak references to unshareable objects to the Ruby world
* `RB_OBJ_SET_SHAREABLE(obj)` makes `obj` shareable.
All objects reachable from `obj` should be shareable.
* `RB_OBJ_SET_FROZEN_SHAREABLE(obj)` is the same as above,
but freezes `obj` before making it shareable.
Also, `rb_gc_verify_shareable(obj)` is introduced to strictly check
that `obj` does not violate the shareable rule (a shareable object
only refers to shareable objects).
The rule has some exceptions (some shareable objects can refer to
unshareable objects; for example, a Ractor object, which is a shareable
object, can refer to Ractor-local objects).
To handle such cases, a `check_shareable` flag is also introduced.
A `STRICT_VERIFY_SHAREABLE` macro is also introduced to verify
the strict shareable rule at `SET_SHAREABLE`.
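A minimal usage sketch under these rules, assuming `str1` and `str2` have already been made shareable:
```c
VALUE ary = rb_ary_new_from_args(2, str1, str2); /* shareable contents */
RB_OBJ_SET_FROZEN_SHAREABLE(ary); /* freeze ary, then mark it shareable */
rb_gc_verify_shareable(ary);      /* check: ary only refers to shareables */
```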
The `st_insert` in `RCLASS_SET_NAMESPACE_CLASSEXT` may overwrite an existing
`rb_classext_t`, causing it to leak memory. This commit changes it to use
`st_update` to free the existing one before overwriting it.
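A sketch of the `st_update` pattern (the free function and the key/value names are assumptions):
```c
static int
set_classext_i(st_data_t *key, st_data_t *value, st_data_t arg, int existing)
{
    if (existing) {
        classext_free((rb_classext_t *)*value); /* free the old entry
                                                 * instead of leaking it */
    }
    *value = arg;
    return ST_CONTINUE; /* keep the (updated) entry */
}

/* st_insert(table, (st_data_t)ns, (st_data_t)ext) would silently
 * overwrite; st_update lets us free the previous value first: */
st_update(table, (st_data_t)ns, set_classext_i, (st_data_t)ext);
```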