About this guide
Status: Stub
This guide is for contributors and reviewers to Rust's standard library.
Other places to find information
You might also find the following sites useful:
- std API docs -- rustdoc documentation for the standard library itself
- Forge -- contains documentation about Rust infrastructure, team procedures, and more
- libs-team -- the home base for the Rust Library Team, with descriptions of the team's procedures, active working groups, and the team calendar.
Getting started
Status: Stub
Welcome to the standard library!
This guide is an effort to capture some of the context needed to develop and maintain the Rust standard library. Its goal is to help members of the Libs team share the process and experience they bring to working on the standard library so other members can benefit. It’ll probably accumulate a lot of trivia that might also be interesting to members of the wider Rust community.
Where to get help
Maintaining the standard library can feel like a daunting responsibility!
Ping the @rust-lang/libs-impl
or @rust-lang/libs
teams on GitHub anytime.
You can also reach out in the t-libs
stream on Zulip.
A tour of the standard library
Status: Stub
The standard library codebase lives in the `rust-lang/rust` repository under the `/library` directory.
The standard library is made up of three crates that exist in a loose hierarchy:

- `core`: dependency-free and makes minimal assumptions about the runtime environment.
- `alloc`: depends on `core` and assumes allocator support. `alloc` doesn't re-export `core`'s public API, so it's not strictly above it in the layering.
- `std`: depends on `core` and `alloc` and re-exports both of their public APIs.
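As a quick illustration of that re-exporting, the same items are reachable through `core`, `alloc`, and `std` paths from ordinary Rust code (this snippet is just a demonstration, not std-internal code):

```rust
// `alloc` is available as a sysroot crate even in programs that use `std`.
extern crate alloc;

fn main() {
    // `std::mem::size_of` is a re-export of `core::mem::size_of`;
    // both paths name the same function.
    assert_eq!(core::mem::size_of::<u32>(), std::mem::size_of::<u32>());

    // `std::vec::Vec` is a re-export of `alloc::vec::Vec`, so these are
    // the same type and can be compared directly.
    let v: alloc::vec::Vec<i32> = vec![1, 2, 3];
    let w: std::vec::Vec<i32> = vec![1, 2, 3];
    assert_eq!(v, w);
}
```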
The Library Team
The Rust standard library and the official `rust-lang` crates are the responsibility of the Library Team.
The Library team makes sure the libraries are maintained,
PRs get reviewed, and issues get handled in time,
although that does not mean the team members are doing all the work themselves.
Many team members and other contributors are involved in this work,
and the team's main task is to guide and enable that work.
The Library API Team
A critical aspect of maintaining and evolving the standard library is its stability. Unlike other crates, we cannot release a new major version once in a while for backwards-incompatible changes. Every version of the standard library is semver-compatible with all previous versions since Rust 1.0.
This means that we have to be very careful with additions and changes to the public interface. We can deprecate things if necessary, but removing items or changing signatures is almost never an option. As a result, we are very careful with stabilizing additions to the standard library. Once something is stable, we're basically stuck with it forever.
To guard the stability and prevent us from adding things we'll regret later, we have a team that specifically focuses on the public API. Every RFC and stabilization of a library addition/change goes through a FCP process in which the members of the Library API Team are asked to sign off on the change.
The members of this team are not necessarily familiar with the implementation details of the standard library, but are experienced with API design and understand the details of breaking changes and how they are avoided.
The Library Contributors
In addition to the two teams above, we also have the Library Contributors, a somewhat more loosely defined team consisting of those who regularly contribute or review changes to the standard library.
Many of these contributors have a specific area of expertise, for example certain data structures or a specific operating system.
Team Membership
The Library Team will privately discuss potential new members for itself and Library Contributors, and extend an invitation once all members and the moderation team are on board with the potential addition.
See Membership for details.
r+ permission
All members of the Library Team, the Library API Team, and the Library Contributors have permission to approve PRs, and are expected to handle this with care. See Reviewing for details.
high-five rotation
Some of the members of the team are part of the 'high-five rotation'; the list from which the high-five bot picks reviewers to assign new PRs to.
Being a member of one of the teams does not come with the expectation to be on this list. However, members of this list should be on at least one of the three library teams. See Reviewing for details.
Meetings
Currently, both the Library Team and the Library API Team have a weekly hour-long meeting. Both meetings are open to non-members by default, although some might be (partially) private when agenda topics require that.
The meetings are held as video calls through Jitsi, but everyone is welcome to join without video or even audio. If you want to participate in meeting discussions through text, you can do so through Jitsi's chat function.
Meetings and their agendas are announced in the #t-libs/meetings channel on Zulip.
Agendas are generated by the fully-automatic-rust-libs-team-triage-meeting-agenda-generator, which includes all relevant issues and PRs, such as those tagged with `I-nominated` or `S-waiting-on-team`.
If you have any specific topics you'd like to have discussed in a meeting, feel free to open an issue on the `libs-team` repository and mark it as `I-nominated` and `T-libs` or `T-libs-api`. Or just leave a message in the Zulip channel.
All the meetings, including those of the library working groups, can be found on our Google Calendar:
Membership
Library Contributors
Membership of Library Contributors can be offered by the Library Team once a regular contributor has made a number of significant contributions over some period of time, and has shown good judgement about which changes are acceptable.
The Library Team and Library API Team
The Library Team and Library API Team pick their own members, although it's expected that new members come from the Library Contributors or another Rust team, and have already been involved in relevant library work.
The process
In all cases, the process of adding a new member goes as follows:
- A member of the Library (API) Team proposes the addition of a contributor on our private mailing list.
This proposal includes:
- A short description of what this person has been working on; how they have been contributing.
- A few specific examples of cases where this person clearly communicated their ideas.
- A few specific examples that show this person understands what are and what aren't acceptable changes.
Someone who makes significant contributions but usually needs large adjustments to their PRs might be a wonderful external contributor, but might not yet be a good match for membership with review permissions, which comes with the expectation of judging other contributions.
- Every single team member is asked for their input, and there must be no objections from any of them.
- Objections are ideally shared with the entire team, but may also be shared privately with the team lead or the moderation team.
- Objections ideally include examples showing behavior not in line with the expectations described under step 1 (or the code of conduct).
- The team lead reaches out to the moderation team to ask if they are aware of any objections.
- Only once all the team members and the moderation team agree is the new contributor invited to join.
- If the new contributor agrees too, a PR is sent to the team repository to add them.
- A blog post is published on the Internals Blog with a short introduction of the new contributor. The contents of this post can be based on some of the points brought up in the email from step 1. The contents are first checked with the new contributor before it is published.
Reviewing
Every member of the Library Team, Library API Team, and Library Contributors has 'r+ rights': that is, the ability to approve a PR and instruct `@bors` to test and merge it into Rust nightly.
If you decide to review a PR, thank you! But please keep in mind:
- You are always welcome to review any PR, regardless of who it is assigned to.
However, do not approve PRs unless:
- You are confident that nobody else wants to review it first. If you think someone else on the team would be a better person to review it, feel free to reassign it to them.
- You are confident in that part of the code.
- You are confident it will not cause any breakage or regress performance.
- It does not change the public API, including any stable promises we make in documentation, unless there's a finished FCP for the change.
- For unstable API changes/additions, it can be acceptable to skip the RFC process if the design is small and the change is uncontroversial. Make sure to involve `@rust-lang/libs-api` on such changes.
- Always be polite when reviewing: you are a representative of the Rust project, so it is expected that you will go above and beyond when it comes to the Code of Conduct.
High-five rotation
Some of the members of the team are part of the 'high-five rotation'; the list from which the high-five bot picks reviewers to assign new PRs to.
Being a member of one of the teams does not come with the expectation to be on this list. However, members of this list should be on at least one of the three library teams.
If the bot assigns you a PR for which you do not have the time or expertise to review, feel free to reassign it to someone else. To assign it to another random person picked from the high-five rotation, use `r? rust-lang/libs`.
If you find yourself unable to do any reviews for an extended period of time, it might be a good idea to (temporarily) remove yourself from the list. To add or remove yourself from the list, send a PR to change the high-five configuration file.
The feature lifecycle
Status: Stub
Landing new features
Status: Stub
New unstable features can be added and approved without going through a Libs FCP. There should be some buy-in from Libs that a feature is desirable and likely to be stabilized at some point before landing though.
If you're not sure, open an issue against `rust-lang/rust` first, suggesting the feature before developing it.
All public items in the standard library need a `#[stable]` or `#[unstable]` attribute on them. When a feature is first added, it gets an `#[unstable]` attribute.
Before a new feature is merged, those `#[unstable]` attributes need to be linked to a tracking issue.
Using tracking issues
Status: Stub
Tracking issues are used to facilitate discussion and report on the status of standard library features. All public APIs need a dedicated tracking issue. Some larger internal units of work may also use them.
Creating a tracking issue
There's a template that can be used to fill out the initial tracking issue. The Libs team also maintains a Cargo tool that can be used to quickly dump the public API of an unstable feature.
Working on an unstable feature
The current state of an unstable feature should be outlined in its tracking issue.
If there's a change you'd like to make to an unstable feature, it can be discussed on the tracking issue first.
Stabilizing features
Status: Stub
Feature stabilization involves adding `#[stable]` attributes. They may be introduced alongside new trait impls or replace existing `#[unstable]` attributes.
Stabilization goes through the Libs FCP process, which occurs on the tracking issue for the feature.
Before writing a PR to stabilize a feature
Check to see if an FCP has completed first. If not, either ping `@rust-lang/libs` or leave a comment asking about the status of the feature.
This will save you from opening a stabilization PR and having it need regular rebasing while the FCP process runs its course.
Writing a stabilization PR
- Replace any `#[unstable]` attributes for the given feature with stable ones. The value of the `since` field is usually the current `nightly` version.
- Remove any `#![feature()]` attributes that were previously required.
- Submit a PR with a stabilization report.
When there's `const` involved
Const functions can be stabilized in a PR that replaces `#[rustc_const_unstable]` attributes with `#[rustc_const_stable]` ones. The Constant Evaluation WG should be pinged for input on whether or not the `const`-ness is something we want to commit to. If it is an intrinsic being exposed that is const-stabilized then `@rust-lang/lang` should also be included in the FCP.
Check whether the function internally depends on other unstable `const` functions through `#[allow_internal_unstable]` attributes and consider how the function could be implemented if its internally unstable calls were removed. See the Stability attributes page for more details on `#[allow_internal_unstable]`.
Where `unsafe` and `const` are involved, e.g., for operations which are "unconst", the const safety argument for the usage should also be documented. That is, a `const fn` has additional determinism restrictions (e.g. run-time/compile-time results must correspond and the function's output only depends on its inputs) that must be preserved, and those should be argued when `unsafe` is used.
Deprecating features
Status: Stub
Public APIs aren't deleted from the standard library. If something shouldn't be used anymore it gets deprecated by adding a `#[rustc_deprecated]` attribute. Deprecations need to go through a Libs FCP, just like stabilizations do.
To try to reduce noise in the docs from deprecated items, they should be moved to the bottom of the module or `impl` block so they're rendered at the bottom of the docs page. The docs should then be cut down to focus on why the item is deprecated rather than how you might use it.
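The `#[rustc_deprecated]` attribute is internal to the standard library; outside it, the public analogue is `#[deprecated]`, which behaves the same way from a caller's perspective. A minimal sketch (the function names here are made up):

```rust
// The deprecated item stays available, but every use site gets a warning
// pointing callers at the replacement.
#[deprecated(since = "1.2.0", note = "use `checked_len` instead")]
pub fn len_hint(v: &[u8]) -> usize {
    checked_len(v)
}

pub fn checked_len(v: &[u8]) -> usize {
    v.len()
}

fn main() {
    // Deprecation only produces a warning, so existing callers keep compiling.
    #[allow(deprecated)]
    let n = len_hint(&[1, 2, 3]);
    assert_eq!(n, 3);
}
```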
Code considerations
Code considerations capture our experiences working on the standard library for all contributors. If you come across something new or unexpected then a code consideration is a great place to record it. Then other contributors and reviewers can find it by searching the guide.
How to write a code consideration
Code considerations are a bit like guidelines. They should try to make concrete recommendations that reviewers and contributors can refer to in discussions. A link to a real case where this was discussed or tripped us up is good to include.
Code considerations should also try to include a For reviewers section. These can call out specific things to look out for in reviews that could suggest the consideration applies. They can also include advice on how to apply it.
It's more important that we capture these experiences somehow though, so don't be afraid to drop some sketchy notes in and debate the details later!
Design
Status: Stub
Most of the considerations in this guide are quality in some sense. This section has some general advice on maintaining code quality in the standard library.
For reviewers
Think about how you would implement a feature and whether your approach would differ from what's being proposed. What trade-offs are being made? Is the weighting of those trade-offs the most appropriate?
Public API design
Status: Stub
Standard library APIs typically follow the API Guidelines, which were originally spawned from the standard library itself.
For reviewers
For new unstable features, look for any prior discussion of the proposed API to see what options and tradeoffs have already been considered. If in doubt, ping @rust-lang/libs
for input.
When to add #[must_use]
The #[must_use]
attribute can be applied to types or functions when failing to explicitly consider them or their output is almost certainly a bug.
As an example, Result
is #[must_use]
because failing to consider it may indicate a caller didn't realise a method was fallible:
```rust
// Is `check_status` infallible? Or did we forget to look at its `Result`?
check_status();
```
Operators like saturating_add
are also #[must_use]
because failing to consider their output might indicate a caller didn't realise they don't mutate the left-hand-side:
```rust
// A caller might assume this method mutates `a`
a.saturating_add(b);
```
Combinators produced by the Iterator
trait are #[must_use]
because failing to use them might indicate a caller didn't realize Iterator
s are lazy and won't actually do anything unless you drive them:
```rust
// A caller might not realise this code won't do anything
// unless they call `collect`, `count`, etc.
v.iter().map(|x| println!("{}", x));
```
On the other hand, thread::JoinHandle
isn't #[must_use]
because spawning fire-and-forget work is a legitimate pattern and forcing callers to explicitly ignore handles could be a nuisance rather than an indication of a bug:
```rust
thread::spawn(|| {
    // this background work isn't waited on
});
```
For reviewers
Look for any legitimate use-cases where #[must_use]
will cause callers to explicitly ignore values. If these are common then #[must_use]
probably isn't appropriate.
The #[must_use]
attribute only produces warnings, so it can technically be introduced at any time. To avoid accumulating nuisance warnings though ping @rust-lang/libs
for input before adding new #[must_use]
attributes to existing types and functions.
Breaking changes
Breaking changes should be avoided when possible. RFC 1105 lays the foundations for what constitutes a breaking change. Breakage may be deemed acceptable or not based on its actual impact, which can be approximated with a crater run.
There are strategies for mitigating breakage depending on the impact.
For changes where the value is high and the impact is high too:
- Using compiler lints to try phase out broken behavior.
If the impact isn't too high:
- Looping in maintainers of broken crates and submitting PRs to fix them.
For reviewers
Look out for changes to documented behavior and new trait impls for existing stable traits.
Breakage from changing behavior
Breaking changes aren't just limited to compilation failures. Behavioral changes to stable functions generally can't be accepted. See the home_dir
issue for an example.
An exception is when a behavior is specified in an RFC (such as IETF specifications for IP addresses). If a behavioral change fixes non-conformance then it can be considered a bug fix. In these cases, @rust-lang/libs
should still be pinged for input.
For reviewers
Look out for changes in existing implementations for stable functions, especially if assertions in test cases have been changed.
Breakage from new trait impls
A lot of PRs to the standard library add new impls for already stable traits, which can break consumers in many weird and wonderful ways. The following sections give some examples of breakage from new trait impls that may not be obvious just from the change made to the standard library.
Also see #[fundamental]
types for special considerations for types like &T
, &mut T
, Box<T>
, and other core smart pointers.
Inference breaks when a second generic impl is introduced
Rust will use the fact that there's only a single impl for a generic trait during inference. This breaks once a second impl makes the type of that generic ambiguous. Say we have:
```rust
// in `std`
impl From<&str> for Arc<str> { .. }

// in an external `lib`
let b = Arc::from("a");
```
then we add:
```diff
impl From<&str> for Arc<str> { .. }
+ impl From<&str> for Arc<String> { .. }
```
then
```rust
let b = Arc::from("a");
```
will no longer compile, because we've previously been relying on inference to figure out the `T` in `Arc<T>`.
This kind of breakage can be ok, but a crater run should estimate the scope.
Deref coercion breaks when a new impl is introduced
Rust will use deref coercion to find a valid trait impl if the arguments don't type check directly. This only seems to occur if there's a single impl so introducing a new one may break consumers relying on deref coercion. Say we have:
```rust
// in `std`
impl Add<&str> for String { .. }

impl Deref for String { type Target = str; .. }

// in an external `lib`
let a = String::from("a");
let b = String::from("b");

let c = a + &b;
```
then we add:
```diff
impl Add<&str> for String { .. }
+ impl Add<char> for String { .. }
```
then
```rust
let c = a + &b;
```
will no longer compile, because we won't attempt to use deref to coerce the `&String` into `&str`.
This kind of breakage can be ok, but a crater run should estimate the scope.
For reviewers
Look out for new #[stable]
trait implementations for existing stable traits.
`#[fundamental]` types
Status: Stub
Types annotated with the `#[fundamental]` attribute have different coherence rules. See RFC 1023 for details. That includes:
- `&T`
- `&mut T`
- `Box<T>`
- `Pin<T>`
Typically, the scope of breakage in new trait impls is limited to inference and deref-coercion. New trait impls on #[fundamental]
types may overlap with downstream impls and cause other kinds of breakage.
For reviewers
Look out for blanket trait implementations for fundamental types, like:
```rust
impl<'a, T> PublicTrait for &'a T
where
    T: SomeBound,
{
}
```
unless the blanket implementation is being stabilized along with PublicTrait
. In cases where we really want to do this, a crater run can help estimate the scope of the breakage.
Breaking changes to the prelude
Making changes to the prelude can easily cause breakage because it impacts all Rust code. In most cases the impact is limited since prelude items have the lowest priority in name lookup (lower than glob imports), but there are two cases where this doesn't work.
Traits
Adding a new trait to the prelude causes new methods to become available for existing types. This can cause name resolution errors in user code if a method with the same name is also available from a different trait.
For this reason, TryFrom
and TryInto
were only added to the prelude for the 2021 edition despite being stabilized in 2019.
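The kind of ambiguity involved can be reproduced with two unrelated traits that share a method name (both traits here are made up for illustration):

```rust
// With both traits in scope, a bare `v.convert()` call fails with
// "multiple applicable items in scope" -- exactly the breakage that
// adding a trait like `TryInto` to the prelude can inflict on user code.
trait IntoByte {
    fn convert(&self) -> u8;
}
trait IntoWide {
    fn convert(&self) -> u64;
}

struct Value(u8);

impl IntoByte for Value {
    fn convert(&self) -> u8 {
        self.0
    }
}
impl IntoWide for Value {
    fn convert(&self) -> u64 {
        self.0 as u64
    }
}

fn main() {
    let v = Value(7);
    // Callers are forced to disambiguate with fully qualified syntax:
    assert_eq!(IntoByte::convert(&v), 7u8);
    assert_eq!(IntoWide::convert(&v), 7u64);
}
```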
Macros
Unlike other item types, rustc's name resolution for macros does not support giving prelude macros a lower priority than other macros, even if the macro is unstable. As a general rule, avoid adding macros to the prelude except at edition boundaries.
This issue was encountered when trying to land the `assert_matches!` macro.
Safety and soundness
Status: Stub
Unsafe code blocks in the standard library need a comment explaining why they're ok. There's a lint that checks this. The unsafe code also needs to actually be ok.
The rules around what's sound and what's not can be subtle. See the Unsafe Code Guidelines WG for current thinking, and consider pinging @rust-lang/libs-impl
, @rust-lang/lang
, and/or somebody from the WG if you're in any doubt. We love debating the soundness of unsafe code, and the more eyes on it the better!
For reviewers
Look out for any unsafe blocks. If they're optimizations consider whether they're actually necessary. If the unsafe code is necessary then always feel free to ping somebody to help review it.
Look at the level of test coverage for the new unsafe code. Tests do catch bugs!
Generics and unsafe
Be careful of generic types that interact with unsafe code. Unless the generic type is bounded by an unsafe trait that specifies its contract, we can't rely on the results of generic types being reliable or correct.
A place where this commonly comes up is with the RangeBounds
trait. You might assume that the start and end bounds given by a RangeBounds
implementation will remain the same since it works through shared references. That's not necessarily the case though; an adversarial implementation may change the bounds between calls:
```rust
use std::cell::Cell;
use std::ops::{Bound, RangeBounds};

struct EvilRange(Cell<bool>);

impl RangeBounds<usize> for EvilRange {
    fn start_bound(&self) -> Bound<&usize> {
        Bound::Included(if self.0.get() {
            &1
        } else {
            self.0.set(true);
            &0
        })
    }
    fn end_bound(&self) -> Bound<&usize> {
        Bound::Unbounded
    }
}
```
This has caused problems in the past for code making safety assumptions based on bounds without asserting they stay the same.
Code using generic types to interact with unsafe should try to convert them into known types first, then work with those instead of the generic. For our example with `RangeBounds`, this may mean converting into a concrete `Range`, or a tuple of `(Bound, Bound)`.
For reviewers
Look out for generic functions that also contain unsafe blocks and consider how adversarial implementations of those generics could violate safety.
Drop and #[may_dangle]
A generic Type<T>
that manually implements Drop
should consider whether a #[may_dangle]
attribute is appropriate on T
. The Nomicon has some details on what #[may_dangle]
is all about.
If a generic Type<T>
has a manual drop implementation that may also involve dropping T
then dropck needs to know about it. If Type<T>
's ownership of T
is expressed through types that don't drop T
themselves such as ManuallyDrop<T>
, *mut T
, or MaybeUninit<T>
then Type<T>
also needs a PhantomData<T>
field to tell dropck that T
may be dropped. Types in the standard library that use the internal Unique<T>
pointer type don't need a PhantomData<T>
marker field. That's taken care of for them by Unique<T>
.
As a real-world example of where this can go wrong, consider an OptionCell<T>
that looks something like this:
```rust
struct OptionCell<T> {
    is_init: bool,
    value: MaybeUninit<T>,
}

impl<T> Drop for OptionCell<T> {
    fn drop(&mut self) {
        if self.is_init {
            // Safety: `value` is guaranteed to be fully initialized when `is_init` is true.
            // Safety: The cell is being dropped, so it can't be accessed again.
            unsafe { self.value.assume_init_drop() };
        }
    }
}
```
Adding a #[may_dangle]
attribute to this OptionCell<T>
that didn't have a PhantomData<T>
marker field opened up a soundness hole for T
's that didn't strictly outlive the OptionCell<T>
, and so could be accessed after being dropped in their own Drop
implementations. The correct application of #[may_dangle]
also required a PhantomData<T>
field:
```diff
struct OptionCell<T> {
    is_init: bool,
    value: MaybeUninit<T>,
+   _marker: PhantomData<T>,
}

- impl<T> Drop for OptionCell<T> {
+ unsafe impl<#[may_dangle] T> Drop for OptionCell<T> {
```
For reviewers
If there's a manual Drop
implementation, consider whether #[may_dangle]
is appropriate. If it is, make sure there's a PhantomData<T>
too either through Unique<T>
or as a field directly.
Using `mem` to break assumptions
`mem::replace` and `mem::swap`
Any value behind a &mut
reference can be replaced with a new one using mem::replace
or mem::swap
, so code shouldn't assume any reachable mutable references can't have their internals changed by replacing.
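For example (the `audit` function is made up to show the point):

```rust
use std::mem;

// Holding a `&mut String` is enough to swap the entire value out; nothing
// about the reference guarantees the original contents stay in place.
fn audit(s: &mut String) -> String {
    mem::replace(s, String::from("sanitized"))
}

fn main() {
    let mut secret = String::from("hunter2");
    let old = audit(&mut secret);
    assert_eq!(old, "hunter2");
    assert_eq!(secret, "sanitized");
}
```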
mem::forget
Rust doesn't guarantee destructors will run when a value is leaked (which can be done with mem::forget
), so code should avoid relying on them for maintaining safety. Remember, everyone poops.
It's ok not to run a destructor when a value is leaked because its storage isn't deallocated or repurposed. If the storage is initialized and is being deallocated or repurposed then destructors need to be run first, because memory may be pinned. Having said that, there can still be exceptions for skipping destructors when deallocating if you can guarantee there's never pinning involved.
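A minimal demonstration that a leaked value's destructor never runs:

```rust
use std::mem;
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        // Any safety-critical cleanup here is silently skipped on a leak.
        DROPPED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    mem::forget(Guard);
    // The destructor never ran, so code must not rely on it for safety.
    assert!(!DROPPED.load(Ordering::SeqCst));
}
```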
For reviewers
If there's a Drop
impl involved, look out for possible soundness issues that could come from that destructor never running.
Using unstable language features
The standard library codebase is a great place to try unstable language features, but we have to be careful about exposing them publicly. The following is a list of unstable language features that are ok to use within the standard library itself along with any caveats:
- Const generics
- Specialization
- Something missing? Please submit a PR to keep this list up-to-date!
For reviewers
Look out for any use of unstable language features in PRs, especially if any new #![feature]
attributes have been added.
Using const generics
Status: Stub
Complete const generics are currently unstable. You can track their progress here.
Const generics are ok to use in public APIs, so long as they fit in the min_const_generics
subset.
For reviewers
Look out for const operations on const generics in public APIs like:
```rust
pub fn extend_array<T, const N: usize, const M: usize>(arr: [T; N]) -> [T; N + 1] {
    ..
}
```
or for const generics that aren't integers, bools, or chars:
```rust
pub fn tag<const S: &'static str>() {
    ..
}
```
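For contrast, a signature like the following stays within the `min_const_generics` subset, since the integer parameter is only ever used as-is and never in a const expression (the function itself is a made-up illustration):

```rust
// OK under `min_const_generics`: an integer const parameter used directly
// in the array type `[u32; N]`, with no const expressions like `N + 1`.
pub fn sum<const N: usize>(arr: [u32; N]) -> u32 {
    arr.iter().sum()
}

fn main() {
    assert_eq!(sum([1, 2, 3]), 6);
    assert_eq!(sum([]), 0);
}
```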
Using specialization
Specialization is currently unstable. You can track its progress here.
We try to avoid leaning on specialization too heavily, limiting its use to optimizing specific implementations. These specialized optimizations use a private trait to find the correct implementation, rather than specializing the public method itself. Any use of specialization that changes how methods are dispatched for external callers should be carefully considered.
As an example of how to use specialization in the standard library, consider the case of creating an Rc<[T]>
from a &[T]
:
```rust
impl<T: Clone> From<&[T]> for Rc<[T]> {
    #[inline]
    fn from(v: &[T]) -> Rc<[T]> {
        unsafe { Self::from_iter_exact(v.iter().cloned(), v.len()) }
    }
}
```
It would be nice to have an optimized implementation for the case where T: Copy
:
```rust
impl<T: Copy> From<&[T]> for Rc<[T]> {
    #[inline]
    fn from(v: &[T]) -> Rc<[T]> {
        unsafe { Self::copy_from_slice(v) }
    }
}
```
Unfortunately we couldn't have both of these impls normally, because they'd overlap. This is where private specialization can be used to choose the right implementation internally. In this case, we use a trait called RcFromSlice
that switches the implementation:
```rust
impl<T: Clone> From<&[T]> for Rc<[T]> {
    #[inline]
    fn from(v: &[T]) -> Rc<[T]> {
        <Self as RcFromSlice<T>>::from_slice(v)
    }
}

/// Specialization trait used for `From<&[T]>`.
trait RcFromSlice<T> {
    fn from_slice(slice: &[T]) -> Self;
}

impl<T: Clone> RcFromSlice<T> for Rc<[T]> {
    #[inline]
    default fn from_slice(v: &[T]) -> Self {
        unsafe { Self::from_iter_exact(v.iter().cloned(), v.len()) }
    }
}

impl<T: Copy> RcFromSlice<T> for Rc<[T]> {
    #[inline]
    fn from_slice(v: &[T]) -> Self {
        unsafe { Self::copy_from_slice(v) }
    }
}
```
Only specialization using the `min_specialization` feature should be used. The full `specialization` feature is known to be unsound.
For reviewers
Look out for any default
annotations on public trait implementations. These will need to be refactored into a private dispatch trait. Also look out for uses of specialization that do more than pick a more optimized implementation.
Performance
Status: Stub
Changes to hot code might impact performance in consumers, for better or for worse. Appropriate benchmarks should give an idea of how performance characteristics change. For changes that affect `rustc` itself, you can also do a `rust-timer` run.
For reviewers
If a PR is focused on performance then try to get some idea of what the impact is. Also consider marking the PR as `rollup=never`.
When to #[inline]
Inlining is a trade-off between potential execution speed, compile time and code size. There's some discussion about it in this PR to the hashbrown
crate. From the thread:
> `#[inline]` is very different than simply just an inline hint. As I mentioned before, there's no equivalent in C++ for what `#[inline]` does. In debug mode rustc basically ignores `#[inline]`, pretending you didn't even write it. In release mode the compiler will, by default, codegen an `#[inline]` function into every single referencing codegen unit, and then it will also add `inlinehint`. This means that if you have 16 CGUs and they all reference an item, every single one is getting the entire item's implementation inlined into it.
You can add #[inline]
:
- To public, small, non-generic functions.
You shouldn't need #[inline]
:
- On methods that have any generics in scope.
- On methods on traits that don't have a default implementation.
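As a sketch of the first case, here is a small public non-generic function where the attribute is plausibly justified (the function itself is made up):

```rust
// Small, public, and non-generic: without `#[inline]`, this couldn't be
// inlined into other crates (absent LTO), since only one codegen unit
// would contain its implementation.
#[inline]
pub fn is_even(n: u32) -> bool {
    n % 2 == 0
}

fn main() {
    assert!(is_even(4));
    assert!(!is_even(7));
}
```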
`#[inline]` can always be introduced later, so if you're in doubt it can just be removed.
What about `#[inline(always)]`?
You should just about never need #[inline(always)]
. It may be beneficial for private helper methods that are used in a limited number of places or for trivial operators. A micro benchmark should justify the attribute.
For reviewers
#[inline]
can always be added later, so if there's any debate about whether it's appropriate feel free to defer it by removing the annotations for a start.
doc alias policy
Rust's documentation supports adding aliases to any declaration (such as a
function, type, or constant), using the syntax #[doc(alias = "name")]
. We
want to use doc aliases to help people find what they're looking for, while
keeping those aliases maintainable and high-value. This policy outlines the
cases where we add doc aliases, and the cases where we omit those aliases.
- We must have a reasonable expectation that people might search for the term
in the documentation search. Rust's documentation provides a name search, not
a full-text search; as such, we expect that people may search for plausible
names, but that for more general documentation searches they'll turn to a web
search engine.
- Related: we don't expect that people are currently searching Rust documentation for language-specific names from arbitrary languages they're familiar with, and we don't want to add that as a new documentation search feature; please don't add aliases based on your favorite language. Those mappings should live in separate guides or references. We do expect that people might look for the Rust name of a function they reasonably expect to exist in Rust (e.g. a system function or a C library function), to try to figure out what Rust called that function.
- The proposed alias must be a name we would plausibly have used for the declaration. For instance, `mkdir` for `create_dir`, or `rmdir` for `remove_dir`, or `popcnt` and `popcount` for `count_ones`, or `umask` for `mode`. This feeds into the reasonable expectation that someone might search for the name and expect to find it ("what did Rust call `mkdir`").
- There must be an obvious single target for the alias that is an exact analogue of the aliased name. We will not add the same alias to multiple declarations. (`const` and non-`const` versions of the same function are fine.) We will also not add an alias for a function that's only somewhat similar or related.
- As a special case for stdarch, aliases from exact assembly instruction names to the corresponding intrinsic function are welcome, as long as they don't conflict with other names.
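Applied to code, the policy looks like this; the `mkdir` alias for `create_dir` mirrors the real alias in `std::fs`, while the wrapper function here is a stand-in:

```rust
// Searching the rendered docs for `mkdir` will surface `create_dir`,
// since `mkdir` is a name Rust could plausibly have chosen for it.
#[doc(alias = "mkdir")]
pub fn create_dir(path: &str) -> std::io::Result<()> {
    std::fs::create_dir(path)
}

fn main() {
    // The alias only affects documentation search; calls are unchanged.
    assert!(create_dir("/definitely/not/an/existing/parent/dir").is_err());
}
```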
Tools and bots
Status: Stub
@bors
Status: Stub
PRs to the standard library aren’t merged manually using GitHub’s UI or by pushing remote branches. Everything goes through @bors
.
You can approve a PR with:
@bors r+
Rolling up
For Libs PRs, rolling up is usually fine, in particular if it's only a new unstable addition or if it only touches docs. See the rollup guidelines for more details on when to rollup.
@rust-timer
Status: Stub
You can kick off a performance test using @rust-timer
:
@bors try @rust-timer queue
@craterbot
Status: Stub
Crater is a tool that can test PRs against a public subset of the Rust ecosystem to estimate the scale of potential breakage.
You can kick off a crater run by first calling:
@bors try
Once that finishes, you can then call:
@craterbot check
to ensure crates compile, or:
@craterbot run mode=build-and-test