core/sync/atomic.rs
//! Atomic types
//!
//! Atomic types provide primitive shared-memory communication between
//! threads, and are the building blocks of other concurrent
//! types.
//!
//! This module defines atomic versions of a select number of primitive
//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
//! [`AtomicI8`], [`AtomicU16`], etc.
//! Atomic types present operations that, when used correctly, synchronize
//! updates between threads.
//!
//! Atomic variables are safe to share between threads (they implement [`Sync`])
//! but they do not themselves provide the mechanism for sharing and follow the
//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
//! atomically-reference-counted shared pointer).
//!
//! [arc]: ../../../std/sync/struct.Arc.html
//!
//! Atomic types may be stored in static variables, initialized using
//! the constant initializers like [`AtomicBool::new`]. Atomic statics
//! are often used for lazy global initialization.
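//!
//! For example, a minimal sketch of a one-shot initialization flag built on an
//! atomic static (the `INITIALIZED` name and the setup step are illustrative):
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! static INITIALIZED: AtomicBool = AtomicBool::new(false);
//!
//! fn ensure_initialized() {
//!     if !INITIALIZED.swap(true, Ordering::AcqRel) {
//!         // Only the first caller reaches this point; run one-time setup here.
//!     }
//! }
//! ```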
//!
//! ## Memory model for atomic accesses
//!
//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically the rules
//! from the [`intro.races`][cpp-intro.races] section, without the "consume" memory ordering. Since
//! C++ uses an object-based memory model whereas Rust is access-based, a bit of translation work
//! has to be done to apply the C++ rules to Rust: whenever C++ talks about "the value of an
//! object", we understand that to mean the resulting bytes obtained when doing a read. When the C++
//! standard talks about "the value of an atomic object", this refers to the result of doing an
//! atomic load (via the operations provided in this module). A "modification of an atomic object"
//! refers to an atomic store.
//!
//! The end result is *almost* equivalent to saying that creating a *shared reference* to one of the
//! Rust atomic types corresponds to creating an `atomic_ref` in C++, with the `atomic_ref` being
//! destroyed when the lifetime of the shared reference ends. The main difference is that Rust
//! permits concurrent atomic and non-atomic reads to the same memory; those cause no issue in the
//! C++ memory model and are only forbidden in C++ because memory is partitioned into "atomic
//! objects" and "non-atomic objects" (with `atomic_ref` temporarily converting a non-atomic object
//! into an atomic object).
//!
//! The most important aspect of this model is that *data races* are undefined behavior. A data race
//! is defined as conflicting non-synchronized accesses where at least one of the accesses is
//! non-atomic. Here, accesses are *conflicting* if they affect overlapping regions of memory and at
//! least one of them is a write. (A `compare_exchange` or `compare_exchange_weak` that does not
//! succeed is not considered a write.) They are *non-synchronized* if neither of them
//! *happens-before* the other, according to the happens-before order of the memory model.
//!
//! The other possible cause of undefined behavior in the memory model is mixed-size accesses: Rust
//! inherits the C++ limitation that non-synchronized conflicting atomic accesses may not partially
//! overlap. In other words, every pair of non-synchronized atomic accesses must be either disjoint,
//! access the exact same memory (including using the same access size), or both be reads.
//!
//! Each atomic access takes an [`Ordering`] which defines how the operation interacts with the
//! happens-before order. These orderings behave the same as the corresponding [C++20 atomic
//! orderings][cpp_memory_order]. For more information, see the [nomicon].
//!
//! [cpp]: https://en.cppreference.com/w/cpp/atomic
//! [cpp-intro.races]: https://timsong-cpp.github.io/cppwp/n4868/intro.multithread#intro.races
//! [cpp_memory_order]: https://en.cppreference.com/w/cpp/atomic/memory_order
//! [nomicon]: ../../../nomicon/atomics.html
//!
//! ```rust,no_run undefined_behavior
//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
//! use std::mem::transmute;
//! use std::thread;
//!
//! let atomic = AtomicU16::new(0);
//!
//! thread::scope(|s| {
//!     // This is UB: conflicting non-synchronized accesses, at least one of which is non-atomic.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: the accesses do not conflict (as none of them performs any modification).
//!     // In C++ this would be disallowed since creating an `atomic_ref` precludes
//!     // further non-atomic accesses, but Rust does not have that limitation.
//!     s.spawn(|| atomic.load(Ordering::Relaxed)); // atomic load
//!     s.spawn(|| unsafe { atomic.as_ptr().read() }); // non-atomic read
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that the atomic
//!     // store happens-before the non-atomic write.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     handle.join().expect("thread won't panic"); // synchronize
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is UB: non-synchronized conflicting differently-sized atomic accesses.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that
//!     // the 1-byte store happens-before the 2-byte store.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     handle.join().expect("thread won't panic");
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//! ```
//!
//! # Portability
//!
//! All atomic types in this module are guaranteed to be [lock-free] if they're
//! available. This means they don't internally acquire a global mutex. Atomic
//! types and operations are not guaranteed to be wait-free. This means that
//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
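//!
//! As an illustration, here is a compare-and-swap loop of the kind such a
//! `fetch_or` lowering might use (a sketch, not the actual implementation;
//! the `fetch_or_via_cas` name is illustrative):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! fn fetch_or_via_cas(a: &AtomicUsize, val: usize, order: Ordering) -> usize {
//!     let mut prev = a.load(Ordering::Relaxed);
//!     loop {
//!         match a.compare_exchange_weak(prev, prev | val, order, Ordering::Relaxed) {
//!             Ok(x) => return x, // the previous value, like `fetch_or`
//!             Err(x) => prev = x, // someone else changed the value; retry
//!         }
//!     }
//! }
//! # assert_eq!(fetch_or_via_cas(&AtomicUsize::new(0b01), 0b10, Ordering::Relaxed), 0b01);
//! ```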
//!
//! Atomic operations may be implemented at the instruction layer with
//! larger-size atomics. For example, some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! have an impact on correctness of code; it's just something to be aware of.
//!
//! The atomic types in this module might not be available on all platforms. The
//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
//!
//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
//!   `AtomicI64` types.
//! * Legacy ARM platforms like ARMv4T and ARMv5TE have very limited hardware
//!   support for atomics. The bare-metal targets disable this module
//!   entirely, but the Linux targets [use the kernel] to assist (which comes
//!   with a performance penalty). It's not until ARMv6K onwards that ARM CPUs
//!   have support for load/store and Compare and Swap (CAS) atomics in hardware.
//! * ARMv6-M and ARMv8-M baseline targets (`thumbv6m-*` and
//!   `thumbv8m.base-*`) only provide `load` and `store` operations, and do
//!   not support Compare and Swap (CAS) operations, such as `swap`,
//!   `fetch_add`, etc. Full CAS support is available on ARMv7-M and ARMv8-M
//!   Mainline (`thumbv7m-*`, `thumbv7em*` and `thumbv8m.main-*`).
//!
//! [use the kernel]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
//!
//! Note that future platforms may be added that also do not have support for
//! some atomic operations. Maximally portable code will want to be careful
//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
//! generally the most portable, but even then they're not available everywhere.
//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
//! `core` does not.
//!
//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
//! compile based on the target's supported bit widths. It is a key-value
//! option set for each supported size, with values "8", "16", "32", "64",
//! "128", and "ptr" for pointer-sized atomics.
//!
//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
//!
//! # Atomic accesses to read-only memory
//!
//! In general, *all* atomic accesses on read-only memory are undefined behavior. For instance, attempting
//! to do a `compare_exchange` that will definitely fail (making it conceptually a read-only
//! operation) can still cause a segmentation fault if the underlying memory page is mapped read-only. Since
//! atomic `load`s might be implemented using compare-exchange operations, even a `load` can fault
//! on read-only memory.
//!
//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
//! is read-write; the only exceptions are memory created by `const` items or `static` items without
//! interior mutability, and memory that was specifically marked as read-only by the operating
//! system via platform-specific APIs.
//!
//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
//! undefined behavior. The exact size limit for what makes a load "sufficiently small" varies
//! depending on the target:
//!
//! | `target_arch` | Size limit |
//! |---------------|------------|
//! | `x86`, `arm`, `loongarch32`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
//!
//! Atomic loads that are larger than this limit, atomic loads with an ordering other than
//! `Relaxed`, and *all* atomic loads on targets not listed in the table might still be
//! read-only under certain conditions, but that is not a stable guarantee and should not be relied
//! upon.
//!
//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
//! acquire fence instead.
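//!
//! For example, a sketch of that pattern (the `acquire_load` name is illustrative):
//!
//! ```
//! use std::sync::atomic::{fence, AtomicU32, Ordering};
//!
//! fn acquire_load(a: &AtomicU32) -> u32 {
//!     // A sufficiently small relaxed load is allowed on read-only memory...
//!     let v = a.load(Ordering::Relaxed);
//!     // ...and the acquire fence upgrades it to acquire semantics.
//!     fence(Ordering::Acquire);
//!     v
//! }
//! # assert_eq!(acquire_load(&AtomicU32::new(7)), 7);
//! ```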
//!
//! # Examples
//!
//! A simple spinlock:
//!
//! ```ignore-wasm
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::{hint, thread};
//!
//! fn main() {
//!     let spinlock = Arc::new(AtomicUsize::new(1));
//!
//!     let spinlock_clone = Arc::clone(&spinlock);
//!
//!     let thread = thread::spawn(move || {
//!         spinlock_clone.store(0, Ordering::Release);
//!     });
//!
//!     // Wait for the other thread to release the lock
//!     while spinlock.load(Ordering::Acquire) != 0 {
//!         hint::spin_loop();
//!     }
//!
//!     if let Err(panic) = thread.join() {
//!         println!("Thread had an error: {panic:?}");
//!     }
//! }
//! ```
//!
//! Keep a global count of live threads:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
//!
//! // Note that Relaxed ordering doesn't synchronize anything
//! // except the global thread counter itself.
//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
//! // Note that this number may not be true at the moment of printing
//! // because some other thread may have changed the static value already.
//! println!("live threads: {}", old_thread_count + 1);
//! ```

#![stable(feature = "rust1", since = "1.0.0")]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
#![rustc_diagnostic_item = "atomic_mod"]
// Clippy complains about the pattern of "safe function calling unsafe function taking pointers".
// This happens with AtomicPtr intrinsics but is fine, as the pointers clippy is concerned about
// are just normal values that get loaded/stored, but not dereferenced.
#![allow(clippy::not_unsafe_ptr_arg_deref)]

use self::Ordering::*;
use crate::cell::UnsafeCell;
#[cfg(not(feature = "ferrocene_subset"))]
use crate::hint::spin_loop;
use crate::intrinsics::AtomicOrdering as AO;
#[cfg(not(feature = "ferrocene_subset"))]
use crate::{fmt, intrinsics};

// Ferrocene addition: imports for certified subset
#[cfg(feature = "ferrocene_subset")]
#[rustfmt::skip]
use crate::intrinsics;

trait Sealed {}

/// A marker trait for primitive types which can be modified atomically.
///
/// This is an implementation detail for <code>[Atomic]\<T></code> which may disappear or be replaced at any time.
///
/// # Safety
///
/// Types implementing this trait must be primitives that can be modified atomically.
///
/// The associated `Self::AtomicInner` type must have the same size and bit validity as `Self`,
/// but may have a higher alignment requirement, so the following `transmute`s are sound:
///
/// - `&mut Self::AtomicInner` as `&mut Self`
/// - `Self` as `Self::AtomicInner` or the reverse
#[unstable(
    feature = "atomic_internals",
    reason = "implementation detail which may disappear or be replaced at any time",
    issue = "none"
)]
#[expect(private_bounds)]
pub unsafe trait AtomicPrimitive: Sized + Copy + Sealed {
    /// Temporary implementation detail.
    type AtomicInner: Sized;
}

macro impl_atomic_primitive(
    $Atom:ident $(<$T:ident>)? ($Primitive:ty),
    size($size:literal),
    align($align:literal) $(,)?
) {
    impl $(<$T>)? Sealed for $Primitive {}

    #[unstable(
        feature = "atomic_internals",
        reason = "implementation detail which may disappear or be replaced at any time",
        issue = "none"
    )]
    #[cfg(target_has_atomic_load_store = $size)]
    unsafe impl $(<$T>)? AtomicPrimitive for $Primitive {
        type AtomicInner = $Atom $(<$T>)?;
    }
}

impl_atomic_primitive!(AtomicBool(bool), size("8"), align(1));
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicI8(i8), size("8"), align(1));
impl_atomic_primitive!(AtomicU8(u8), size("8"), align(1));
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicI16(i16), size("16"), align(2));
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicU16(u16), size("16"), align(2));
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicI32(i32), size("32"), align(4));
impl_atomic_primitive!(AtomicU32(u32), size("32"), align(4));
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicI64(i64), size("64"), align(8));
impl_atomic_primitive!(AtomicU64(u64), size("64"), align(8));
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicI128(i128), size("128"), align(16));
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicU128(u128), size("128"), align(16));

#[cfg(target_pointer_width = "16")]
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(2));
#[cfg(target_pointer_width = "32")]
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(4));
#[cfg(target_pointer_width = "64")]
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(8));

#[cfg(target_pointer_width = "16")]
impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(2));
#[cfg(target_pointer_width = "32")]
impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(4));
#[cfg(target_pointer_width = "64")]
impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(8));

#[cfg(target_pointer_width = "16")]
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(2));
#[cfg(target_pointer_width = "32")]
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(4));
#[cfg(target_pointer_width = "64")]
#[cfg(not(feature = "ferrocene_subset"))]
impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(8));

/// A memory location which can be safely modified from multiple threads.
///
/// This has the same size and bit validity as the underlying type `T`. However,
/// the alignment of this type is always equal to its size, even on targets where
/// `T` has alignment less than its size.
///
/// For more about the differences between atomic types and non-atomic types as
/// well as information about the portability of this type, please see the
/// [module-level documentation].
///
/// **Note:** This type is only available on platforms that support atomic loads
/// and stores of `T`.
///
/// [module-level documentation]: crate::sync::atomic
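///
/// # Examples
///
/// A minimal sketch; `Atomic<u32>` resolves to the same type as [`AtomicU32`]:
///
/// ```
/// #![feature(generic_atomic)]
/// use std::sync::atomic::{Atomic, AtomicU32, Ordering};
///
/// // The alias and the concrete atomic type are interchangeable.
/// let v: Atomic<u32> = AtomicU32::new(5);
/// assert_eq!(v.load(Ordering::Relaxed), 5);
/// ```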
#[unstable(feature = "generic_atomic", issue = "130539")]
#[cfg(not(feature = "ferrocene_subset"))]
pub type Atomic<T> = <T as AtomicPrimitive>::AtomicInner;

// Some architectures don't have byte-sized atomics, which results in LLVM
// emulating them using a LL/SC loop. However for AtomicBool we can take
// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
// instead, which LLVM can emulate using a larger atomic OR/AND operation.
//
// This list should only contain architectures which have word-sized atomic-or/
// atomic-and instructions but don't natively support byte-sized atomics.
#[cfg(target_has_atomic = "8")]
const EMULATE_ATOMIC_BOOL: bool = cfg!(any(
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "loongarch32",
    target_arch = "loongarch64"
));

/// A boolean type which can be safely shared between threads.
///
/// This type has the same size, alignment, and bit validity as a [`bool`].
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of `u8`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicBool"]
#[repr(C, align(1))]
pub struct AtomicBool {
    v: UnsafeCell<u8>,
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_subset"))]
impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

// Send is implicitly implemented for AtomicBool.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Sync for AtomicBool {}

/// A raw pointer type which can be safely shared between threads.
///
/// This type has the same size and bit validity as a `*mut T`.
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of pointers. Its size depends on the target pointer's size.
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicPtr"]
#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
#[cfg(not(feature = "ferrocene_subset"))]
pub struct AtomicPtr<T> {
    p: UnsafeCell<*mut T>,
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_subset"))]
impl<T> Default for AtomicPtr<T> {
    /// Creates a null `AtomicPtr<T>`.
    fn default() -> AtomicPtr<T> {
        AtomicPtr::new(crate::ptr::null_mut())
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_subset"))]
unsafe impl<T> Send for AtomicPtr<T> {}
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_subset"))]
unsafe impl<T> Sync for AtomicPtr<T> {}

/// Atomic memory orderings
///
/// Memory orderings specify the way atomic operations synchronize memory.
/// In its weakest [`Ordering::Relaxed`], only the memory directly touched by the
/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
/// operations synchronize other memory while additionally preserving a total order of such
/// operations across all threads.
///
/// Rust's memory orderings are [the same as those of
/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
///
/// For more information see the [nomicon].
///
/// [nomicon]: ../../../nomicon/atomics.html
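///
/// # Examples
///
/// A sketch of release/acquire message passing between two threads (the
/// `DATA`/`READY` names are illustrative):
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
/// use std::thread;
///
/// static DATA: AtomicU32 = AtomicU32::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// let t = thread::spawn(|| {
///     DATA.store(42, Ordering::Relaxed);
///     READY.store(true, Ordering::Release); // publish the flag
/// });
/// while !READY.load(Ordering::Acquire) {} // spin until the flag is observed
/// // The acquire load synchronized with the release store, so the data is visible.
/// assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// t.join().unwrap();
/// ```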
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(not(feature = "ferrocene_subset"), derive(Copy, Clone, Debug, Eq, PartialEq, Hash))]
#[cfg_attr(feature = "ferrocene_subset", derive(Copy, Clone))]
#[non_exhaustive]
#[rustc_diagnostic_item = "Ordering"]
pub enum Ordering {
    /// No ordering constraints, only atomic operations.
    ///
    /// Corresponds to [`memory_order_relaxed`] in C++20.
    ///
    /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Relaxed,
    /// When coupled with a store, all previous operations become ordered
    /// before any load of this value with [`Acquire`] (or stronger) ordering.
    /// In particular, all previous writes become visible to all threads
    /// that perform an [`Acquire`] (or stronger) load of this value.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] load operation!
    ///
    /// This ordering is only applicable for operations that can perform a store.
    ///
    /// Corresponds to [`memory_order_release`] in C++20.
    ///
    /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Release,
    /// When coupled with a load, if the loaded value was written by a store operation with
    /// [`Release`] (or stronger) ordering, then all subsequent operations
    /// become ordered after that store. In particular, all subsequent loads will see data
    /// written before the store.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] store operation!
    ///
    /// This ordering is only applicable for operations that can perform a load.
    ///
    /// Corresponds to [`memory_order_acquire`] in C++20.
    ///
    /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Acquire,
    /// Has the effects of both [`Acquire`] and [`Release`] together:
    /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
    ///
    /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
    /// not performing any store and hence it has just [`Acquire`] ordering. However,
    /// `AcqRel` will never perform [`Relaxed`] accesses.
    ///
    /// This ordering is only applicable for operations that combine both loads and stores.
    ///
    /// Corresponds to [`memory_order_acq_rel`] in C++20.
    ///
    /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    AcqRel,
    /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
    /// operations, respectively) with the additional guarantee that all threads see all
    /// sequentially consistent operations in the same order.
    ///
    /// Corresponds to [`memory_order_seq_cst`] in C++20.
    ///
    /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    SeqCst,
}

/// An [`AtomicBool`] initialized to `false`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(
    since = "1.34.0",
    note = "the `new` function is now preferred",
    suggestion = "AtomicBool::new(false)"
)]
#[cfg(not(feature = "ferrocene_subset"))]
pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);

#[cfg(target_has_atomic_load_store = "8")]
impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
    #[must_use]
    pub const fn new(v: bool) -> AtomicBool {
        AtomicBool { v: UnsafeCell::new(v as u8) }
    }

    /// Creates a new `AtomicBool` from a pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{self, AtomicBool};
    ///
    /// // Get a pointer to an allocated value
    /// let ptr: *mut bool = Box::into_raw(Box::new(false));
    ///
    /// assert!(ptr.cast::<AtomicBool>().is_aligned());
    ///
    /// {
    ///     // Create an atomic view of the allocated value
    ///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
    ///
    ///     // Use `atomic` for atomic operations, possibly share it with other threads
    ///     atomic.store(true, atomic::Ordering::Relaxed);
    /// }
    ///
    /// // It's ok to non-atomically access the value behind `ptr`,
    /// // since the reference to the atomic ended its lifetime in the block above
    /// assert_eq!(unsafe { *ptr }, true);
    ///
    /// // Deallocate the value
    /// unsafe { drop(Box::from_raw(ptr)) }
    /// ```
    ///
    /// # Safety
    ///
    /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that this is always true, since
    ///   `align_of::<AtomicBool>() == 1`).
    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
    ///   allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
    ///   sizes, without synchronization.
    ///
    /// [valid]: crate::ptr#safety
    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
    #[inline]
    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
        // SAFETY: guaranteed by the caller
        unsafe { &*ptr.cast() }
    }

    /// Returns a mutable reference to the underlying [`bool`].
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = AtomicBool::new(true);
    /// assert_eq!(*some_bool.get_mut(), true);
    /// *some_bool.get_mut() = false;
    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub fn get_mut(&mut self) -> &mut bool {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(self.v.get() as *mut bool) }
    }

    /// Gets atomic access to a `&mut bool`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = true;
    /// let a = AtomicBool::from_mut(&mut some_bool);
    /// a.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool, false);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub fn from_mut(v: &mut bool) -> &mut Self {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut bool as *mut Self) }
    }

    /// Gets non-atomic access to a `&mut [AtomicBool]` slice.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```ignore-wasm
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
    ///
    /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
    /// assert_eq!(view, [false; 10]);
    /// view[..5].copy_from_slice(&[true; 5]);
    ///
    /// std::thread::scope(|s| {
    ///     for t in &some_bools[..5] {
    ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
    ///     }
    ///
    ///     for f in &some_bools[5..] {
    ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
    ///     }
    /// });
    /// ```
    #[inline]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
    }

    /// Gets atomic access to a `&mut [bool]` slice.
    ///
    /// # Examples
    ///
    /// ```rust,ignore-wasm
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [false; 10];
    /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
    /// std::thread::scope(|s| {
    ///     for i in 0..a.len() {
    ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
    ///     }
    /// });
    /// assert_eq!(some_bools, [true; 10]);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
    }

    /// Consumes the atomic and returns the contained value.
    ///
    /// This is safe because passing `self` by value guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let some_bool = AtomicBool::new(true);
    /// assert_eq!(some_bool.into_inner(), true);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub const fn into_inner(self) -> bool {
        self.v.into_inner() != 0
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn load(&self, order: Ordering) -> bool {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe { atomic_load(self.v.get(), order) != 0 }
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn store(&self, val: bool, order: Ordering) {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe {
            atomic_store(self.v.get(), val as u8, order);
        }
    }

    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        if EMULATE_ATOMIC_BOOL {
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is always the previous value. If it is equal to `current`, then the value
    /// was updated.
    ///
    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
    /// happens, and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
    ///
    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
    /// memory orderings:
    ///
    /// Original | Success | Failure
    /// -------- | ------- | -------
    /// Relaxed  | Relaxed | Relaxed
    /// Acquire  | Acquire | Acquire
    /// Release  | Release | Relaxed
    /// AcqRel   | AcqRel  | Acquire
    /// SeqCst   | SeqCst  | SeqCst
    ///
    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
    /// rather than to infer success vs failure based on the value that was read.
    ///
    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
    /// which allows the compiler to generate better assembly code when the compare and swap
    /// is used in a loop.
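    ///
    /// For example, a sketch of one such migration, with the failure ordering
    /// taken from the table above:
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let flag = AtomicBool::new(false);
    /// // Before: let old = flag.compare_and_swap(false, true, Ordering::AcqRel);
    /// let old = flag
    ///     .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
    ///     .unwrap_or_else(|x| x);
    /// assert_eq!(old, false);
    /// ```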
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[deprecated(
        since = "1.50.0",
        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
    )]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
            Ok(x) => x,
            Err(x) => x,
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_exchange(true,
    ///                                       false,
    ///                                       Ordering::Acquire,
    ///                                       Ordering::Relaxed),
    ///            Ok(true));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_exchange(true, true,
    ///                                       Ordering::SeqCst,
    ///                                       Ordering::Acquire),
    ///            Err(false));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    ///
    /// # Considerations
    ///
    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
    /// of CAS operations. In particular, a load of the value followed by a successful
    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
    /// changed the value in the interim. This is usually important when the *equality* check in
    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
    /// does not necessarily imply identity. In this case, `compare_exchange` can lead to the
    /// [ABA problem].
    ///
    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            // Pick the strongest ordering from success and failure.
            let order = match (success, failure) {
                (SeqCst, _) => SeqCst,
                (_, SeqCst) => SeqCst,
                (AcqRel, _) => AcqRel,
                (_, AcqRel) => {
                    panic!("there is no such thing as an acquire-release failure ordering")
                }
                (Release, Acquire) => AcqRel,
                (Acquire, _) => Acquire,
                (_, Acquire) => Acquire,
                (Release, Relaxed) => Release,
                (_, Release) => panic!("there is no such thing as a release failure ordering"),
                (Relaxed, Relaxed) => Relaxed,
            };
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            match unsafe {
                atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
            } {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    ///
    /// # Considerations
    ///
    /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual
    /// downsides of CAS operations. In particular, a load of the value followed by a successful
    /// `compare_exchange_weak` with the previous load *does not ensure* that other threads have not
    /// changed the value in the interim. This is usually important when the *equality* check in
    /// the `compare_exchange_weak` is being used to check the *identity* of a value, but equality
    /// does not necessarily imply identity. In this case, `compare_exchange_weak` can lead to the
    /// [ABA problem].
    ///
    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            return self.compare_exchange(current, new, success, failure);
        }

        // SAFETY: data races are prevented by atomic intrinsics.
        match unsafe {
            atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
        } {
            Ok(x) => Ok(x != 0),
            Err(x) => Err(x != 0),
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
        // We can't use atomic_nand here because it can result in a bool with
        // an invalid value. This happens because the atomic operation is done
        // with an 8-bit integer internally, which would set the upper 7 bits.
        // So we just use fetch_xor or swap instead.
        if val {
            // !(x & true) == !x
            // We must invert the bool.
            self.fetch_xor(true, order)
        } else {
            // !(x & false) == true
            // We must set the bool to true.
            self.swap(true, order)
        }
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "not" with a boolean value.
    ///
    /// Performs a logical "not" operation on the current value, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[stable(feature = "atomic_bool_fetch_not", since = "1.81.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_not(&self, order: Ordering) -> bool {
        self.fetch_xor(true, order)
    }

    /// Returns a mutable pointer to the underlying [`bool`].
    ///
    /// Doing non-atomic reads and writes on the resulting boolean can be a data race.
    /// This method is mostly useful for FFI, where the function signature may use
    /// `*mut bool` instead of `&AtomicBool`.
    ///
    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
    /// atomic types work with interior mutability. All modifications of an atomic change the value
    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
    /// requirements of the [memory model].
    ///
    /// # Examples
    ///
    /// ```ignore (extern-declaration)
    /// # fn main() {
    /// use std::sync::atomic::AtomicBool;
    ///
    /// extern "C" {
    ///     fn my_atomic_op(arg: *mut bool);
    /// }
    ///
    /// let mut atomic = AtomicBool::new(true);
    /// unsafe {
    ///     my_atomic_op(atomic.as_ptr());
    /// }
    /// # }
    /// ```
    ///
    /// [memory model]: self#memory-model-for-atomic-accesses
    #[inline]
    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
    #[rustc_never_returns_null_ptr]
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub const fn as_ptr(&self) -> *mut bool {
        self.v.get().cast()
    }
1330
1331 /// Fetches the value, and applies a function to it that returns an optional
1332 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1333 /// returned `Some(_)`, else `Err(previous_value)`.
1334 ///
1335 /// Note: This may call the function multiple times if the value has been
1336 /// changed by other threads in the meantime, as long as the function
1337 /// returns `Some(_)`, but the function will have been applied only once to
1338 /// the stored value.
1339 ///
1340 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1341 /// ordering of this operation. The first describes the required ordering for
1342 /// when the operation finally succeeds while the second describes the
1343 /// required ordering for loads. These correspond to the success and failure
1344 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1345 ///
1346 /// Using [`Acquire`] as success ordering makes the store part of this
1347 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1348 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1349 /// [`Acquire`] or [`Relaxed`].
1350 ///
1351 /// **Note:** This method is only available on platforms that support atomic
1352 /// operations on `u8`.
1353 ///
1354 /// # Considerations
1355 ///
1356 /// This method is not magic; it is not provided by the hardware, and does not act like a
1357 /// critical section or mutex.
1358 ///
1359 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1360 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1361 ///
1362 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1363 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1364 ///
1365 /// # Examples
1366 ///
1367 /// ```rust
1368 /// use std::sync::atomic::{AtomicBool, Ordering};
1369 ///
1370 /// let x = AtomicBool::new(false);
1371 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1372 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1373 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1374 /// assert_eq!(x.load(Ordering::SeqCst), false);
1375 /// ```
1376 #[inline]
1377 #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1378 #[cfg(target_has_atomic = "8")]
1379 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1380 #[cfg(not(feature = "ferrocene_subset"))]
1381 #[rustc_should_not_be_called_on_const_items]
1382 pub fn fetch_update<F>(
1383 &self,
1384 set_order: Ordering,
1385 fetch_order: Ordering,
1386 mut f: F,
1387 ) -> Result<bool, bool>
1388 where
1389 F: FnMut(bool) -> Option<bool>,
1390 {
1391 let mut prev = self.load(fetch_order);
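// Spurious `compare_exchange_weak` failures are fine here: a failed exchange
// reports the freshly observed value, which just feeds the next iteration.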
1392 while let Some(next) = f(prev) {
1393 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1394 x @ Ok(_) => return x,
1395 Err(next_prev) => prev = next_prev,
1396 }
1397 }
1398 Err(prev)
1399 }
1400
1401 /// Fetches the value, and applies a function to it that returns an optional
1402 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1403 /// returned `Some(_)`, else `Err(previous_value)`.
1404 ///
1405 /// See also: [`update`](`AtomicBool::update`).
1406 ///
1407 /// Note: This may call the function multiple times if the value has been
1408 /// changed by other threads in the meantime, as long as the function
1409 /// returns `Some(_)`, but the function will have been applied only once to
1410 /// the stored value.
1411 ///
1412 /// `try_update` takes two [`Ordering`] arguments to describe the memory
1413 /// ordering of this operation. The first describes the required ordering for
1414 /// when the operation finally succeeds while the second describes the
1415 /// required ordering for loads. These correspond to the success and failure
1416 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1417 ///
1418 /// Using [`Acquire`] as success ordering makes the store part of this
1419 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1420 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1421 /// [`Acquire`] or [`Relaxed`].
1422 ///
1423 /// **Note:** This method is only available on platforms that support atomic
1424 /// operations on `u8`.
1425 ///
1426 /// # Considerations
1427 ///
1428 /// This method is not magic; it is not provided by the hardware, and does not act like a
1429 /// critical section or mutex.
1430 ///
1431 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1432 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1433 ///
1434 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1435 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1436 ///
1437 /// # Examples
1438 ///
1439 /// ```rust
1440 /// #![feature(atomic_try_update)]
1441 /// use std::sync::atomic::{AtomicBool, Ordering};
1442 ///
1443 /// let x = AtomicBool::new(false);
1444 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1445 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1446 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1447 /// assert_eq!(x.load(Ordering::SeqCst), false);
1448 /// ```
1449 #[inline]
1450 #[unstable(feature = "atomic_try_update", issue = "135894")]
1451 #[cfg(target_has_atomic = "8")]
1452 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1453 #[cfg(not(feature = "ferrocene_subset"))]
1454 #[rustc_should_not_be_called_on_const_items]
1455 pub fn try_update(
1456 &self,
1457 set_order: Ordering,
1458 fetch_order: Ordering,
1459 f: impl FnMut(bool) -> Option<bool>,
1460 ) -> Result<bool, bool> {
1461 // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
1462 // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
1463 self.fetch_update(set_order, fetch_order, f)
1464 }
1465
1466 /// Fetches the value, and applies a function to it that returns a new value.
1467 /// The new value is stored and the old value is returned.
1468 ///
1469 /// See also: [`try_update`](`AtomicBool::try_update`).
1470 ///
1471 /// Note: This may call the function multiple times if the value has been changed by other threads in
1472 /// the meantime, but the function will have been applied only once to the stored value.
1473 ///
1474 /// `update` takes two [`Ordering`] arguments to describe the memory
1475 /// ordering of this operation. The first describes the required ordering for
1476 /// when the operation finally succeeds while the second describes the
1477 /// required ordering for loads. These correspond to the success and failure
1478 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1479 ///
1480 /// Using [`Acquire`] as success ordering makes the store part
1481 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
1482 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1483 ///
1484 /// **Note:** This method is only available on platforms that support atomic operations on `u8`.
1485 ///
1486 /// # Considerations
1487 ///
1488 /// This method is not magic; it is not provided by the hardware, and does not act like a
1489 /// critical section or mutex.
1490 ///
1491 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1492 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1493 ///
1494 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1495 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1496 ///
1497 /// # Examples
1498 ///
1499 /// ```rust
1500 /// #![feature(atomic_try_update)]
1501 ///
1502 /// use std::sync::atomic::{AtomicBool, Ordering};
1503 ///
1504 /// let x = AtomicBool::new(false);
1505 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), false);
1506 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), true);
1507 /// assert_eq!(x.load(Ordering::SeqCst), false);
1508 /// ```
1509 #[inline]
1510 #[unstable(feature = "atomic_try_update", issue = "135894")]
1511 #[cfg(target_has_atomic = "8")]
1512 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1513 #[cfg(not(feature = "ferrocene_subset"))]
1514 #[rustc_should_not_be_called_on_const_items]
1515 pub fn update(
1516 &self,
1517 set_order: Ordering,
1518 fetch_order: Ordering,
1519 mut f: impl FnMut(bool) -> bool,
1520 ) -> bool {
1521 let mut prev = self.load(fetch_order);
1522 loop {
1523 match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
1524 Ok(x) => break x,
1525 Err(next_prev) => prev = next_prev,
1526 }
1527 }
1528 }
1529}
1530
1531#[cfg(target_has_atomic_load_store = "ptr")]
1532#[cfg(not(feature = "ferrocene_subset"))]
1533impl<T> AtomicPtr<T> {
1534 /// Creates a new `AtomicPtr`.
1535 ///
1536 /// # Examples
1537 ///
1538 /// ```
1539 /// use std::sync::atomic::AtomicPtr;
1540 ///
1541 /// let ptr = &mut 5;
1542 /// let atomic_ptr = AtomicPtr::new(ptr);
1543 /// ```
1544 #[inline]
1545 #[stable(feature = "rust1", since = "1.0.0")]
1546 #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
1547 pub const fn new(p: *mut T) -> AtomicPtr<T> {
1548 AtomicPtr { p: UnsafeCell::new(p) }
1549 }
1550
1551 /// Creates a new `AtomicPtr` from a pointer.
1552 ///
1553 /// # Examples
1554 ///
1555 /// ```
1556 /// use std::sync::atomic::{self, AtomicPtr};
1557 ///
1558 /// // Get a pointer to an allocated value
1559 /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
1560 ///
1561 /// assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
1562 ///
1563 /// {
1564 /// // Create an atomic view of the allocated value
1565 /// let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
1566 ///
1567 /// // Use `atomic` for atomic operations, possibly share it with other threads
1568 /// atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
1569 /// }
1570 ///
1571 /// // It's ok to non-atomically access the value behind `ptr`,
1572 /// // since the reference to the atomic ended its lifetime in the block above
1573 /// assert!(!unsafe { *ptr }.is_null());
1574 ///
1575 /// // Deallocate the value
1576 /// unsafe { drop(Box::from_raw(ptr)) }
1577 /// ```
1578 ///
1579 /// # Safety
1580 ///
1581 /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1582 /// can be bigger than `align_of::<*mut T>()`).
1583 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1584 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
1585 /// allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
1586 /// sizes, without synchronization.
1587 ///
1588 /// [valid]: crate::ptr#safety
1589 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
1590 #[inline]
1591 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
1592 #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
1593 pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
1594 // SAFETY: guaranteed by the caller
1595 unsafe { &*ptr.cast() }
1596 }
1597
1598 /// Returns a mutable reference to the underlying pointer.
1599 ///
1600 /// This is safe because the mutable reference guarantees that no other threads are
1601 /// concurrently accessing the atomic data.
1602 ///
1603 /// # Examples
1604 ///
1605 /// ```
1606 /// use std::sync::atomic::{AtomicPtr, Ordering};
1607 ///
1608 /// let mut data = 10;
1609 /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1610 /// let mut other_data = 5;
1611 /// *atomic_ptr.get_mut() = &mut other_data;
1612 /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1613 /// ```
1614 #[inline]
1615 #[stable(feature = "atomic_access", since = "1.15.0")]
1616 pub fn get_mut(&mut self) -> &mut *mut T {
1617 self.p.get_mut()
1618 }
1619
1620 /// Gets atomic access to a pointer.
1621 ///
1622 /// **Note:** This function is only available on targets where `AtomicPtr<T>` has the same alignment as `*mut T`.
1623 ///
1624 /// # Examples
1625 ///
1626 /// ```
1627 /// #![feature(atomic_from_mut)]
1628 /// use std::sync::atomic::{AtomicPtr, Ordering};
1629 ///
1630 /// let mut data = 123;
1631 /// let mut some_ptr = &mut data as *mut i32;
1632 /// let a = AtomicPtr::from_mut(&mut some_ptr);
1633 /// let mut other_data = 456;
1634 /// a.store(&mut other_data, Ordering::Relaxed);
1635 /// assert_eq!(unsafe { *some_ptr }, 456);
1636 /// ```
1637 #[inline]
1638 #[cfg(target_has_atomic_equal_alignment = "ptr")]
1639 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1640 pub fn from_mut(v: &mut *mut T) -> &mut Self {
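// Compile-time alignment check: the `[]` pattern only matches a zero-length
// array, so this compiles only when the two alignments are exactly equal
// (a greater `*mut ()` alignment would make the subtraction overflow).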
1641 let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
1642 // SAFETY:
1643 // - the mutable reference guarantees unique ownership.
1644 // - the alignment of `*mut T` and `Self` is the same on all platforms
1645 // supported by rust, as verified above.
1646 unsafe { &mut *(v as *mut *mut T as *mut Self) }
1647 }
1648
1649 /// Gets non-atomic access to a `&mut [AtomicPtr]` slice.
1650 ///
1651 /// This is safe because the mutable reference guarantees that no other threads are
1652 /// concurrently accessing the atomic data.
1653 ///
1654 /// # Examples
1655 ///
1656 /// ```ignore-wasm
1657 /// #![feature(atomic_from_mut)]
1658 /// use std::ptr::null_mut;
1659 /// use std::sync::atomic::{AtomicPtr, Ordering};
1660 ///
1661 /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
1662 ///
1663 /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
1664 /// assert_eq!(view, [null_mut::<String>(); 10]);
1665 /// view
1666 /// .iter_mut()
1667 /// .enumerate()
1668 /// .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
1669 ///
1670 /// std::thread::scope(|s| {
1671 /// for ptr in &some_ptrs {
1672 /// s.spawn(move || {
1673 /// let ptr = ptr.load(Ordering::Relaxed);
1674 /// assert!(!ptr.is_null());
1675 ///
1676 /// let name = unsafe { Box::from_raw(ptr) };
1677 /// println!("Hello, {name}!");
1678 /// });
1679 /// }
1680 /// });
1681 /// ```
1682 #[inline]
1683 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1684 pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
1685 // SAFETY: the mutable reference guarantees unique ownership.
1686 unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
1687 }
1688
1689 /// Gets atomic access to a slice of pointers.
1690 ///
1691 /// **Note:** This function is only available on targets where `AtomicPtr<T>` has the same alignment as `*mut T`.
1692 ///
1693 /// # Examples
1694 ///
1695 /// ```ignore-wasm
1696 /// #![feature(atomic_from_mut)]
1697 /// use std::ptr::null_mut;
1698 /// use std::sync::atomic::{AtomicPtr, Ordering};
1699 ///
1700 /// let mut some_ptrs = [null_mut::<String>(); 10];
1701 /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
1702 /// std::thread::scope(|s| {
1703 /// for i in 0..a.len() {
1704 /// s.spawn(move || {
1705 /// let name = Box::new(format!("thread{i}"));
1706 /// a[i].store(Box::into_raw(name), Ordering::Relaxed);
1707 /// });
1708 /// }
1709 /// });
1710 /// for p in some_ptrs {
1711 /// assert!(!p.is_null());
1712 /// let name = unsafe { Box::from_raw(p) };
1713 /// println!("Hello, {name}!");
1714 /// }
1715 /// ```
1716 #[inline]
1717 #[cfg(target_has_atomic_equal_alignment = "ptr")]
1718 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1719 pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
1720 // SAFETY:
1721 // - the mutable reference guarantees unique ownership.
1722 // - the alignment of `*mut T` and `Self` is the same on all platforms
1723 // supported by rust, as verified above.
1724 unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
1725 }
1726
1727 /// Consumes the atomic and returns the contained value.
1728 ///
1729 /// This is safe because passing `self` by value guarantees that no other threads are
1730 /// concurrently accessing the atomic data.
1731 ///
1732 /// # Examples
1733 ///
1734 /// ```
1735 /// use std::sync::atomic::AtomicPtr;
1736 ///
1737 /// let mut data = 5;
1738 /// let atomic_ptr = AtomicPtr::new(&mut data);
1739 /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1740 /// ```
1741 #[inline]
1742 #[stable(feature = "atomic_access", since = "1.15.0")]
1743 #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
1744 pub const fn into_inner(self) -> *mut T {
1745 self.p.into_inner()
1746 }
1747
1748 /// Loads a value from the pointer.
1749 ///
1750 /// `load` takes an [`Ordering`] argument which describes the memory ordering
1751 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1752 ///
1753 /// # Panics
1754 ///
1755 /// Panics if `order` is [`Release`] or [`AcqRel`].
1756 ///
1757 /// # Examples
1758 ///
1759 /// ```
1760 /// use std::sync::atomic::{AtomicPtr, Ordering};
1761 ///
1762 /// let ptr = &mut 5;
1763 /// let some_ptr = AtomicPtr::new(ptr);
1764 ///
1765 /// let value = some_ptr.load(Ordering::Relaxed);
1766 /// ```
1767 #[inline]
1768 #[stable(feature = "rust1", since = "1.0.0")]
1769 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1770 pub fn load(&self, order: Ordering) -> *mut T {
1771 // SAFETY: data races are prevented by atomic intrinsics.
1772 unsafe { atomic_load(self.p.get(), order) }
1773 }
1774
1775 /// Stores a value into the pointer.
1776 ///
1777 /// `store` takes an [`Ordering`] argument which describes the memory ordering
1778 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1779 ///
1780 /// # Panics
1781 ///
1782 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1783 ///
1784 /// # Examples
1785 ///
1786 /// ```
1787 /// use std::sync::atomic::{AtomicPtr, Ordering};
1788 ///
1789 /// let ptr = &mut 5;
1790 /// let some_ptr = AtomicPtr::new(ptr);
1791 ///
1792 /// let other_ptr = &mut 10;
1793 ///
1794 /// some_ptr.store(other_ptr, Ordering::Relaxed);
1795 /// ```
1796 #[inline]
1797 #[stable(feature = "rust1", since = "1.0.0")]
1798 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1799 #[rustc_should_not_be_called_on_const_items]
1800 pub fn store(&self, ptr: *mut T, order: Ordering) {
1801 // SAFETY: data races are prevented by atomic intrinsics.
1802 unsafe {
1803 atomic_store(self.p.get(), ptr, order);
1804 }
1805 }
1806
1807 /// Stores a value into the pointer, returning the previous value.
1808 ///
1809 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1810 /// of this operation. All ordering modes are possible. Note that using
1811 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1812 /// using [`Release`] makes the load part [`Relaxed`].
1813 ///
1814 /// **Note:** This method is only available on platforms that support atomic
1815 /// operations on pointers.
1816 ///
1817 /// # Examples
1818 ///
1819 /// ```
1820 /// use std::sync::atomic::{AtomicPtr, Ordering};
1821 ///
1822 /// let ptr = &mut 5;
1823 /// let some_ptr = AtomicPtr::new(ptr);
1824 ///
1825 /// let other_ptr = &mut 10;
1826 ///
1827 /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1828 /// ```
1829 #[inline]
1830 #[stable(feature = "rust1", since = "1.0.0")]
1831 #[cfg(target_has_atomic = "ptr")]
1832 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1833 #[rustc_should_not_be_called_on_const_items]
1834 pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1835 // SAFETY: data races are prevented by atomic intrinsics.
1836 unsafe { atomic_swap(self.p.get(), ptr, order) }
1837 }
1838
1839 /// Stores a value into the pointer if the current value is the same as the `current` value.
1840 ///
1841 /// The return value is always the previous value. If it is equal to `current`, then the value
1842 /// was updated.
1843 ///
1844 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1845 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1846 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1847 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1848 /// happens, and using [`Release`] makes the load part [`Relaxed`].
1849 ///
1850 /// **Note:** This method is only available on platforms that support atomic
1851 /// operations on pointers.
1852 ///
1853 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1854 ///
1855 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1856 /// memory orderings:
1857 ///
1858 /// Original | Success | Failure
1859 /// -------- | ------- | -------
1860 /// Relaxed | Relaxed | Relaxed
1861 /// Acquire | Acquire | Acquire
1862 /// Release | Release | Relaxed
1863 /// AcqRel | AcqRel | Acquire
1864 /// SeqCst | SeqCst | SeqCst
1865 ///
1866 /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
1867 /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
1868 /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
1869 /// rather than to infer success vs failure based on the value that was read.
1870 ///
1871 /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
1872 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1873 /// which allows the compiler to generate better assembly code when the compare and swap
1874 /// is used in a loop.
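///
/// As an illustrative sketch of the table above (assuming the [`AcqRel`] row;
/// not a prescribed migration recipe), a `compare_and_swap` call would become:
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let a = AtomicPtr::new(&mut 5);
/// let current = a.load(Ordering::Relaxed);
/// let new: *mut i32 = std::ptr::null_mut();
///
/// // Before: `let prev = a.compare_and_swap(current, new, Ordering::AcqRel);`
/// // After: `AcqRel` maps to success = `AcqRel` / failure = `Acquire`, and
/// // `unwrap_or_else(|x| x)` recovers the old return-the-previous-value shape.
/// let prev = a.compare_exchange(current, new, Ordering::AcqRel, Ordering::Acquire)
///     .unwrap_or_else(|x| x);
/// assert_eq!(prev, current);
/// ```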
1875 ///
1876 /// # Examples
1877 ///
1878 /// ```
1879 /// use std::sync::atomic::{AtomicPtr, Ordering};
1880 ///
1881 /// let ptr = &mut 5;
1882 /// let some_ptr = AtomicPtr::new(ptr);
1883 ///
1884 /// let other_ptr = &mut 10;
1885 ///
1886 /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1887 /// ```
1888 #[inline]
1889 #[stable(feature = "rust1", since = "1.0.0")]
1890 #[deprecated(
1891 since = "1.50.0",
1892 note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1893 )]
1894 #[cfg(target_has_atomic = "ptr")]
1895 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1896 #[rustc_should_not_be_called_on_const_items]
1897 pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
1898 match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1899 Ok(x) => x,
1900 Err(x) => x,
1901 }
1902 }
1903
1904 /// Stores a value into the pointer if the current value is the same as the `current` value.
1905 ///
1906 /// The return value is a result indicating whether the new value was written and containing
1907 /// the previous value. On success this value is guaranteed to be equal to `current`.
1908 ///
1909 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1910 /// ordering of this operation. `success` describes the required ordering for the
1911 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1912 /// `failure` describes the required ordering for the load operation that takes place when
1913 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1914 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1915 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1916 ///
1917 /// **Note:** This method is only available on platforms that support atomic
1918 /// operations on pointers.
1919 ///
1920 /// # Examples
1921 ///
1922 /// ```
1923 /// use std::sync::atomic::{AtomicPtr, Ordering};
1924 ///
1925 /// let ptr = &mut 5;
1926 /// let some_ptr = AtomicPtr::new(ptr);
1927 ///
1928 /// let other_ptr = &mut 10;
1929 ///
1930 /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1931 /// Ordering::SeqCst, Ordering::Relaxed);
1932 /// ```
1933 ///
1934 /// # Considerations
1935 ///
1936 /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
1937 /// of CAS operations. In particular, a load of the value followed by a successful
1938 /// `compare_exchange` with the previous load *does not ensure* that other threads have not
1939 /// changed the value in the interim. This is usually important when the *equality* check in
1940 /// the `compare_exchange` is being used to check the *identity* of a value, but equality
1941 /// does not necessarily imply identity. This is a particularly common case for pointers, as
1942 /// a pointer holding the same address does not imply that the same object exists at that
1943 /// address! In this case, `compare_exchange` can lead to the [ABA problem].
1944 ///
1945 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1946 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
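///
/// For illustration, a contrived single-threaded sketch of that pointer ABA
/// hazard (whether the address actually gets reused depends on the allocator):
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let a: *mut i32 = Box::into_raw(Box::new(1));
/// let atom = AtomicPtr::new(a);
///
/// // A caller observes the current pointer...
/// let observed = atom.load(Ordering::Relaxed);
///
/// // ...meanwhile the object is freed, and a fresh allocation happens to
/// // reuse the same address; the atomic now names a *different* object.
/// drop(unsafe { Box::from_raw(a) });
/// let b: *mut i32 = Box::into_raw(Box::new(2));
/// if b.addr() == a.addr() {
///     atom.store(b, Ordering::Relaxed);
///
///     // The CAS succeeds anyway: it compares addresses, not object identity.
///     assert!(atom
///         .compare_exchange(observed, b, Ordering::Relaxed, Ordering::Relaxed)
///         .is_ok());
/// }
/// drop(unsafe { Box::from_raw(b) });
/// ```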
1947 #[inline]
1948 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1949 #[cfg(target_has_atomic = "ptr")]
1950 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1951 #[rustc_should_not_be_called_on_const_items]
1952 pub fn compare_exchange(
1953 &self,
1954 current: *mut T,
1955 new: *mut T,
1956 success: Ordering,
1957 failure: Ordering,
1958 ) -> Result<*mut T, *mut T> {
1959 // SAFETY: data races are prevented by atomic intrinsics.
1960 unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
1961 }
1962
1963 /// Stores a value into the pointer if the current value is the same as the `current` value.
1964 ///
1965 /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1966 /// comparison succeeds, which can result in more efficient code on some platforms. The
1967 /// return value is a result indicating whether the new value was written and containing the
1968 /// previous value.
1969 ///
1970 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1971 /// ordering of this operation. `success` describes the required ordering for the
1972 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1973 /// `failure` describes the required ordering for the load operation that takes place when
1974 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1975 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1976 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1977 ///
1978 /// **Note:** This method is only available on platforms that support atomic
1979 /// operations on pointers.
1980 ///
1981 /// # Examples
1982 ///
1983 /// ```
1984 /// use std::sync::atomic::{AtomicPtr, Ordering};
1985 ///
1986 /// let some_ptr = AtomicPtr::new(&mut 5);
1987 ///
1988 /// let new = &mut 10;
1989 /// let mut old = some_ptr.load(Ordering::Relaxed);
1990 /// loop {
1991 /// match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1992 /// Ok(_) => break,
1993 /// Err(x) => old = x,
1994 /// }
1995 /// }
1996 /// ```
1997 ///
1998 /// # Considerations
1999 ///
2000 /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual
2001 /// downsides of CAS operations. In particular, a load of the value followed by a successful
2002 /// `compare_exchange_weak` with the previous load *does not ensure* that other threads have not
2003 /// changed the value in the interim. This is usually important when the *equality* check in
2004 /// the `compare_exchange_weak` is being used to check the *identity* of a value, but equality
2005 /// does not necessarily imply identity. This is a particularly common case for pointers, as
2006 /// a pointer holding the same address does not imply that the same object exists at that
2007 /// address! In this case, `compare_exchange_weak` can lead to the [ABA problem].
2008 ///
2009 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2010 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2011 #[inline]
2012 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
2013 #[cfg(target_has_atomic = "ptr")]
2014 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2015 #[rustc_should_not_be_called_on_const_items]
2016 pub fn compare_exchange_weak(
2017 &self,
2018 current: *mut T,
2019 new: *mut T,
2020 success: Ordering,
2021 failure: Ordering,
2022 ) -> Result<*mut T, *mut T> {
2023 // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
2024 // but we know for sure that the pointer is valid (we just got it from
2025 // an `UnsafeCell` that we have by reference) and the atomic operation
2026 // itself allows us to safely mutate the `UnsafeCell` contents.
2027 unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
2028 }
2029
2030 /// Fetches the value, and applies a function to it that returns an optional
2031 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2032 /// returned `Some(_)`, else `Err(previous_value)`.
2033 ///
2034 /// Note: This may call the function multiple times if the value has been
2035 /// changed by other threads in the meantime, as long as the function
2036 /// returns `Some(_)`, but the function will have been applied only once to
2037 /// the stored value.
2038 ///
2039 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
2040 /// ordering of this operation. The first describes the required ordering for
2041 /// when the operation finally succeeds while the second describes the
2042 /// required ordering for loads. These correspond to the success and failure
2043 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2044 ///
2045 /// Using [`Acquire`] as success ordering makes the store part of this
2046 /// operation [`Relaxed`], and using [`Release`] makes the final successful
2047 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2048 /// [`Acquire`] or [`Relaxed`].
2049 ///
2050 /// **Note:** This method is only available on platforms that support atomic
2051 /// operations on pointers.
2052 ///
2053 /// # Considerations
2054 ///
2055 /// This method is not magic; it is not provided by the hardware, and does not act like a
2056 /// critical section or mutex.
2057 ///
2058 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2059 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2060 /// which is a particularly common pitfall for pointers!
2061 ///
2062 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2063 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2064 ///
2065 /// # Examples
2066 ///
2067 /// ```rust
2068 /// use std::sync::atomic::{AtomicPtr, Ordering};
2069 ///
2070 /// let ptr: *mut _ = &mut 5;
2071 /// let some_ptr = AtomicPtr::new(ptr);
2072 ///
2073 /// let new: *mut _ = &mut 10;
2074 /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2075 /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2076 /// if x == ptr {
2077 /// Some(new)
2078 /// } else {
2079 /// None
2080 /// }
2081 /// });
2082 /// assert_eq!(result, Ok(ptr));
2083 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2084 /// ```
2085 #[inline]
2086 #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
2087 #[cfg(target_has_atomic = "ptr")]
2088 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2089 #[rustc_should_not_be_called_on_const_items]
2090 pub fn fetch_update<F>(
2091 &self,
2092 set_order: Ordering,
2093 fetch_order: Ordering,
2094 mut f: F,
2095 ) -> Result<*mut T, *mut T>
2096 where
2097 F: FnMut(*mut T) -> Option<*mut T>,
2098 {
2099 let mut prev = self.load(fetch_order);
2100 while let Some(next) = f(prev) {
2101 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2102 x @ Ok(_) => return x,
2103 Err(next_prev) => prev = next_prev,
2104 }
2105 }
2106 Err(prev)
2107 }

2108 /// Fetches the value, and applies a function to it that returns an optional
2109 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2110 /// returned `Some(_)`, else `Err(previous_value)`.
2111 ///
2112 /// See also: [`update`](`AtomicPtr::update`).
2113 ///
2114 /// Note: This may call the function multiple times if the value has been
2115 /// changed by other threads in the meantime, as long as the function
2116 /// returns `Some(_)`, but the function will have been applied only once to
2117 /// the stored value.
2118 ///
2119 /// `try_update` takes two [`Ordering`] arguments to describe the memory
2120 /// ordering of this operation. The first describes the required ordering for
2121 /// when the operation finally succeeds while the second describes the
2122 /// required ordering for loads. These correspond to the success and failure
2123 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2124 ///
2125 /// Using [`Acquire`] as success ordering makes the store part of this
2126 /// operation [`Relaxed`], and using [`Release`] makes the final successful
2127 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2128 /// [`Acquire`] or [`Relaxed`].
2129 ///
2130 /// **Note:** This method is only available on platforms that support atomic
2131 /// operations on pointers.
2132 ///
2133 /// # Considerations
2134 ///
2135 /// This method is not magic; it is not provided by the hardware, and does not act like a
2136 /// critical section or mutex.
2137 ///
2138 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2139 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2140 /// which is a particularly common pitfall for pointers!
2141 ///
2142 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2143 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2144 ///
2145 /// # Examples
2146 ///
2147 /// ```rust
2148 /// #![feature(atomic_try_update)]
2149 /// use std::sync::atomic::{AtomicPtr, Ordering};
2150 ///
2151 /// let ptr: *mut _ = &mut 5;
2152 /// let some_ptr = AtomicPtr::new(ptr);
2153 ///
2154 /// let new: *mut _ = &mut 10;
2155 /// assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2156 /// let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2157 /// if x == ptr {
2158 /// Some(new)
2159 /// } else {
2160 /// None
2161 /// }
2162 /// });
2163 /// assert_eq!(result, Ok(ptr));
2164 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2165 /// ```
2166 #[inline]
2167 #[unstable(feature = "atomic_try_update", issue = "135894")]
2168 #[cfg(target_has_atomic = "ptr")]
2169 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2170 #[rustc_should_not_be_called_on_const_items]
2171 pub fn try_update(
2172 &self,
2173 set_order: Ordering,
2174 fetch_order: Ordering,
2175 f: impl FnMut(*mut T) -> Option<*mut T>,
2176 ) -> Result<*mut T, *mut T> {
2177 // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
2178 // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
2179 self.fetch_update(set_order, fetch_order, f)
2180 }
2181
2182 /// Fetches the value, and applies a function to it that returns a new value.
2183 /// The new value is stored and the old value is returned.
2184 ///
2185 /// See also: [`try_update`](`AtomicPtr::try_update`).
2186 ///
2187 /// Note: This may call the function multiple times if the value has been changed by other threads in
2188 /// the meantime, but the function will have been applied only once to the stored value.
2189 ///
2190 /// `update` takes two [`Ordering`] arguments to describe the memory
2191 /// ordering of this operation. The first describes the required ordering for
2192 /// when the operation finally succeeds while the second describes the
2193 /// required ordering for loads. These correspond to the success and failure
2194 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2195 ///
2196 /// Using [`Acquire`] as success ordering makes the store part
2197 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
2198 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2199 ///
2200 /// **Note:** This method is only available on platforms that support atomic
2201 /// operations on pointers.
2202 ///
2203 /// # Considerations
2204 ///
2205 /// This method is not magic; it is not provided by the hardware, and does not act like a
2206 /// critical section or mutex.
2207 ///
2208 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2209 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2210 /// which is a particularly common pitfall for pointers!
2211 ///
2212 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2213 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2214 ///
2215 /// # Examples
2216 ///
2217 /// ```rust
2218 /// #![feature(atomic_try_update)]
2219 ///
2220 /// use std::sync::atomic::{AtomicPtr, Ordering};
2221 ///
2222 /// let ptr: *mut _ = &mut 5;
2223 /// let some_ptr = AtomicPtr::new(ptr);
2224 ///
2225 /// let new: *mut _ = &mut 10;
2226 /// let result = some_ptr.update(Ordering::SeqCst, Ordering::SeqCst, |_| new);
2227 /// assert_eq!(result, ptr);
2228 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2229 /// ```
2230 #[inline]
2231 #[unstable(feature = "atomic_try_update", issue = "135894")]
2232 #[cfg(target_has_atomic = "ptr")]
2233 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2234 #[rustc_should_not_be_called_on_const_items]
2235 pub fn update(
2236 &self,
2237 set_order: Ordering,
2238 fetch_order: Ordering,
2239 mut f: impl FnMut(*mut T) -> *mut T,
2240 ) -> *mut T {
2241 let mut prev = self.load(fetch_order);
2242 loop {
2243 match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
2244 Ok(x) => break x,
2245 Err(next_prev) => prev = next_prev,
2246 }
2247 }
2248 }
2249
2250 /// Offsets the pointer's address by adding `val` (in units of `T`),
2251 /// returning the previous pointer.
2252 ///
2253 /// This is equivalent to using [`wrapping_add`] to atomically perform the
2254 /// equivalent of `ptr = ptr.wrapping_add(val);`.
2255 ///
2256 /// This method operates in units of `T`, which means that it cannot be used
2257 /// to offset the pointer by an amount which is not a multiple of
2258 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2259 /// work with a deliberately misaligned pointer. In such cases, you may use
2260 /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2261 ///
2262 /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2263 /// memory ordering of this operation. All ordering modes are possible. Note
2264 /// that using [`Acquire`] makes the store part of this operation
2265 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2266 ///
2267 /// **Note**: This method is only available on platforms that support atomic
2268 /// operations on [`AtomicPtr`].
2269 ///
2270 /// [`wrapping_add`]: pointer::wrapping_add
2271 ///
2272 /// # Examples
2273 ///
2274 /// ```
2275 /// use core::sync::atomic::{AtomicPtr, Ordering};
2276 ///
2277 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2278 /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2279 /// // Note: units of `size_of::<i64>()`.
2280 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2281 /// ```
2282 #[inline]
2283 #[cfg(target_has_atomic = "ptr")]
2284 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2285 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2286 #[rustc_should_not_be_called_on_const_items]
2287 pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2288 self.fetch_byte_add(val.wrapping_mul(size_of::<T>()), order)
2289 }
2290
2291 /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2292 /// returning the previous pointer.
2293 ///
2294 /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2295 /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2296 ///
2297 /// This method operates in units of `T`, which means that it cannot be used
2298 /// to offset the pointer by an amount which is not a multiple of
2299 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2300 /// work with a deliberately misaligned pointer. In such cases, you may use
2301 /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2302 ///
2303 /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2304 /// ordering of this operation. All ordering modes are possible. Note that
2305 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2306 /// and using [`Release`] makes the load part [`Relaxed`].
2307 ///
2308 /// **Note**: This method is only available on platforms that support atomic
2309 /// operations on [`AtomicPtr`].
2310 ///
2311 /// [`wrapping_sub`]: pointer::wrapping_sub
2312 ///
2313 /// # Examples
2314 ///
2315 /// ```
2316 /// use core::sync::atomic::{AtomicPtr, Ordering};
2317 ///
2318 /// let array = [1i32, 2i32];
2319 /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2320 ///
2321 /// assert!(core::ptr::eq(
2322 /// atom.fetch_ptr_sub(1, Ordering::Relaxed),
2323 /// &array[1],
2324 /// ));
2325 /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2326 /// ```
2327 #[inline]
2328 #[cfg(target_has_atomic = "ptr")]
2329 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2330 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2331 #[rustc_should_not_be_called_on_const_items]
2332 pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2333 self.fetch_byte_sub(val.wrapping_mul(size_of::<T>()), order)
2334 }
2335
2336 /// Offsets the pointer's address by adding `val` *bytes*, returning the
2337 /// previous pointer.
2338 ///
2339 /// This is equivalent to using [`wrapping_byte_add`] to atomically
2340 /// perform `ptr = ptr.wrapping_byte_add(val)`.
2341 ///
2342 /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2343 /// memory ordering of this operation. All ordering modes are possible. Note
2344 /// that using [`Acquire`] makes the store part of this operation
2345 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2346 ///
2347 /// **Note**: This method is only available on platforms that support atomic
2348 /// operations on [`AtomicPtr`].
2349 ///
2350 /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
2351 ///
2352 /// # Examples
2353 ///
2354 /// ```
2355 /// use core::sync::atomic::{AtomicPtr, Ordering};
2356 ///
2357 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2358 /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2359 /// // Note: in units of bytes, not `size_of::<i64>()`.
2360 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2361 /// ```
2362 #[inline]
2363 #[cfg(target_has_atomic = "ptr")]
2364 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2365 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2366 #[rustc_should_not_be_called_on_const_items]
2367 pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2368 // SAFETY: data races are prevented by atomic intrinsics.
2369 unsafe { atomic_add(self.p.get(), val, order).cast() }
2370 }
2371
2372 /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2373 /// previous pointer.
2374 ///
2375 /// This is equivalent to using [`wrapping_byte_sub`] to atomically
2376 /// perform `ptr = ptr.wrapping_byte_sub(val)`.
2377 ///
2378 /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2379 /// memory ordering of this operation. All ordering modes are possible. Note
2380 /// that using [`Acquire`] makes the store part of this operation
2381 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2382 ///
2383 /// **Note**: This method is only available on platforms that support atomic
2384 /// operations on [`AtomicPtr`].
2385 ///
2386 /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
2387 ///
2388 /// # Examples
2389 ///
2390 /// ```
2391 /// use core::sync::atomic::{AtomicPtr, Ordering};
2392 ///
2393 /// let mut arr = [0i64, 1];
2394 /// let atom = AtomicPtr::<i64>::new(&raw mut arr[1]);
2395 /// assert_eq!(atom.fetch_byte_sub(8, Ordering::Relaxed).addr(), (&raw const arr[1]).addr());
2396 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), (&raw const arr[0]).addr());
2397 /// ```
2398 #[inline]
2399 #[cfg(target_has_atomic = "ptr")]
2400 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2401 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2402 #[rustc_should_not_be_called_on_const_items]
2403 pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2404 // SAFETY: data races are prevented by atomic intrinsics.
2405 unsafe { atomic_sub(self.p.get(), val, order).cast() }
2406 }
2407
2408 /// Performs a bitwise "or" operation on the address of the current pointer,
2409 /// and the argument `val`, and stores a pointer with provenance of the
2410 /// current pointer and the resulting address.
2411 ///
2412 /// This is equivalent to using [`map_addr`] to atomically perform
2413 /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2414 /// pointer schemes to atomically set tag bits.
2415 ///
2416 /// **Caveat**: This operation returns the previous value. To compute the
2417 /// stored value without losing provenance, you may use [`map_addr`]. For
2418 /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2419 ///
2420 /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2421 /// ordering of this operation. All ordering modes are possible. Note that
2422 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2423 /// and using [`Release`] makes the load part [`Relaxed`].
2424 ///
2425 /// **Note**: This method is only available on platforms that support atomic
2426 /// operations on [`AtomicPtr`].
2427 ///
2428 /// This API and its claimed semantics are part of the Strict Provenance
2429 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2430 /// details.
2431 ///
2432 /// [`map_addr`]: pointer::map_addr
2433 ///
2434 /// # Examples
2435 ///
2436 /// ```
2437 /// use core::sync::atomic::{AtomicPtr, Ordering};
2438 ///
2439 /// let pointer = &mut 3i64 as *mut i64;
2440 ///
2441 /// let atom = AtomicPtr::<i64>::new(pointer);
2442 /// // Tag the bottom bit of the pointer.
2443 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2444 /// // Extract and untag.
2445 /// let tagged = atom.load(Ordering::Relaxed);
2446 /// assert_eq!(tagged.addr() & 1, 1);
2447 /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2448 /// ```
2449 #[inline]
2450 #[cfg(target_has_atomic = "ptr")]
2451 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2452 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2453 #[rustc_should_not_be_called_on_const_items]
2454 pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2455 // SAFETY: data races are prevented by atomic intrinsics.
2456 unsafe { atomic_or(self.p.get(), val, order).cast() }
2457 }
2458
2459 /// Performs a bitwise "and" operation on the address of the current
2460 /// pointer, and the argument `val`, and stores a pointer with provenance of
2461 /// the current pointer and the resulting address.
2462 ///
2463 /// This is equivalent to using [`map_addr`] to atomically perform
2464 /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2465 /// pointer schemes to atomically unset tag bits.
2466 ///
2467 /// **Caveat**: This operation returns the previous value. To compute the
2468 /// stored value without losing provenance, you may use [`map_addr`]. For
2469 /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2470 ///
2471 /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2472 /// ordering of this operation. All ordering modes are possible. Note that
2473 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2474 /// and using [`Release`] makes the load part [`Relaxed`].
2475 ///
2476 /// **Note**: This method is only available on platforms that support atomic
2477 /// operations on [`AtomicPtr`].
2478 ///
2479 /// This API and its claimed semantics are part of the Strict Provenance
2480 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2481 /// details.
2482 ///
2483 /// [`map_addr`]: pointer::map_addr
2484 ///
2485 /// # Examples
2486 ///
2487 /// ```
2488 /// use core::sync::atomic::{AtomicPtr, Ordering};
2489 ///
2490 /// let pointer = &mut 3i64 as *mut i64;
2491 /// // A tagged pointer
2492 /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2493 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2494 /// // Untag, and extract the previously tagged pointer.
2495 /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
2496 /// .map_addr(|a| a & !1);
2497 /// assert_eq!(untagged, pointer);
2498 /// ```
2499 #[inline]
2500 #[cfg(target_has_atomic = "ptr")]
2501 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2502 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2503 #[rustc_should_not_be_called_on_const_items]
2504 pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2505 // SAFETY: data races are prevented by atomic intrinsics.
2506 unsafe { atomic_and(self.p.get(), val, order).cast() }
2507 }
2508
2509 /// Performs a bitwise "xor" operation on the address of the current
2510 /// pointer, and the argument `val`, and stores a pointer with provenance of
2511 /// the current pointer and the resulting address.
2512 ///
2513 /// This is equivalent to using [`map_addr`] to atomically perform
2514 /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2515 /// pointer schemes to atomically toggle tag bits.
2516 ///
2517 /// **Caveat**: This operation returns the previous value. To compute the
2518 /// stored value without losing provenance, you may use [`map_addr`]. For
2519 /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2520 ///
2521 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2522 /// ordering of this operation. All ordering modes are possible. Note that
2523 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2524 /// and using [`Release`] makes the load part [`Relaxed`].
2525 ///
2526 /// **Note**: This method is only available on platforms that support atomic
2527 /// operations on [`AtomicPtr`].
2528 ///
2529 /// This API and its claimed semantics are part of the Strict Provenance
2530 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2531 /// details.
2532 ///
2533 /// [`map_addr`]: pointer::map_addr
2534 ///
2535 /// # Examples
2536 ///
2537 /// ```
2538 /// use core::sync::atomic::{AtomicPtr, Ordering};
2539 ///
2540 /// let pointer = &mut 3i64 as *mut i64;
2541 /// let atom = AtomicPtr::<i64>::new(pointer);
2542 ///
2543 /// // Toggle a tag bit on the pointer.
2544 /// atom.fetch_xor(1, Ordering::Relaxed);
2545 /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2546 /// ```
2547 #[inline]
2548 #[cfg(target_has_atomic = "ptr")]
2549 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2550 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2551 #[rustc_should_not_be_called_on_const_items]
2552 pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2553 // SAFETY: data races are prevented by atomic intrinsics.
2554 unsafe { atomic_xor(self.p.get(), val, order).cast() }
2555 }
2556
2557 /// Returns a mutable pointer to the underlying pointer.
2558 ///
2559 /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
2560 /// This method is mostly useful for FFI, where the function signature may use
2561 /// `*mut *mut T` instead of `&AtomicPtr<T>`.
2562 ///
2563 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2564 /// atomic types work with interior mutability. All modifications of an atomic change the value
2565 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2566 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
2567 /// requirements of the [memory model].
2568 ///
2569 /// # Examples
2570 ///
2571 /// ```ignore (extern-declaration)
2572 /// use std::sync::atomic::AtomicPtr;
2573 ///
2574 /// extern "C" {
2575 /// fn my_atomic_op(arg: *mut *mut u32);
2576 /// }
2577 ///
2578 /// let mut value = 17;
2579 /// let atomic = AtomicPtr::new(&mut value);
2580 ///
2581 /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2582 /// unsafe {
2583 /// my_atomic_op(atomic.as_ptr());
2584 /// }
2585 /// ```
2586 ///
2587 /// [memory model]: self#memory-model-for-atomic-accesses
2588 #[inline]
2589 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2590 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2591 #[rustc_never_returns_null_ptr]
2592 pub const fn as_ptr(&self) -> *mut *mut T {
2593 self.p.get()
2594 }
2595}
2596
2597#[cfg(target_has_atomic_load_store = "8")]
2598#[stable(feature = "atomic_bool_from", since = "1.24.0")]
2599#[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2600#[cfg(not(feature = "ferrocene_subset"))]
2601impl const From<bool> for AtomicBool {
2602 /// Converts a `bool` into an `AtomicBool`.
2603 ///
2604 /// # Examples
2605 ///
2606 /// ```
2607 /// use std::sync::atomic::AtomicBool;
2608 /// let atomic_bool = AtomicBool::from(true);
2609 /// assert_eq!(format!("{atomic_bool:?}"), "true")
2610 /// ```
2611 #[inline]
2612 fn from(b: bool) -> Self {
2613 Self::new(b)
2614 }
2615}
2616
2617#[cfg(target_has_atomic_load_store = "ptr")]
2618#[stable(feature = "atomic_from", since = "1.23.0")]
2619#[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2620#[cfg(not(feature = "ferrocene_subset"))]
2621impl<T> const From<*mut T> for AtomicPtr<T> {
2622 /// Converts a `*mut T` into an `AtomicPtr<T>`.
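///
/// # Examples
///
/// A minimal illustrative sketch of the conversion:
///
/// ```
/// use std::sync::atomic::AtomicPtr;
///
/// let mut value = 5;
/// let ptr: *mut i32 = &mut value;
/// let atomic_ptr = AtomicPtr::from(ptr);
/// assert_eq!(atomic_ptr.into_inner(), ptr);
/// ```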
2623 #[inline]
2624 fn from(p: *mut T) -> Self {
2625 Self::new(p)
2626 }
2627}
2628
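// Expands to the `yes = [...]` tokens for `u8`/`i8` (whose atomic wrappers
// always share the underlying type's alignment) and to the `no = [...]`
// tokens for every other width; `atomic_int!` below uses this to tailor the
// generated documentation.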
2629#[allow(unused_macros)] // This macro ends up being unused on some architectures.
2630macro_rules! if_8_bit {
2631 (u8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2632 (i8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2633 ($_:ident, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($no)*)?) };
2634}
2635
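// Generates an atomic integer type (`AtomicU8`, `AtomicIsize`, ...) together
// with its trait impls and inherent methods. The positional metas supply, in
// order: the CAS and equal-alignment cfg gates, stability attributes for the
// type and its method groups, const-stability for `new` and `into_inner`,
// the diagnostic item, the integer's name as a string, extra doctest feature
// lines, the min/max intrinsic names, the required alignment, and the
// integer and atomic type names.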
2636#[cfg(target_has_atomic_load_store)]
2637macro_rules! atomic_int {
2638 ($cfg_cas:meta,
2639 $cfg_align:meta,
2640 $stable:meta,
2641 $stable_cxchg:meta,
2642 $stable_debug:meta,
2643 $stable_access:meta,
2644 $stable_from:meta,
2645 $stable_nand:meta,
2646 $const_stable_new:meta,
2647 $const_stable_into_inner:meta,
2648 $diagnostic_item:meta,
2649 $s_int_type:literal,
2650 $extra_feature:expr,
2651 $min_fn:ident, $max_fn:ident,
2652 $align:expr,
2653 $int_type:ident $atomic_type:ident) => {
2654 /// An integer type which can be safely shared between threads.
2655 ///
2656 /// This type has the same
2657 #[doc = if_8_bit!(
2658 $int_type,
2659 yes = ["size, alignment, and bit validity"],
2660 no = ["size and bit validity"],
2661 )]
2662 /// as the underlying integer type, [`
2663 #[doc = $s_int_type]
2664 /// `].
2665 #[doc = if_8_bit! {
2666 $int_type,
2667 no = [
2668 "However, the alignment of this type is always equal to its ",
2669 "size, even on targets where [`", $s_int_type, "`] has a ",
2670 "lesser alignment."
2671 ],
2672 }]
2673 ///
2674 /// For more about the differences between atomic types and
2675 /// non-atomic types as well as information about the portability of
2676 /// this type, please see the [module-level documentation].
2677 ///
2678 /// **Note:** This type is only available on platforms that support
2679 /// atomic loads and stores of [`
2680 #[doc = $s_int_type]
2681 /// `].
2682 ///
2683 /// [module-level documentation]: crate::sync::atomic
2684 #[$stable]
2685 #[$diagnostic_item]
2686 #[repr(C, align($align))]
2687 pub struct $atomic_type {
2688 v: UnsafeCell<$int_type>,
2689 }
2690
2691 #[$stable]
2692 impl Default for $atomic_type {
2693 #[inline]
2694 fn default() -> Self {
2695 Self::new(Default::default())
2696 }
2697 }
2698
2699 #[$stable_from]
2700 #[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2701 impl const From<$int_type> for $atomic_type {
2702 #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
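            ///
            /// # Examples
            ///
            /// A round trip through the conversion:
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
            ///
            #[doc = concat!("let atomic = ", stringify!($atomic_type), "::from(23);")]
            /// assert_eq!(atomic.into_inner(), 23);
            /// ```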
2703 #[inline]
2704 fn from(v: $int_type) -> Self { Self::new(v) }
2705 }
2706
2707 #[$stable_debug]
2708 #[cfg(not(feature = "ferrocene_subset"))]
2709 impl fmt::Debug for $atomic_type {
2710 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2711 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2712 }
2713 }
2714
2715 // Send is implicitly implemented.
2716 #[$stable]
2717 unsafe impl Sync for $atomic_type {}
2718
2719 impl $atomic_type {
2720 /// Creates a new atomic integer.
2721 ///
2722 /// # Examples
2723 ///
2724 /// ```
2725 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2726 ///
2727 #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2728 /// ```
2729 #[inline]
2730 #[$stable]
2731 #[$const_stable_new]
2732 #[must_use]
2733 pub const fn new(v: $int_type) -> Self {
2734 Self {v: UnsafeCell::new(v)}
2735 }
2736
2737 /// Creates a new reference to an atomic integer from a pointer.
2738 ///
2739 /// # Examples
2740 ///
2741 /// ```
2742 #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2743 ///
2744 /// // Get a pointer to an allocated value
2745 #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2746 ///
2747 #[doc = concat!("assert!(ptr.cast::<", stringify!($atomic_type), ">().is_aligned());")]
2748 ///
2749 /// {
2750 /// // Create an atomic view of the allocated value
2751 // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2752 #[doc = concat!(" let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2753 ///
2754 /// // Use `atomic` for atomic operations, possibly share it with other threads
2755 /// atomic.store(1, atomic::Ordering::Relaxed);
2756 /// }
2757 ///
2758 /// // It's ok to non-atomically access the value behind `ptr`,
2759 /// // since the reference to the atomic ended its lifetime in the block above
2760 /// assert_eq!(unsafe { *ptr }, 1);
2761 ///
2762 /// // Deallocate the value
2763 /// unsafe { drop(Box::from_raw(ptr)) }
2764 /// ```
2765 ///
2766 /// # Safety
2767 ///
2768 /// * `ptr` must be aligned to
2769 #[doc = concat!(" `align_of::<", stringify!($atomic_type), ">()`")]
2770 #[doc = if_8_bit!{
2771 $int_type,
2772 yes = [
2773 " (note that this is always true, since `align_of::<",
2774 stringify!($atomic_type), ">() == 1`)."
2775 ],
2776 no = [
2777 " (note that on some platforms this can be bigger than `align_of::<",
2778 stringify!($int_type), ">()`)."
2779 ],
2780 }]
2781 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2782 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
2783 /// allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
2784 /// sizes, without synchronization.
2785 ///
2786 /// [valid]: crate::ptr#safety
2787 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
2788 #[inline]
2789 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
2790 #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
2791 pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2792 // SAFETY: guaranteed by the caller
2793 unsafe { &*ptr.cast() }
2794 }
2795
2796
2797 /// Returns a mutable reference to the underlying integer.
2798 ///
2799 /// This is safe because the mutable reference guarantees that no other threads are
2800 /// concurrently accessing the atomic data.
2801 ///
2802 /// # Examples
2803 ///
2804 /// ```
2805 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2806 ///
2807 #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2808 /// assert_eq!(*some_var.get_mut(), 10);
2809 /// *some_var.get_mut() = 5;
2810 /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2811 /// ```
2812 #[inline]
2813 #[$stable_access]
2814 pub fn get_mut(&mut self) -> &mut $int_type {
2815 self.v.get_mut()
2816 }
2817
2818 #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2819 ///
2820 #[doc = if_8_bit! {
2821 $int_type,
2822 no = [
2823 "**Note:** This function is only available on targets where `",
2824 stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2825 ],
2826 }]
2827 ///
2828 /// # Examples
2829 ///
2830 /// ```
2831 /// #![feature(atomic_from_mut)]
2832 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2833 ///
2834 /// let mut some_int = 123;
2835 #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2836 /// a.store(100, Ordering::Relaxed);
2837 /// assert_eq!(some_int, 100);
2838 /// ```
2839 ///
2840 #[inline]
2841 #[$cfg_align]
2842 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2843 pub fn from_mut(v: &mut $int_type) -> &mut Self {
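                // Compile-time alignment check: the `[]` pattern only matches an
                // array of length 0, so this line fails to compile unless
                // `align_of::<Self>() == align_of::<$int_type>()`.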
2844 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2845 // SAFETY:
2846 // - the mutable reference guarantees unique ownership.
2847 // - the alignment of `$int_type` and `Self` is the
2848 // same, as promised by $cfg_align and verified above.
2849 unsafe { &mut *(v as *mut $int_type as *mut Self) }
2850 }
2851
            #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice.")]
2853 ///
2854 /// This is safe because the mutable reference guarantees that no other threads are
2855 /// concurrently accessing the atomic data.
2856 ///
2857 /// # Examples
2858 ///
2859 /// ```ignore-wasm
2860 /// #![feature(atomic_from_mut)]
2861 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2862 ///
2863 #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2864 ///
2865 #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2866 /// assert_eq!(view, [0; 10]);
2867 /// view
2868 /// .iter_mut()
2869 /// .enumerate()
2870 /// .for_each(|(idx, int)| *int = idx as _);
2871 ///
2872 /// std::thread::scope(|s| {
2873 /// some_ints
2874 /// .iter()
2875 /// .enumerate()
2876 /// .for_each(|(idx, int)| {
2877 /// s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2878 /// })
2879 /// });
2880 /// ```
2881 #[inline]
2882 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2883 pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2884 // SAFETY: the mutable reference guarantees unique ownership.
2885 unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2886 }
2887
2888 #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2889 ///
2890 #[doc = if_8_bit! {
2891 $int_type,
2892 no = [
2893 "**Note:** This function is only available on targets where `",
2894 stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2895 ],
2896 }]
2897 ///
2898 /// # Examples
2899 ///
2900 /// ```ignore-wasm
2901 /// #![feature(atomic_from_mut)]
2902 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2903 ///
2904 /// let mut some_ints = [0; 10];
2905 #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2906 /// std::thread::scope(|s| {
2907 /// for i in 0..a.len() {
2908 /// s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2909 /// }
2910 /// });
2911 /// for (i, n) in some_ints.into_iter().enumerate() {
2912 /// assert_eq!(i, n as usize);
2913 /// }
2914 /// ```
2915 #[inline]
2916 #[$cfg_align]
2917 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2918 pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
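                // Compile-time alignment check: the `[]` pattern only matches an
                // array of length 0, so this line fails to compile unless
                // `align_of::<Self>() == align_of::<$int_type>()`.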
2919 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2920 // SAFETY:
2921 // - the mutable reference guarantees unique ownership.
2922 // - the alignment of `$int_type` and `Self` is the
2923 // same, as promised by $cfg_align and verified above.
2924 unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2925 }
2926
2927 /// Consumes the atomic and returns the contained value.
2928 ///
2929 /// This is safe because passing `self` by value guarantees that no other threads are
2930 /// concurrently accessing the atomic data.
2931 ///
2932 /// # Examples
2933 ///
2934 /// ```
2935 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2936 ///
2937 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2938 /// assert_eq!(some_var.into_inner(), 5);
2939 /// ```
2940 #[inline]
2941 #[$stable_access]
2942 #[$const_stable_into_inner]
2943 pub const fn into_inner(self) -> $int_type {
2944 self.v.into_inner()
2945 }
2946
2947 /// Loads a value from the atomic integer.
2948 ///
2949 /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2950 /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2951 ///
2952 /// # Panics
2953 ///
2954 /// Panics if `order` is [`Release`] or [`AcqRel`].
2955 ///
2956 /// # Examples
2957 ///
2958 /// ```
2959 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2960 ///
2961 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2962 ///
2963 /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2964 /// ```
2965 #[inline]
2966 #[$stable]
2967 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2968 pub fn load(&self, order: Ordering) -> $int_type {
2969 // SAFETY: data races are prevented by atomic intrinsics.
2970 unsafe { atomic_load(self.v.get(), order) }
2971 }
2972
2973 /// Stores a value into the atomic integer.
2974 ///
2975 /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2976 /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2977 ///
2978 /// # Panics
2979 ///
2980 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2981 ///
2982 /// # Examples
2983 ///
2984 /// ```
2985 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2986 ///
2987 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2988 ///
2989 /// some_var.store(10, Ordering::Relaxed);
2990 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2991 /// ```
2992 #[inline]
2993 #[$stable]
2994 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2995 #[rustc_should_not_be_called_on_const_items]
2996 pub fn store(&self, val: $int_type, order: Ordering) {
2997 // SAFETY: data races are prevented by atomic intrinsics.
2998 unsafe { atomic_store(self.v.get(), val, order); }
2999 }
3000
3001 /// Stores a value into the atomic integer, returning the previous value.
3002 ///
3003 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
3004 /// of this operation. All ordering modes are possible. Note that using
3005 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3006 /// using [`Release`] makes the load part [`Relaxed`].
3007 ///
3008 /// **Note**: This method is only available on platforms that support atomic operations on
3009 #[doc = concat!("[`", $s_int_type, "`].")]
3010 ///
3011 /// # Examples
3012 ///
3013 /// ```
3014 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3015 ///
3016 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3017 ///
3018 /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
3019 /// ```
3020 #[inline]
3021 #[$stable]
3022 #[$cfg_cas]
3023 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3024 #[rustc_should_not_be_called_on_const_items]
3025 pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
3026 // SAFETY: data races are prevented by atomic intrinsics.
3027 unsafe { atomic_swap(self.v.get(), val, order) }
3028 }
3029
3030 /// Stores a value into the atomic integer if the current value is the same as
3031 /// the `current` value.
3032 ///
3033 /// The return value is always the previous value. If it is equal to `current`, then the
3034 /// value was updated.
3035 ///
3036 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
3037 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
3038 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
3039 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
3040 /// happens, and using [`Release`] makes the load part [`Relaxed`].
3041 ///
3042 /// **Note**: This method is only available on platforms that support atomic operations on
3043 #[doc = concat!("[`", $s_int_type, "`].")]
3044 ///
3045 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
3046 ///
3047 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
3048 /// memory orderings:
3049 ///
3050 /// Original | Success | Failure
3051 /// -------- | ------- | -------
3052 /// Relaxed | Relaxed | Relaxed
3053 /// Acquire | Acquire | Acquire
3054 /// Release | Release | Relaxed
3055 /// AcqRel | AcqRel | Acquire
3056 /// SeqCst | SeqCst | SeqCst
3057 ///
3058 /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
3059 /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
3060 /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
3061 /// rather than to infer success vs failure based on the value that was read.
3062 ///
3063 /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
3064 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
3065 /// which allows the compiler to generate better assembly code when the compare and swap
3066 /// is used in a loop.
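            ///
            /// As a sketch, a direct migration of a call that used [`AcqRel`] might look
            /// like this (per the table above, the failure ordering becomes [`Acquire`]):
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let a = ", stringify!($atomic_type), "::new(1);")]
            /// // Before: let prev = a.compare_and_swap(1, 2, Ordering::AcqRel);
            /// let prev = a.compare_exchange(1, 2, Ordering::AcqRel, Ordering::Acquire)
            ///     .unwrap_or_else(|x| x);
            /// assert_eq!(prev, 1);
            /// ```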
3067 ///
3068 /// # Examples
3069 ///
3070 /// ```
3071 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3072 ///
3073 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3074 ///
3075 /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
3076 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3077 ///
3078 /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
3079 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3080 /// ```
3081 #[inline]
3082 #[$stable]
3083 #[deprecated(
3084 since = "1.50.0",
3085 note = "Use `compare_exchange` or `compare_exchange_weak` instead")
3086 ]
3087 #[$cfg_cas]
3088 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3089 #[rustc_should_not_be_called_on_const_items]
3090 pub fn compare_and_swap(&self,
3091 current: $int_type,
3092 new: $int_type,
3093 order: Ordering) -> $int_type {
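                // Map the single `order` to a `compare_exchange` success/failure
                // pair, using the strongest failure ordering allowed for it.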
3094 match self.compare_exchange(current,
3095 new,
3096 order,
3097 strongest_failure_ordering(order)) {
3098 Ok(x) => x,
3099 Err(x) => x,
3100 }
3101 }
3102
3103 /// Stores a value into the atomic integer if the current value is the same as
3104 /// the `current` value.
3105 ///
3106 /// The return value is a result indicating whether the new value was written and
3107 /// containing the previous value. On success this value is guaranteed to be equal to
3108 /// `current`.
3109 ///
3110 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
3111 /// ordering of this operation. `success` describes the required ordering for the
3112 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3113 /// `failure` describes the required ordering for the load operation that takes place when
3114 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3115 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3116 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3117 ///
3118 /// **Note**: This method is only available on platforms that support atomic operations on
3119 #[doc = concat!("[`", $s_int_type, "`].")]
3120 ///
3121 /// # Examples
3122 ///
3123 /// ```
3124 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3125 ///
3126 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3127 ///
3128 /// assert_eq!(some_var.compare_exchange(5, 10,
3129 /// Ordering::Acquire,
3130 /// Ordering::Relaxed),
3131 /// Ok(5));
3132 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3133 ///
3134 /// assert_eq!(some_var.compare_exchange(6, 12,
3135 /// Ordering::SeqCst,
3136 /// Ordering::Acquire),
3137 /// Err(10));
3138 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3139 /// ```
3140 ///
3141 /// # Considerations
3142 ///
3143 /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
3144 /// of CAS operations. In particular, a load of the value followed by a successful
3145 /// `compare_exchange` with the previous load *does not ensure* that other threads have not
3146 /// changed the value in the interim! This is usually important when the *equality* check in
3147 /// the `compare_exchange` is being used to check the *identity* of a value, but equality
3148 /// does not necessarily imply identity. This is a particularly common case for pointers, as
3149 /// a pointer holding the same address does not imply that the same object exists at that
3150 /// address! In this case, `compare_exchange` can lead to the [ABA problem].
3151 ///
3152 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3153 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3154 #[inline]
3155 #[$stable_cxchg]
3156 #[$cfg_cas]
3157 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3158 #[rustc_should_not_be_called_on_const_items]
3159 pub fn compare_exchange(&self,
3160 current: $int_type,
3161 new: $int_type,
3162 success: Ordering,
3163 failure: Ordering) -> Result<$int_type, $int_type> {
3164 // SAFETY: data races are prevented by atomic intrinsics.
3165 unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
3166 }
3167
3168 /// Stores a value into the atomic integer if the current value is the same as
3169 /// the `current` value.
3170 ///
3171 #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
3172 /// this function is allowed to spuriously fail even
3173 /// when the comparison succeeds, which can result in more efficient code on some
3174 /// platforms. The return value is a result indicating whether the new value was
3175 /// written and containing the previous value.
3176 ///
3177 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3178 /// ordering of this operation. `success` describes the required ordering for the
3179 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3180 /// `failure` describes the required ordering for the load operation that takes place when
3181 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3182 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3183 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3184 ///
3185 /// **Note**: This method is only available on platforms that support atomic operations on
3186 #[doc = concat!("[`", $s_int_type, "`].")]
3187 ///
3188 /// # Examples
3189 ///
3190 /// ```
3191 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3192 ///
3193 #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
3194 ///
3195 /// let mut old = val.load(Ordering::Relaxed);
3196 /// loop {
3197 /// let new = old * 2;
3198 /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3199 /// Ok(_) => break,
3200 /// Err(x) => old = x,
3201 /// }
3202 /// }
3203 /// ```
3204 ///
3205 /// # Considerations
3206 ///
            /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual
            /// downsides of CAS operations. In particular, a load of the value followed by a successful
            /// `compare_exchange_weak` with the previous load *does not ensure* that other threads have
            /// not changed the value in the interim. This is usually important when the *equality* check
            /// in the `compare_exchange_weak` is being used to check the *identity* of a value, but
            /// equality does not necessarily imply identity. This is a particularly common case for
            /// pointers, as a pointer holding the same address does not imply that the same object exists
            /// at that address! In this case, `compare_exchange_weak` can lead to the [ABA problem].
3215 ///
3216 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3217 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3218 #[inline]
3219 #[$stable_cxchg]
3220 #[$cfg_cas]
3221 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3222 #[rustc_should_not_be_called_on_const_items]
3223 pub fn compare_exchange_weak(&self,
3224 current: $int_type,
3225 new: $int_type,
3226 success: Ordering,
3227 failure: Ordering) -> Result<$int_type, $int_type> {
3228 // SAFETY: data races are prevented by atomic intrinsics.
3229 unsafe {
3230 atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
3231 }
3232 }
3233
3234 /// Adds to the current value, returning the previous value.
3235 ///
3236 /// This operation wraps around on overflow.
3237 ///
3238 /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3239 /// of this operation. All ordering modes are possible. Note that using
3240 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3241 /// using [`Release`] makes the load part [`Relaxed`].
3242 ///
3243 /// **Note**: This method is only available on platforms that support atomic operations on
3244 #[doc = concat!("[`", $s_int_type, "`].")]
3245 ///
3246 /// # Examples
3247 ///
3248 /// ```
3249 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3250 ///
3251 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
3252 /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3253 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3254 /// ```
3255 #[inline]
3256 #[$stable]
3257 #[$cfg_cas]
3258 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3259 #[rustc_should_not_be_called_on_const_items]
3260 pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3261 // SAFETY: data races are prevented by atomic intrinsics.
3262 unsafe { atomic_add(self.v.get(), val, order) }
3263 }
3264
3265 /// Subtracts from the current value, returning the previous value.
3266 ///
3267 /// This operation wraps around on overflow.
3268 ///
3269 /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3270 /// of this operation. All ordering modes are possible. Note that using
3271 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3272 /// using [`Release`] makes the load part [`Relaxed`].
3273 ///
3274 /// **Note**: This method is only available on platforms that support atomic operations on
3275 #[doc = concat!("[`", $s_int_type, "`].")]
3276 ///
3277 /// # Examples
3278 ///
3279 /// ```
3280 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3281 ///
3282 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
3283 /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3284 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3285 /// ```
3286 #[inline]
3287 #[$stable]
3288 #[$cfg_cas]
3289 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3290 #[rustc_should_not_be_called_on_const_items]
3291 pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3292 // SAFETY: data races are prevented by atomic intrinsics.
3293 unsafe { atomic_sub(self.v.get(), val, order) }
3294 }
3295
3296 /// Bitwise "and" with the current value.
3297 ///
3298 /// Performs a bitwise "and" operation on the current value and the argument `val`, and
3299 /// sets the new value to the result.
3300 ///
3301 /// Returns the previous value.
3302 ///
3303 /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3304 /// of this operation. All ordering modes are possible. Note that using
3305 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3306 /// using [`Release`] makes the load part [`Relaxed`].
3307 ///
3308 /// **Note**: This method is only available on platforms that support atomic operations on
3309 #[doc = concat!("[`", $s_int_type, "`].")]
3310 ///
3311 /// # Examples
3312 ///
3313 /// ```
3314 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3315 ///
3316 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3317 /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3318 /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3319 /// ```
3320 #[inline]
3321 #[$stable]
3322 #[$cfg_cas]
3323 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3324 #[rustc_should_not_be_called_on_const_items]
3325 pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3326 // SAFETY: data races are prevented by atomic intrinsics.
3327 unsafe { atomic_and(self.v.get(), val, order) }
3328 }
3329
3330 /// Bitwise "nand" with the current value.
3331 ///
3332 /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
3333 /// sets the new value to the result.
3334 ///
3335 /// Returns the previous value.
3336 ///
3337 /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3338 /// of this operation. All ordering modes are possible. Note that using
3339 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3340 /// using [`Release`] makes the load part [`Relaxed`].
3341 ///
3342 /// **Note**: This method is only available on platforms that support atomic operations on
3343 #[doc = concat!("[`", $s_int_type, "`].")]
3344 ///
3345 /// # Examples
3346 ///
3347 /// ```
3348 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3349 ///
3350 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
3351 /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3352 /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3353 /// ```
3354 #[inline]
3355 #[$stable_nand]
3356 #[$cfg_cas]
3357 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3358 #[rustc_should_not_be_called_on_const_items]
3359 pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3360 // SAFETY: data races are prevented by atomic intrinsics.
3361 unsafe { atomic_nand(self.v.get(), val, order) }
3362 }
3363
3364 /// Bitwise "or" with the current value.
3365 ///
3366 /// Performs a bitwise "or" operation on the current value and the argument `val`, and
3367 /// sets the new value to the result.
3368 ///
3369 /// Returns the previous value.
3370 ///
3371 /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3372 /// of this operation. All ordering modes are possible. Note that using
3373 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3374 /// using [`Release`] makes the load part [`Relaxed`].
3375 ///
3376 /// **Note**: This method is only available on platforms that support atomic operations on
3377 #[doc = concat!("[`", $s_int_type, "`].")]
3378 ///
3379 /// # Examples
3380 ///
3381 /// ```
3382 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3383 ///
3384 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3385 /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3386 /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3387 /// ```
3388 #[inline]
3389 #[$stable]
3390 #[$cfg_cas]
3391 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3392 #[rustc_should_not_be_called_on_const_items]
3393 pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3394 // SAFETY: data races are prevented by atomic intrinsics.
3395 unsafe { atomic_or(self.v.get(), val, order) }
3396 }
3397
3398 /// Bitwise "xor" with the current value.
3399 ///
3400 /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
3401 /// sets the new value to the result.
3402 ///
3403 /// Returns the previous value.
3404 ///
3405 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3406 /// of this operation. All ordering modes are possible. Note that using
3407 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3408 /// using [`Release`] makes the load part [`Relaxed`].
3409 ///
3410 /// **Note**: This method is only available on platforms that support atomic operations on
3411 #[doc = concat!("[`", $s_int_type, "`].")]
3412 ///
3413 /// # Examples
3414 ///
3415 /// ```
3416 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3417 ///
3418 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3419 /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3420 /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3421 /// ```
3422 #[inline]
3423 #[$stable]
3424 #[$cfg_cas]
3425 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3426 #[rustc_should_not_be_called_on_const_items]
3427 pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3428 // SAFETY: data races are prevented by atomic intrinsics.
3429 unsafe { atomic_xor(self.v.get(), val, order) }
3430 }
3431
3432 /// Fetches the value, and applies a function to it that returns an optional
3433 /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3434 /// `Err(previous_value)`.
3435 ///
            /// Note: This may call the function multiple times if the value has been changed by other threads in
3437 /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3438 /// only once to the stored value.
3439 ///
3440 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3441 /// The first describes the required ordering for when the operation finally succeeds while the second
3442 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3443 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3444 /// respectively.
3445 ///
3446 /// Using [`Acquire`] as success ordering makes the store part
3447 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3448 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3449 ///
3450 /// **Note**: This method is only available on platforms that support atomic operations on
3451 #[doc = concat!("[`", $s_int_type, "`].")]
3452 ///
3453 /// # Considerations
3454 ///
3455 /// This method is not magic; it is not provided by the hardware, and does not act like a
3456 /// critical section or mutex.
3457 ///
3458 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3459 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3460 /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3461 /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3462 ///
3463 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3464 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3465 ///
3466 /// # Examples
3467 ///
3468 /// ```rust
3469 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3470 ///
3471 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3472 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3473 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3474 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3475 /// assert_eq!(x.load(Ordering::SeqCst), 9);
3476 /// ```
3477 #[inline]
3478 #[stable(feature = "no_more_cas", since = "1.45.0")]
3479 #[$cfg_cas]
3480 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3481 #[rustc_should_not_be_called_on_const_items]
3482 pub fn fetch_update<F>(&self,
3483 set_order: Ordering,
3484 fetch_order: Ordering,
3485 mut f: F) -> Result<$int_type, $int_type>
3486 where F: FnMut($int_type) -> Option<$int_type> {
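                // Keep retrying the weak CAS with the freshly observed value until
                // it succeeds or `f` returns `None`.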
3487 let mut prev = self.load(fetch_order);
3488 while let Some(next) = f(prev) {
3489 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3490 x @ Ok(_) => return x,
3491 Err(next_prev) => prev = next_prev
3492 }
3493 }
3494 Err(prev)
3495 }
3496
3497 /// Fetches the value, and applies a function to it that returns an optional
3498 /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3499 /// `Err(previous_value)`.
3500 ///
3501 #[doc = concat!("See also: [`update`](`", stringify!($atomic_type), "::update`).")]
3502 ///
            /// Note: This may call the function multiple times if the value has been changed by other threads in
3504 /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3505 /// only once to the stored value.
3506 ///
3507 /// `try_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3508 /// The first describes the required ordering for when the operation finally succeeds while the second
3509 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3510 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3511 /// respectively.
3512 ///
3513 /// Using [`Acquire`] as success ordering makes the store part
3514 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3515 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3516 ///
3517 /// **Note**: This method is only available on platforms that support atomic operations on
3518 #[doc = concat!("[`", $s_int_type, "`].")]
3519 ///
3520 /// # Considerations
3521 ///
3522 /// This method is not magic; it is not provided by the hardware, and does not act like a
3523 /// critical section or mutex.
3524 ///
3525 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3526 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3527 /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3528 /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3529 ///
3530 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3531 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3532 ///
3533 /// # Examples
3534 ///
3535 /// ```rust
3536 /// #![feature(atomic_try_update)]
3537 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3538 ///
3539 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3540 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3541 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3542 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3543 /// assert_eq!(x.load(Ordering::SeqCst), 9);
3544 /// ```
3545 #[inline]
3546 #[unstable(feature = "atomic_try_update", issue = "135894")]
3547 #[$cfg_cas]
3548 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3549 #[rustc_should_not_be_called_on_const_items]
3550 pub fn try_update(
3551 &self,
3552 set_order: Ordering,
3553 fetch_order: Ordering,
3554 f: impl FnMut($int_type) -> Option<$int_type>,
3555 ) -> Result<$int_type, $int_type> {
3556 // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
3557 // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
3558 self.fetch_update(set_order, fetch_order, f)
3559 }
3560
            /// Fetches the value, and applies a function to it that returns a new value.
3562 /// The new value is stored and the old value is returned.
3563 ///
3564 #[doc = concat!("See also: [`try_update`](`", stringify!($atomic_type), "::try_update`).")]
3565 ///
            /// Note: This may call the function multiple times if the value has been changed by other threads in
3567 /// the meantime, but the function will have been applied only once to the stored value.
3568 ///
3569 /// `update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3570 /// The first describes the required ordering for when the operation finally succeeds while the second
3571 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3572 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3573 /// respectively.
3574 ///
3575 /// Using [`Acquire`] as success ordering makes the store part
3576 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3577 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3578 ///
3579 /// **Note**: This method is only available on platforms that support atomic operations on
3580 #[doc = concat!("[`", $s_int_type, "`].")]
3581 ///
3582 /// # Considerations
3583 ///
3585 /// This method is not magic; it is not provided by the hardware, and does not act like a
3586 /// critical section or mutex.
3587 ///
3588 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3589 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3590 /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3591 /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3592 ///
3593 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3594 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3595 ///
3596 /// # Examples
3597 ///
3598 /// ```rust
3599 /// #![feature(atomic_try_update)]
3600 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3601 ///
3602 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3603 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 7);
3604 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 8);
3605 /// assert_eq!(x.load(Ordering::SeqCst), 9);
3606 /// ```
3607 #[inline]
3608 #[unstable(feature = "atomic_try_update", issue = "135894")]
3609 #[$cfg_cas]
3610 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3611 #[rustc_should_not_be_called_on_const_items]
3612 pub fn update(
3613 &self,
3614 set_order: Ordering,
3615 fetch_order: Ordering,
3616 mut f: impl FnMut($int_type) -> $int_type,
3617 ) -> $int_type {
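                // Like `fetch_update`, but `f` is infallible, so loop until the
                // weak CAS succeeds.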
3618 let mut prev = self.load(fetch_order);
3619 loop {
3620 match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
3621 Ok(x) => break x,
3622 Err(next_prev) => prev = next_prev,
3623 }
3624 }
3625 }
3626
3627 /// Maximum with the current value.
3628 ///
3629 /// Finds the maximum of the current value and the argument `val`, and
3630 /// sets the new value to the result.
3631 ///
3632 /// Returns the previous value.
3633 ///
3634 /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3635 /// of this operation. All ordering modes are possible. Note that using
3636 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3637 /// using [`Release`] makes the load part [`Relaxed`].
3638 ///
3639 /// **Note**: This method is only available on platforms that support atomic operations on
3640 #[doc = concat!("[`", $s_int_type, "`].")]
3641 ///
3642 /// # Examples
3643 ///
3644 /// ```
3645 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3646 ///
3647 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3648 /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3649 /// assert_eq!(foo.load(Ordering::SeqCst), 42);
3650 /// ```
3651 ///
3652 /// If you want to obtain the maximum value in one step, you can use the following:
3653 ///
3654 /// ```
3655 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3656 ///
3657 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3658 /// let bar = 42;
3659 /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
            /// assert_eq!(max_foo, 42);
3661 /// ```
3662 #[inline]
3663 #[stable(feature = "atomic_min_max", since = "1.45.0")]
3664 #[$cfg_cas]
3665 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3666 #[rustc_should_not_be_called_on_const_items]
3667 pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3668 // SAFETY: data races are prevented by atomic intrinsics.
3669 unsafe { $max_fn(self.v.get(), val, order) }
3670 }
3671
3672 /// Minimum with the current value.
3673 ///
3674 /// Finds the minimum of the current value and the argument `val`, and
3675 /// sets the new value to the result.
3676 ///
3677 /// Returns the previous value.
3678 ///
3679 /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3680 /// of this operation. All ordering modes are possible. Note that using
3681 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3682 /// using [`Release`] makes the load part [`Relaxed`].
3683 ///
3684 /// **Note**: This method is only available on platforms that support atomic operations on
3685 #[doc = concat!("[`", $s_int_type, "`].")]
3686 ///
3687 /// # Examples
3688 ///
3689 /// ```
3690 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3691 ///
3692 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3693 /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3694 /// assert_eq!(foo.load(Ordering::Relaxed), 23);
3695 /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3696 /// assert_eq!(foo.load(Ordering::Relaxed), 22);
3697 /// ```
3698 ///
3699 /// If you want to obtain the minimum value in one step, you can use the following:
3700 ///
3701 /// ```
3702 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3703 ///
3704 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3705 /// let bar = 12;
3706 /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3707 /// assert_eq!(min_foo, 12);
3708 /// ```
3709 #[inline]
3710 #[stable(feature = "atomic_min_max", since = "1.45.0")]
3711 #[$cfg_cas]
3712 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3713 #[rustc_should_not_be_called_on_const_items]
3714 pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3715 // SAFETY: data races are prevented by atomic intrinsics.
3716 unsafe { $min_fn(self.v.get(), val, order) }
3717 }
3718
3719 /// Returns a mutable pointer to the underlying integer.
3720 ///
3721 /// Doing non-atomic reads and writes on the resulting integer can be a data race.
3722 /// This method is mostly useful for FFI, where the function signature may use
3723 #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
3724 ///
3725 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
3726 /// atomic types work with interior mutability. All modifications of an atomic change the value
3727 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
3728 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
3729 /// requirements of the [memory model].
3730 ///
3731 /// # Examples
3732 ///
3733 /// ```ignore (extern-declaration)
3734 /// # fn main() {
3735 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
3736 ///
3737 /// extern "C" {
3738 #[doc = concat!(" fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
3739 /// }
3740 ///
3741 #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
3742 ///
3743 /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
3744 /// unsafe {
3745 /// my_atomic_op(atomic.as_ptr());
3746 /// }
3747 /// # }
3748 /// ```
3749 ///
3750 /// [memory model]: self#memory-model-for-atomic-accesses
3751 #[inline]
3752 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
3753 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
3754 #[rustc_never_returns_null_ptr]
3755 pub const fn as_ptr(&self) -> *mut $int_type {
3756 self.v.get()
3757 }
3758 }
3759 }
3760}
3761
3762#[cfg(target_has_atomic_load_store = "8")]
3763#[cfg(not(feature = "ferrocene_subset"))]
3764atomic_int! {
3765 cfg(target_has_atomic = "8"),
3766 cfg(target_has_atomic_equal_alignment = "8"),
3767 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3768 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3769 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3770 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3771 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3772 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3773 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3774 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3775 rustc_diagnostic_item = "AtomicI8",
3776 "i8",
3777 "",
3778 atomic_min, atomic_max,
3779 1,
3780 i8 AtomicI8
3781}
3782#[cfg(target_has_atomic_load_store = "8")]
3783atomic_int! {
3784 cfg(target_has_atomic = "8"),
3785 cfg(target_has_atomic_equal_alignment = "8"),
3786 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3787 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3788 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3789 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3790 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3791 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3792 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3793 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3794 rustc_diagnostic_item = "AtomicU8",
3795 "u8",
3796 "",
3797 atomic_umin, atomic_umax,
3798 1,
3799 u8 AtomicU8
3800}
3801#[cfg(target_has_atomic_load_store = "16")]
3802#[cfg(not(feature = "ferrocene_subset"))]
3803atomic_int! {
3804 cfg(target_has_atomic = "16"),
3805 cfg(target_has_atomic_equal_alignment = "16"),
3806 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3807 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3808 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3809 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3810 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3811 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3812 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3813 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3814 rustc_diagnostic_item = "AtomicI16",
3815 "i16",
3816 "",
3817 atomic_min, atomic_max,
3818 2,
3819 i16 AtomicI16
3820}
3821#[cfg(target_has_atomic_load_store = "16")]
3822atomic_int! {
3823 cfg(target_has_atomic = "16"),
3824 cfg(target_has_atomic_equal_alignment = "16"),
3825 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3826 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3827 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3828 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3829 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3830 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3831 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3832 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3833 rustc_diagnostic_item = "AtomicU16",
3834 "u16",
3835 "",
3836 atomic_umin, atomic_umax,
3837 2,
3838 u16 AtomicU16
3839}
3840#[cfg(target_has_atomic_load_store = "32")]
3841#[cfg(not(feature = "ferrocene_subset"))]
3842atomic_int! {
3843 cfg(target_has_atomic = "32"),
3844 cfg(target_has_atomic_equal_alignment = "32"),
3845 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3846 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3847 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3848 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3849 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3850 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3851 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3852 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3853 rustc_diagnostic_item = "AtomicI32",
3854 "i32",
3855 "",
3856 atomic_min, atomic_max,
3857 4,
3858 i32 AtomicI32
3859}
3860#[cfg(target_has_atomic_load_store = "32")]
3861atomic_int! {
3862 cfg(target_has_atomic = "32"),
3863 cfg(target_has_atomic_equal_alignment = "32"),
3864 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3865 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3866 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3867 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3868 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3869 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3870 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3871 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3872 rustc_diagnostic_item = "AtomicU32",
3873 "u32",
3874 "",
3875 atomic_umin, atomic_umax,
3876 4,
3877 u32 AtomicU32
3878}
3879#[cfg(target_has_atomic_load_store = "64")]
3880#[cfg(not(feature = "ferrocene_subset"))]
3881atomic_int! {
3882 cfg(target_has_atomic = "64"),
3883 cfg(target_has_atomic_equal_alignment = "64"),
3884 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3885 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3886 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3887 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3888 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3889 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3890 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3891 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3892 rustc_diagnostic_item = "AtomicI64",
3893 "i64",
3894 "",
3895 atomic_min, atomic_max,
3896 8,
3897 i64 AtomicI64
3898}
3899#[cfg(target_has_atomic_load_store = "64")]
3900atomic_int! {
3901 cfg(target_has_atomic = "64"),
3902 cfg(target_has_atomic_equal_alignment = "64"),
3903 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3904 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3905 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3906 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3907 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3908 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3909 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3910 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3911 rustc_diagnostic_item = "AtomicU64",
3912 "u64",
3913 "",
3914 atomic_umin, atomic_umax,
3915 8,
3916 u64 AtomicU64
3917}
3918#[cfg(target_has_atomic_load_store = "128")]
3919#[cfg(not(feature = "ferrocene_subset"))]
3920atomic_int! {
3921 cfg(target_has_atomic = "128"),
3922 cfg(target_has_atomic_equal_alignment = "128"),
3923 unstable(feature = "integer_atomics", issue = "99069"),
3924 unstable(feature = "integer_atomics", issue = "99069"),
3925 unstable(feature = "integer_atomics", issue = "99069"),
3926 unstable(feature = "integer_atomics", issue = "99069"),
3927 unstable(feature = "integer_atomics", issue = "99069"),
3928 unstable(feature = "integer_atomics", issue = "99069"),
3929 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3930 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3931 rustc_diagnostic_item = "AtomicI128",
3932 "i128",
3933 "#![feature(integer_atomics)]\n\n",
3934 atomic_min, atomic_max,
3935 16,
3936 i128 AtomicI128
3937}
3938#[cfg(target_has_atomic_load_store = "128")]
3939#[cfg(not(feature = "ferrocene_subset"))]
3940atomic_int! {
3941 cfg(target_has_atomic = "128"),
3942 cfg(target_has_atomic_equal_alignment = "128"),
3943 unstable(feature = "integer_atomics", issue = "99069"),
3944 unstable(feature = "integer_atomics", issue = "99069"),
3945 unstable(feature = "integer_atomics", issue = "99069"),
3946 unstable(feature = "integer_atomics", issue = "99069"),
3947 unstable(feature = "integer_atomics", issue = "99069"),
3948 unstable(feature = "integer_atomics", issue = "99069"),
3949 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3950 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3951 rustc_diagnostic_item = "AtomicU128",
3952 "u128",
3953 "#![feature(integer_atomics)]\n\n",
3954 atomic_umin, atomic_umax,
3955 16,
3956 u128 AtomicU128
3957}
3958
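// Expands `atomic_int!` for `AtomicIsize` and `AtomicUsize` (plus the
// deprecated `ATOMIC_ISIZE_INIT`/`ATOMIC_USIZE_INIT` constants) once for the
// matching target pointer width, using the alignment of a pointer-sized
// integer on that target.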
#[cfg(target_has_atomic_load_store = "ptr")]
macro_rules! atomic_int_ptr_sized {
    ( $($target_pointer_width:literal $align:literal)* ) => { $(
        #[cfg(target_pointer_width = $target_pointer_width)]
        #[cfg(not(feature = "ferrocene_subset"))]
        atomic_int! {
            cfg(target_has_atomic = "ptr"),
            cfg(target_has_atomic_equal_alignment = "ptr"),
            stable(feature = "rust1", since = "1.0.0"),
            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
            stable(feature = "atomic_debug", since = "1.3.0"),
            stable(feature = "atomic_access", since = "1.15.0"),
            stable(feature = "atomic_from", since = "1.23.0"),
            stable(feature = "atomic_nand", since = "1.27.0"),
            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
            rustc_diagnostic_item = "AtomicIsize",
            "isize",
            "",
            atomic_min, atomic_max,
            $align,
            isize AtomicIsize
        }
        #[cfg(target_pointer_width = $target_pointer_width)]
        atomic_int! {
            cfg(target_has_atomic = "ptr"),
            cfg(target_has_atomic_equal_alignment = "ptr"),
            stable(feature = "rust1", since = "1.0.0"),
            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
            stable(feature = "atomic_debug", since = "1.3.0"),
            stable(feature = "atomic_access", since = "1.15.0"),
            stable(feature = "atomic_from", since = "1.23.0"),
            stable(feature = "atomic_nand", since = "1.27.0"),
            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
            rustc_diagnostic_item = "AtomicUsize",
            "usize",
            "",
            atomic_umin, atomic_umax,
            $align,
            usize AtomicUsize
        }

        /// An [`AtomicIsize`] initialized to `0`.
        #[cfg(target_pointer_width = $target_pointer_width)]
        #[stable(feature = "rust1", since = "1.0.0")]
        #[deprecated(
            since = "1.34.0",
            note = "the `new` function is now preferred",
            suggestion = "AtomicIsize::new(0)",
        )]
        #[cfg(not(feature = "ferrocene_subset"))]
        pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0);

        /// An [`AtomicUsize`] initialized to `0`.
        #[cfg(target_pointer_width = $target_pointer_width)]
        #[stable(feature = "rust1", since = "1.0.0")]
        #[deprecated(
            since = "1.34.0",
            note = "the `new` function is now preferred",
            suggestion = "AtomicUsize::new(0)",
        )]
        pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0);
    )* };
}

#[cfg(target_has_atomic_load_store = "ptr")]
atomic_int_ptr_sized! {
    "16" 2
    "32" 4
    "64" 8
}

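// Given the ordering of a read-modify-write operation, returns the strongest
// failure ordering that `compare_exchange` accepts for it: any release ("store")
// half of the ordering is dropped, since a failed exchange performs no store.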
#[inline]
#[cfg(target_has_atomic)]
fn strongest_failure_ordering(order: Ordering) -> Ordering {
    match order {
        Release => Relaxed,
        Relaxed => Relaxed,
        SeqCst => SeqCst,
        Acquire => Acquire,
        AcqRel => Acquire,
    }
}

#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
    // SAFETY: the caller must uphold the safety contract for `atomic_store`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_store::<T, { AO::Relaxed }>(dst, val),
            Release => intrinsics::atomic_store::<T, { AO::Release }>(dst, val),
            SeqCst => intrinsics::atomic_store::<T, { AO::SeqCst }>(dst, val),
            Acquire => panic!("there is no such thing as an acquire store"),
            AcqRel => panic!("there is no such thing as an acquire-release store"),
        }
    }
}

#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_load`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_load::<T, { AO::Relaxed }>(dst),
            Acquire => intrinsics::atomic_load::<T, { AO::Acquire }>(dst),
            SeqCst => intrinsics::atomic_load::<T, { AO::SeqCst }>(dst),
            Release => panic!("there is no such thing as a release load"),
            AcqRel => panic!("there is no such thing as an acquire-release load"),
        }
    }
}

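/// Returns the previous value (an unconditional exchange, like C++'s `atomic_exchange`).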
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xchg::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_xchg::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_xchg::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_xchg::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_xchg::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Returns the previous value (like __sync_fetch_and_add).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_add<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_add`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xadd::<T, U, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_xadd::<T, U, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_xadd::<T, U, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_xadd::<T, U, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_xadd::<T, U, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Returns the previous value (like __sync_fetch_and_sub).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_sub<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xsub::<T, U, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_xsub::<T, U, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_xsub::<T, U, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_xsub::<T, U, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_xsub::<T, U, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Publicly exposed for stdarch; nobody else should use this.
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[unstable(feature = "core_intrinsics", issue = "none")]
#[doc(hidden)]
pub unsafe fn atomic_compare_exchange<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
            }
            (Relaxed, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
            }
            (Relaxed, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
            }
            (Acquire, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
            }
            (Acquire, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
            }
            (Acquire, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
            }
            (Release, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
            }
            (Release, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
            }
            (Release, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
            }
            (AcqRel, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
            }
            (AcqRel, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
            }
            (AcqRel, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
            }
            (SeqCst, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
            }
            (SeqCst, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
            }
            (SeqCst, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
            }
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}

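/// Like `atomic_compare_exchange`, but may also fail spuriously even when the
/// comparison succeeds, which allows more efficient code on some platforms.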
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_compare_exchange_weak<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
            }
            (Relaxed, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
            }
            (Relaxed, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
            }
            (Acquire, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
            }
            (Acquire, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
            }
            (Acquire, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
            }
            (Release, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
            }
            (Release, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
            }
            (Release, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
            }
            (AcqRel, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
            }
            (AcqRel, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
            }
            (AcqRel, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
            }
            (SeqCst, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
            }
            (SeqCst, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
            }
            (SeqCst, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
            }
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}

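/// Returns the previous value (like __sync_fetch_and_and).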
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_and<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_and`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_and::<T, U, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_and::<T, U, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_and::<T, U, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_and::<T, U, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_and::<T, U, { AO::SeqCst }>(dst, val),
        }
    }
}

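/// Returns the previous value (like __sync_fetch_and_nand).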
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_nand<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_nand`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_nand::<T, U, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_nand::<T, U, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_nand::<T, U, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_nand::<T, U, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_nand::<T, U, { AO::SeqCst }>(dst, val),
        }
    }
}

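/// Returns the previous value (like __sync_fetch_and_or).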
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_or<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_or`
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_or::<T, U, { AO::SeqCst }>(dst, val),
            Acquire => intrinsics::atomic_or::<T, U, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_or::<T, U, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_or::<T, U, { AO::AcqRel }>(dst, val),
            Relaxed => intrinsics::atomic_or::<T, U, { AO::Relaxed }>(dst, val),
        }
    }
}

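/// Returns the previous value (like __sync_fetch_and_xor).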
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_xor<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_xor`
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_xor::<T, U, { AO::SeqCst }>(dst, val),
            Acquire => intrinsics::atomic_xor::<T, U, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_xor::<T, U, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_xor::<T, U, { AO::AcqRel }>(dst, val),
            Relaxed => intrinsics::atomic_xor::<T, U, { AO::Relaxed }>(dst, val),
        }
    }
}

/// Updates `*dst` to the max value of `val` and the old value (signed comparison)
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[cfg(not(feature = "ferrocene_subset"))]
unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_max`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_max::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_max::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_max::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_max::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_max::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Updates `*dst` to the min value of `val` and the old value (signed comparison)
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[cfg(not(feature = "ferrocene_subset"))]
unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_min`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_min::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_min::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_min::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_min::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_min::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Updates `*dst` to the max value of `val` and the old value (unsigned comparison)
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umax`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umax::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_umax::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_umax::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_umax::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_umax::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Updates `*dst` to the min value of `val` and the old value (unsigned comparison)
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umin`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umin::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_umin::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_umin::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_umin::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_umin::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// An atomic fence.
///
/// Fences create synchronization between themselves and atomic operations or fences in other
/// threads. To achieve this, a fence prevents the compiler and CPU from reordering certain types of
/// memory operations around it.
///
/// There are 3 different ways to use an atomic fence:
///
/// - atomic - fence synchronization: an atomic operation with (at least) [`Release`] ordering
///   semantics synchronizes with a fence with (at least) [`Acquire`] ordering semantics.
/// - fence - atomic synchronization: a fence with (at least) [`Release`] ordering semantics
///   synchronizes with an atomic operation with (at least) [`Acquire`] ordering semantics.
/// - fence - fence synchronization: a fence with (at least) [`Release`] ordering semantics
///   synchronizes with a fence with (at least) [`Acquire`] ordering semantics.
///
/// These 3 ways complement the regular, fence-less atomic - atomic synchronization.
///
/// ## Atomic - Fence
///
/// An atomic operation on one thread will synchronize with a fence on another thread when:
///
/// - on thread 1:
///   - an atomic operation 'X' with (at least) [`Release`] ordering semantics on some atomic
///     object 'm',
///
/// - is paired on thread 2 with:
///   - an atomic read 'Y' with any ordering on 'm',
///   - followed by a fence 'B' with (at least) [`Acquire`] ordering semantics.
///
/// This provides a happens-before dependence between X and B.
///
/// ```text
/// Thread 1                                          Thread 2
///
/// m.store(3, Release); X ---------
///                                |
///                                |
///                                -------------> Y  if m.load(Relaxed) == 3 {
///                                               B      fence(Acquire);
///                                                      ...
///                                                  }
/// ```
///
/// ## Fence - Atomic
///
/// A fence on one thread will synchronize with an atomic operation on another thread when:
///
/// - on thread 1:
///   - a fence 'A' with (at least) [`Release`] ordering semantics,
///   - followed by an atomic write 'X' with any ordering on some atomic object 'm',
///
/// - is paired on thread 2 with:
///   - an atomic read 'Y' with (at least) [`Acquire`] ordering semantics on 'm'.
///
/// This provides a happens-before dependence between A and Y.
///
/// ```text
/// Thread 1                                          Thread 2
///
/// fence(Release);      A
/// m.store(3, Relaxed); X ---------
///                                |
///                                |
///                                -------------> Y  if m.load(Acquire) == 3 {
///                                                      ...
///                                                  }
/// ```
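///
/// A minimal runnable sketch of this pattern (the names `DATA` and `M` are purely
/// illustrative; an atomic `DATA` stands in for the payload):
///
/// ```
/// use std::sync::atomic::{fence, AtomicBool, AtomicUsize, Ordering};
/// use std::thread;
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static M: AtomicBool = AtomicBool::new(false);
///
/// let t = thread::spawn(|| {
///     DATA.store(42, Ordering::Relaxed);
///     fence(Ordering::Release);         // A
///     M.store(true, Ordering::Relaxed); // X
/// });
/// while !M.load(Ordering::Acquire) {}   // Y: reads from X, synchronizes with A
/// assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// t.join().unwrap();
/// ```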
///
/// ## Fence - Fence
///
/// A fence on one thread will synchronize with a fence on another thread when:
///
/// - on thread 1:
///   - a fence 'A' which has (at least) [`Release`] ordering semantics,
///   - followed by an atomic write 'X' with any ordering on some atomic object 'm',
///
/// - is paired on thread 2 with:
///   - an atomic read 'Y' with any ordering on 'm',
///   - followed by a fence 'B' with (at least) [`Acquire`] ordering semantics.
///
/// This provides a happens-before dependence between A and B.
///
/// ```text
/// Thread 1                                          Thread 2
///
/// fence(Release);      A --------------
/// m.store(3, Relaxed); X ---------    |
///                                |    |
///                                |    |
///                                -------------> Y  if m.load(Relaxed) == 3 {
///                                     |-------> B      fence(Acquire);
///                                                      ...
///                                                  }
/// ```
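///
/// A minimal runnable sketch of this pattern (again with illustrative names; note
/// that both fences are required for the synchronization):
///
/// ```
/// use std::sync::atomic::{fence, AtomicBool, AtomicUsize, Ordering};
/// use std::thread;
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static M: AtomicBool = AtomicBool::new(false);
///
/// let t = thread::spawn(|| {
///     DATA.store(42, Ordering::Relaxed);
///     fence(Ordering::Release);          // A
///     M.store(true, Ordering::Relaxed);  // X
/// });
/// while !M.load(Ordering::Relaxed) {}    // Y: reads from X
/// fence(Ordering::Acquire);              // B: now A happens-before B
/// assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// t.join().unwrap();
/// ```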
///
/// ## Mandatory Atomic
///
/// Note that in the examples above, it is crucial that the accesses to `m` are atomic. Fences
/// cannot be used to establish synchronization between non-atomic accesses in different threads.
/// However, thanks to the happens-before relationship, any non-atomic accesses that happen-before
/// the atomic operation or fence with (at least) [`Release`] ordering semantics are now also
/// properly synchronized with any non-atomic accesses that happen-after the atomic operation or
/// fence with (at least) [`Acquire`] ordering semantics.
///
/// ## Memory Ordering
///
/// A fence with [`SeqCst`] ordering, in addition to having both [`Acquire`] and [`Release`]
/// semantics, participates in the global program order of the other [`SeqCst`] operations and/or
/// fences.
///
/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// ```
/// use std::sync::atomic::AtomicBool;
/// use std::sync::atomic::fence;
/// use std::sync::atomic::Ordering;
///
/// // A mutual exclusion primitive based on a spinlock.
/// pub struct Mutex {
///     flag: AtomicBool,
/// }
///
/// impl Mutex {
///     pub fn new() -> Mutex {
///         Mutex {
///             flag: AtomicBool::new(false),
///         }
///     }
///
///     pub fn lock(&self) {
///         // Wait until the old value is `false`.
///         while self
///             .flag
///             .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
///             .is_err()
///         {}
///         // This fence synchronizes-with the store in `unlock`.
///         fence(Ordering::Acquire);
///     }
///
///     pub fn unlock(&self) {
///         self.flag.store(false, Ordering::Release);
///     }
/// }
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_fence::<{ AO::Acquire }>(),
            Release => intrinsics::atomic_fence::<{ AO::Release }>(),
            AcqRel => intrinsics::atomic_fence::<{ AO::AcqRel }>(),
            SeqCst => intrinsics::atomic_fence::<{ AO::SeqCst }>(),
            Relaxed => panic!("there is no such thing as a relaxed fence"),
        }
    }
}

/// A "compiler-only" atomic fence.
///
/// Like [`fence`], this function establishes synchronization with other atomic operations and
/// fences. However, unlike [`fence`], `compiler_fence` only establishes synchronization with
/// operations *in the same thread*. This may at first sound rather useless, since code within a
/// thread is typically already totally ordered and does not need any further synchronization.
/// However, there are cases where code can run on the same thread without being ordered:
///
/// - The most common case is that of a *signal handler*: a signal handler runs in the same thread
///   as the code it interrupted, but it is not ordered with respect to that code. `compiler_fence`
///   can be used to establish synchronization between a thread and its signal handler, the same way
///   that `fence` can be used to establish synchronization across threads.
/// - Similar situations can arise in embedded programming with interrupt handlers, or in custom
///   implementations of preemptive green threads. In general, `compiler_fence` can establish
///   synchronization with code that is guaranteed to run on the same hardware CPU.
///
/// See [`fence`] for how a fence can be used to achieve synchronization. Note that just like
/// [`fence`], synchronization still requires atomic operations to be used in both threads -- it is
/// not possible to perform synchronization entirely with fences and non-atomic operations.
///
/// `compiler_fence` does not emit any machine code, but restricts the kinds of memory re-ordering
/// the compiler is allowed to do. `compiler_fence` corresponds to [`atomic_signal_fence`] in C and
/// C++.
///
/// [`atomic_signal_fence`]: https://en.cppreference.com/w/cpp/atomic/atomic_signal_fence
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// Without the two `compiler_fence` calls, the read of `IMPORTANT_VARIABLE` in `signal_handler`
/// is *undefined behavior* due to a data race, despite everything happening in a single thread.
/// This is because the signal handler is considered to run concurrently with its associated
/// thread, and explicit synchronization is required to pass data between a thread and its
/// signal handler. The code below uses two `compiler_fence` calls to establish the usual
/// release-acquire synchronization pattern (see [`fence`] for an image).
///
/// ```
/// use std::sync::atomic::AtomicBool;
/// use std::sync::atomic::Ordering;
/// use std::sync::atomic::compiler_fence;
///
/// static mut IMPORTANT_VARIABLE: usize = 0;
/// static IS_READY: AtomicBool = AtomicBool::new(false);
///
/// fn main() {
///     unsafe { IMPORTANT_VARIABLE = 42 };
///     // Marks earlier writes as being released with future relaxed stores.
///     compiler_fence(Ordering::Release);
///     IS_READY.store(true, Ordering::Relaxed);
/// }
///
/// fn signal_handler() {
///     if IS_READY.load(Ordering::Relaxed) {
///         // Acquires writes that were released with relaxed stores that we read from.
///         compiler_fence(Ordering::Acquire);
///         assert_eq!(unsafe { IMPORTANT_VARIABLE }, 42);
///     }
/// }
/// ```
#[inline]
#[stable(feature = "compiler_fences", since = "1.21.0")]
#[rustc_diagnostic_item = "compiler_fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn compiler_fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_singlethreadfence::<{ AO::Acquire }>(),
            Release => intrinsics::atomic_singlethreadfence::<{ AO::Release }>(),
            AcqRel => intrinsics::atomic_singlethreadfence::<{ AO::AcqRel }>(),
            SeqCst => intrinsics::atomic_singlethreadfence::<{ AO::SeqCst }>(),
            Relaxed => panic!("there is no such thing as a relaxed fence"),
        }
    }
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
#[cfg(not(feature = "ferrocene_subset"))]
impl fmt::Debug for AtomicBool {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
#[cfg(not(feature = "ferrocene_subset"))]
impl<T> fmt::Debug for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_pointer", since = "1.24.0")]
#[cfg(not(feature = "ferrocene_subset"))]
impl<T> fmt::Pointer for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
    }
}

/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
///
/// This function is deprecated in favor of [`hint::spin_loop`].
///
/// [`hint::spin_loop`]: crate::hint::spin_loop
#[inline]
#[stable(feature = "spin_loop_hint", since = "1.24.0")]
#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
#[cfg(not(feature = "ferrocene_subset"))]
pub fn spin_loop_hint() {
    spin_loop()
}