core/sync/atomic.rs
//! Atomic types
//!
//! Atomic types provide primitive shared-memory communication between
//! threads, and are the building blocks of other concurrent
//! types.
//!
//! This module defines atomic versions of a select number of primitive
//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
//! [`AtomicI8`], [`AtomicU16`], etc.
//! Atomic types present operations that, when used correctly, synchronize
//! updates between threads.
//!
//! Atomic variables are safe to share between threads (they implement [`Sync`])
//! but they do not themselves provide the mechanism for sharing and follow the
//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
//! atomically-reference-counted shared pointer).
//!
//! [arc]: ../../../std/sync/struct.Arc.html
//!
//! Atomic types may be stored in static variables, initialized using
//! the constant initializers like [`AtomicBool::new`]. Atomic statics
//! are often used for lazy global initialization.
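//!
//! As a minimal sketch of that pattern (the flag name is illustrative):
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! // A process-wide flag that needs no runtime initialization.
//! static STARTED: AtomicBool = AtomicBool::new(false);
//!
//! if !STARTED.swap(true, Ordering::Relaxed) {
//!     // Only the first caller observes `false` here, so one-time
//!     // setup can be placed in this branch.
//! }
//! ```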
//!
//! ## Memory model for atomic accesses
//!
//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically the rules
//! from the [`intro.races`][cpp-intro.races] section, without the "consume" memory ordering. Since
//! C++ uses an object-based memory model whereas Rust is access-based, a bit of translation work
//! has to be done to apply the C++ rules to Rust: whenever C++ talks about "the value of an
//! object", we understand that to mean the resulting bytes obtained when doing a read. When the C++
//! standard talks about "the value of an atomic object", this refers to the result of doing an
//! atomic load (via the operations provided in this module). A "modification of an atomic object"
//! refers to an atomic store.
//!
//! The end result is *almost* equivalent to saying that creating a *shared reference* to one of the
//! Rust atomic types corresponds to creating an `atomic_ref` in C++, with the `atomic_ref` being
//! destroyed when the lifetime of the shared reference ends. The main difference is that Rust
//! permits concurrent atomic and non-atomic reads to the same memory, as those cause no issue in
//! the C++ memory model; they are only forbidden in C++ because memory is partitioned into "atomic
//! objects" and "non-atomic objects" (with `atomic_ref` temporarily converting a non-atomic object
//! into an atomic object).
//!
//! The most important aspect of this model is that *data races* are undefined behavior. A data race
//! is defined as conflicting non-synchronized accesses where at least one of the accesses is
//! non-atomic. Here, accesses are *conflicting* if they affect overlapping regions of memory and at
//! least one of them is a write. (A `compare_exchange` or `compare_exchange_weak` that does not
//! succeed is not considered a write.) They are *non-synchronized* if neither of them
//! *happens-before* the other, according to the happens-before order of the memory model.
//!
//! The other possible cause of undefined behavior in the memory model is mixed-size accesses: Rust
//! inherits the C++ limitation that non-synchronized conflicting atomic accesses may not partially
//! overlap. In other words, every pair of non-synchronized atomic accesses must be either disjoint,
//! access the exact same memory (including using the same access size), or both be reads.
//!
//! Each atomic access takes an [`Ordering`] which defines how the operation interacts with the
//! happens-before order. These orderings behave the same as the corresponding [C++20 atomic
//! orderings][cpp_memory_order]. For more information, see the [nomicon].
//!
//! [cpp]: https://en.cppreference.com/w/cpp/atomic
//! [cpp-intro.races]: https://timsong-cpp.github.io/cppwp/n4868/intro.multithread#intro.races
//! [cpp_memory_order]: https://en.cppreference.com/w/cpp/atomic/memory_order
//! [nomicon]: ../../../nomicon/atomics.html
//!
//! ```rust,no_run undefined_behavior
//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
//! use std::mem::transmute;
//! use std::thread;
//!
//! let atomic = AtomicU16::new(0);
//!
//! thread::scope(|s| {
//!     // This is UB: conflicting non-synchronized accesses, at least one of which is non-atomic.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: the accesses do not conflict (as none of them performs any modification).
//!     // In C++ this would be disallowed since creating an `atomic_ref` precludes
//!     // further non-atomic accesses, but Rust does not have that limitation.
//!     s.spawn(|| atomic.load(Ordering::Relaxed)); // atomic load
//!     s.spawn(|| unsafe { atomic.as_ptr().read() }); // non-atomic read
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that the atomic
//!     // store happens-before the non-atomic write.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     handle.join().expect("thread won't panic"); // synchronize
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is UB: non-synchronized conflicting differently-sized atomic accesses.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that
//!     // the 2-byte store happens-before the 1-byte store.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     handle.join().expect("thread won't panic");
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//! ```
//!
//! # Portability
//!
//! All atomic types in this module are guaranteed to be [lock-free] if they're
//! available. This means they don't internally acquire a global mutex. Atomic
//! types and operations are not guaranteed to be wait-free. This means that
//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
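//!
//! As a hedged sketch, such a compare-and-swap loop for `fetch_or` might look like this
//! (a functional equivalent, not necessarily what any particular platform emits):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! fn emulated_fetch_or(a: &AtomicUsize, mask: usize) -> usize {
//!     let mut old = a.load(Ordering::Relaxed);
//!     loop {
//!         // Retry until no other thread has changed the value in between.
//!         match a.compare_exchange_weak(old, old | mask, Ordering::Relaxed, Ordering::Relaxed) {
//!             Ok(prev) => return prev,
//!             Err(prev) => old = prev,
//!         }
//!     }
//! }
//!
//! let a = AtomicUsize::new(0b01);
//! assert_eq!(emulated_fetch_or(&a, 0b10), 0b01);
//! assert_eq!(a.load(Ordering::Relaxed), 0b11);
//! ```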
//!
//! Atomic operations may be implemented at the instruction layer with
//! larger-size atomics. For example, some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! have an impact on correctness of code; it's just something to be aware of.
//!
//! The atomic types in this module might not be available on all platforms. The
//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
//!
//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
//!   `AtomicI64` types.
//! * Legacy ARM platforms like ARMv4T and ARMv5TE have very limited hardware
//!   support for atomics. The bare-metal targets disable this module
//!   entirely, but the Linux targets [use the kernel] to assist (which comes
//!   with a performance penalty). It's not until ARMv6K onwards that ARM CPUs
//!   have support for load/store and Compare and Swap (CAS) atomics in hardware.
//! * ARMv6-M and ARMv8-M baseline targets (`thumbv6m-*` and
//!   `thumbv8m.base-*`) only provide `load` and `store` operations, and do
//!   not support Compare and Swap (CAS) operations, such as `swap`,
//!   `fetch_add`, etc. Full CAS support is available on ARMv7-M and ARMv8-M
//!   Mainline (`thumbv7m-*`, `thumbv7em*` and `thumbv8m.main-*`).
//!
//! [use the kernel]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
//!
//! Note that future platforms may be added that also do not have support for
//! some atomic operations. Maximally portable code will want to be careful
//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
//! generally the most portable, but even then they're not available everywhere.
//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
//! `core` does not.
//!
//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
//! compile based on the target's supported bit widths. It is a key-value
//! option set for each supported size, with values "8", "16", "32", "64",
//! "128", and "ptr" for pointer-sized atomics.
//!
//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
//!
//! # Atomic accesses to read-only memory
//!
//! In general, *all* atomic accesses on read-only memory are undefined behavior. For instance,
//! attempting to do a `compare_exchange` that will definitely fail (making it conceptually a
//! read-only operation) can still cause a segmentation fault if the underlying memory page is
//! mapped read-only. Since atomic `load`s might be implemented using compare-exchange operations,
//! even a `load` can fault on read-only memory.
//!
//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
//! is read-write; the only exceptions are memory created by `const` items or `static` items without
//! interior mutability, and memory that was specifically marked as read-only by the operating
//! system via platform-specific APIs.
//!
//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
//! undefined behavior. The exact size limit for what makes a load "sufficiently small" varies
//! depending on the target:
//!
//! | `target_arch` | Size limit |
//! |---------------|------------|
//! | `x86`, `arm`, `loongarch32`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
//!
//! Atomic loads that are larger than this limit, atomic loads with an ordering other than
//! `Relaxed`, and *all* atomic loads on targets not listed in the table might still work on
//! read-only memory under certain conditions, but that is not a stable guarantee and should not be
//! relied upon.
//!
//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
//! acquire fence instead.
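//!
//! A minimal sketch of that pattern (the function name is illustrative):
//!
//! ```
//! use std::sync::atomic::{fence, AtomicU32, Ordering};
//!
//! fn load_acquire_from_readonly(shared: &AtomicU32) -> u32 {
//!     // A sufficiently small `Relaxed` load is the one atomic access that is
//!     // guaranteed to work on read-only memory ...
//!     let value = shared.load(Ordering::Relaxed);
//!     // ... and a subsequent acquire fence provides the acquire ordering.
//!     fence(Ordering::Acquire);
//!     value
//! }
//! ```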
//!
//! # Examples
//!
//! A simple spinlock:
//!
//! ```ignore-wasm
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::{hint, thread};
//!
//! fn main() {
//!     let spinlock = Arc::new(AtomicUsize::new(1));
//!
//!     let spinlock_clone = Arc::clone(&spinlock);
//!
//!     let thread = thread::spawn(move || {
//!         spinlock_clone.store(0, Ordering::Release);
//!     });
//!
//!     // Wait for the other thread to release the lock
//!     while spinlock.load(Ordering::Acquire) != 0 {
//!         hint::spin_loop();
//!     }
//!
//!     if let Err(panic) = thread.join() {
//!         println!("Thread had an error: {panic:?}");
//!     }
//! }
//! ```
//!
//! Keep a global count of live threads:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
//!
//! // Note that Relaxed ordering doesn't synchronize anything
//! // except the global thread counter itself.
//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
//! // Note that this number may not be true at the moment of printing
//! // because some other thread may have changed the static value already.
//! println!("live threads: {}", old_thread_count + 1);
//! ```

#![stable(feature = "rust1", since = "1.0.0")]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
#![rustc_diagnostic_item = "atomic_mod"]
// Clippy complains about the pattern of "safe function calling unsafe function taking pointers".
// This happens with AtomicPtr intrinsics but is fine, as the pointers clippy is concerned about
// are just normal values that get loaded/stored, but not dereferenced.
#![allow(clippy::not_unsafe_ptr_arg_deref)]

use self::Ordering::*;
use crate::cell::UnsafeCell;
#[cfg(not(feature = "ferrocene_subset"))]
use crate::hint::spin_loop;
use crate::intrinsics::AtomicOrdering as AO;
#[cfg(not(feature = "ferrocene_subset"))]
use crate::{fmt, intrinsics};

// Ferrocene addition: imports for certified subset
#[cfg(feature = "ferrocene_subset")]
#[rustfmt::skip]
use crate::intrinsics;

trait Sealed {}

/// A marker trait for primitive types which can be modified atomically.
///
/// This is an implementation detail for <code>[Atomic]\<T></code> which may disappear or be replaced at any time.
///
/// # Safety
///
/// Types implementing this trait must be primitives that can be modified atomically.
///
/// The associated `Self::AtomicInner` type must have the same size and bit validity as `Self`,
/// but may have a higher alignment requirement, so the following `transmute`s are sound:
///
/// - `&mut Self::AtomicInner` as `&mut Self`
/// - `Self` as `Self::AtomicInner` or the reverse
#[unstable(
    feature = "atomic_internals",
    reason = "implementation detail which may disappear or be replaced at any time",
    issue = "none"
)]
#[expect(private_bounds)]
pub unsafe trait AtomicPrimitive: Sized + Copy + Sealed {
    /// Temporary implementation detail.
    type AtomicInner: Sized;
}

macro impl_atomic_primitive(
    $Atom:ident $(<$T:ident>)? ($Primitive:ty),
    size($size:literal),
    align($align:literal) $(,)?
) {
    impl $(<$T>)? Sealed for $Primitive {}

    #[unstable(
        feature = "atomic_internals",
        reason = "implementation detail which may disappear or be replaced at any time",
        issue = "none"
    )]
    #[cfg(target_has_atomic_load_store = $size)]
    unsafe impl $(<$T>)? AtomicPrimitive for $Primitive {
        type AtomicInner = $Atom $(<$T>)?;
    }
}
303
304impl_atomic_primitive!(AtomicBool(bool), size("8"), align(1));
305#[cfg(not(feature = "ferrocene_subset"))]
306impl_atomic_primitive!(AtomicI8(i8), size("8"), align(1));
307impl_atomic_primitive!(AtomicU8(u8), size("8"), align(1));
308#[cfg(not(feature = "ferrocene_subset"))]
309impl_atomic_primitive!(AtomicI16(i16), size("16"), align(2));
310#[cfg(not(feature = "ferrocene_subset"))]
311impl_atomic_primitive!(AtomicU16(u16), size("16"), align(2));
312#[cfg(not(feature = "ferrocene_subset"))]
313impl_atomic_primitive!(AtomicI32(i32), size("32"), align(4));
314impl_atomic_primitive!(AtomicU32(u32), size("32"), align(4));
315#[cfg(not(feature = "ferrocene_subset"))]
316impl_atomic_primitive!(AtomicI64(i64), size("64"), align(8));
317impl_atomic_primitive!(AtomicU64(u64), size("64"), align(8));
318#[cfg(not(feature = "ferrocene_subset"))]
319impl_atomic_primitive!(AtomicI128(i128), size("128"), align(16));
320#[cfg(not(feature = "ferrocene_subset"))]
321impl_atomic_primitive!(AtomicU128(u128), size("128"), align(16));
322
323#[cfg(target_pointer_width = "16")]
324#[cfg(not(feature = "ferrocene_subset"))]
325impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(2));
326#[cfg(target_pointer_width = "32")]
327#[cfg(not(feature = "ferrocene_subset"))]
328impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(4));
329#[cfg(target_pointer_width = "64")]
330#[cfg(not(feature = "ferrocene_subset"))]
331impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(8));
332
333#[cfg(target_pointer_width = "16")]
334impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(2));
335#[cfg(target_pointer_width = "32")]
336impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(4));
337#[cfg(target_pointer_width = "64")]
338impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(8));
339
340#[cfg(target_pointer_width = "16")]
341#[cfg(not(feature = "ferrocene_subset"))]
342impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(2));
343#[cfg(target_pointer_width = "32")]
344#[cfg(not(feature = "ferrocene_subset"))]
345impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(4));
346#[cfg(target_pointer_width = "64")]
347#[cfg(not(feature = "ferrocene_subset"))]
348impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(8));

/// A memory location which can be safely modified from multiple threads.
///
/// This has the same size and bit validity as the underlying type `T`. However,
/// the alignment of this type is always equal to its size, even on targets where
/// `T` has alignment less than its size.
///
/// For more about the differences between atomic types and non-atomic types as
/// well as information about the portability of this type, please see the
/// [module-level documentation].
///
/// **Note:** This type is only available on platforms that support atomic loads
/// and stores of `T`.
///
/// [module-level documentation]: crate::sync::atomic
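///
/// # Examples
///
/// A hedged usage sketch, assuming the unstable `generic_atomic` feature on a
/// target with 64-bit atomics:
///
/// ```
/// #![feature(generic_atomic)]
/// use std::sync::atomic::{Atomic, Ordering};
///
/// // `Atomic<u64>` resolves to `AtomicU64` on targets that support it.
/// static COUNTER: Atomic<u64> = Atomic::<u64>::new(0);
///
/// COUNTER.fetch_add(1, Ordering::Relaxed);
/// assert_eq!(COUNTER.load(Ordering::Relaxed), 1);
/// ```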
#[unstable(feature = "generic_atomic", issue = "130539")]
#[cfg(not(feature = "ferrocene_subset"))]
pub type Atomic<T> = <T as AtomicPrimitive>::AtomicInner;

// Some architectures don't have byte-sized atomics, which results in LLVM
// emulating them using a LL/SC loop. However for AtomicBool we can take
// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
// instead, which LLVM can emulate using a larger atomic OR/AND operation.
//
// This list should only contain architectures which have word-sized atomic-or/
// atomic-and instructions but don't natively support byte-sized atomics.
#[cfg(target_has_atomic = "8")]
const EMULATE_ATOMIC_BOOL: bool = cfg!(any(
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "loongarch32",
    target_arch = "loongarch64"
));

/// A boolean type which can be safely shared between threads.
///
/// This type has the same size, alignment, and bit validity as a [`bool`].
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of `u8`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicBool"]
#[repr(C, align(1))]
pub struct AtomicBool {
    v: UnsafeCell<u8>,
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_subset"))]
impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

// Send is implicitly implemented for AtomicBool.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Sync for AtomicBool {}

/// A raw pointer type which can be safely shared between threads.
///
/// This type has the same size and bit validity as a `*mut T`.
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of pointers. Its size depends on the target pointer's size.
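///
/// A brief usage sketch:
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let ptr = &mut 5;
/// let some_ptr = AtomicPtr::new(ptr);
///
/// let value = some_ptr.load(Ordering::Relaxed);
/// assert_eq!(unsafe { *value }, 5);
/// ```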
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicPtr"]
#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
#[cfg(not(feature = "ferrocene_subset"))]
pub struct AtomicPtr<T> {
    p: UnsafeCell<*mut T>,
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_subset"))]
impl<T> Default for AtomicPtr<T> {
    /// Creates a null `AtomicPtr<T>`.
    fn default() -> AtomicPtr<T> {
        AtomicPtr::new(crate::ptr::null_mut())
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_subset"))]
unsafe impl<T> Send for AtomicPtr<T> {}
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_subset"))]
unsafe impl<T> Sync for AtomicPtr<T> {}

/// Atomic memory orderings
///
/// Memory orderings specify the way atomic operations synchronize memory.
/// In its weakest [`Ordering::Relaxed`], only the memory directly touched by the
/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
/// operations synchronize other memory while additionally preserving a total order of such
/// operations across all threads.
///
/// Rust's memory orderings are [the same as those of
/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
///
/// For more information see the [nomicon].
///
/// [nomicon]: ../../../nomicon/atomics.html
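///
/// As a minimal message-passing sketch of an [`Ordering::Release`] store paired
/// with an [`Ordering::Acquire`] load (names are illustrative):
///
/// ```ignore-wasm
/// use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
/// use std::thread;
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// let t = thread::spawn(|| {
///     DATA.store(42, Ordering::Relaxed);
///     READY.store(true, Ordering::Release); // publishes everything before it
/// });
/// while !READY.load(Ordering::Acquire) {} // pairs with the release store
/// assert_eq!(DATA.load(Ordering::Relaxed), 42); // guaranteed to observe 42
/// t.join().unwrap();
/// ```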
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(not(feature = "ferrocene_subset"), derive(Copy, Clone, Debug, Eq, PartialEq, Hash))]
#[cfg_attr(feature = "ferrocene_subset", derive(Copy, Clone))]
#[non_exhaustive]
#[rustc_diagnostic_item = "Ordering"]
pub enum Ordering {
    /// No ordering constraints, only atomic operations.
    ///
    /// Corresponds to [`memory_order_relaxed`] in C++20.
    ///
    /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Relaxed,
    /// When coupled with a store, all previous operations become ordered
    /// before any load of this value with [`Acquire`] (or stronger) ordering.
    /// In particular, all previous writes become visible to all threads
    /// that perform an [`Acquire`] (or stronger) load of this value.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] load operation!
    ///
    /// This ordering is only applicable for operations that can perform a store.
    ///
    /// Corresponds to [`memory_order_release`] in C++20.
    ///
    /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Release,
    /// When coupled with a load, if the loaded value was written by a store operation with
    /// [`Release`] (or stronger) ordering, then all subsequent operations
    /// become ordered after that store. In particular, all subsequent loads will see data
    /// written before the store.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] store operation!
    ///
    /// This ordering is only applicable for operations that can perform a load.
    ///
    /// Corresponds to [`memory_order_acquire`] in C++20.
    ///
    /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Acquire,
    /// Has the effects of both [`Acquire`] and [`Release`] together:
    /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
    ///
    /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
    /// not performing any store and hence it has just [`Acquire`] ordering. However,
    /// `AcqRel` will never perform [`Relaxed`] accesses.
    ///
    /// This ordering is only applicable for operations that combine both loads and stores.
    ///
    /// Corresponds to [`memory_order_acq_rel`] in C++20.
    ///
    /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    AcqRel,
    /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
    /// operations, respectively) with the additional guarantee that all threads see all
    /// sequentially consistent operations in the same order.
    ///
    /// Corresponds to [`memory_order_seq_cst`] in C++20.
    ///
    /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    SeqCst,
}

/// An [`AtomicBool`] initialized to `false`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(
    since = "1.34.0",
    note = "the `new` function is now preferred",
    suggestion = "AtomicBool::new(false)"
)]
#[cfg(not(feature = "ferrocene_subset"))]
pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);

#[cfg(target_has_atomic_load_store = "8")]
impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
    #[must_use]
    pub const fn new(v: bool) -> AtomicBool {
        AtomicBool { v: UnsafeCell::new(v as u8) }
    }

    /// Creates a new `AtomicBool` from a pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{self, AtomicBool};
    ///
    /// // Get a pointer to an allocated value
    /// let ptr: *mut bool = Box::into_raw(Box::new(false));
    ///
    /// assert!(ptr.cast::<AtomicBool>().is_aligned());
    ///
    /// {
    ///     // Create an atomic view of the allocated value
    ///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
    ///
    ///     // Use `atomic` for atomic operations, possibly share it with other threads
    ///     atomic.store(true, atomic::Ordering::Relaxed);
    /// }
    ///
    /// // It's ok to non-atomically access the value behind `ptr`,
    /// // since the reference to the atomic ended its lifetime in the block above
    /// assert_eq!(unsafe { *ptr }, true);
    ///
    /// // Deallocate the value
    /// unsafe { drop(Box::from_raw(ptr)) }
    /// ```
    ///
    /// # Safety
    ///
    /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that this is always true, since
    ///   `align_of::<AtomicBool>() == 1`).
    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
    ///   allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
    ///   sizes, without synchronization.
    ///
    /// [valid]: crate::ptr#safety
    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
    #[inline]
    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
        // SAFETY: guaranteed by the caller
        unsafe { &*ptr.cast() }
    }

    /// Returns a mutable reference to the underlying [`bool`].
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = AtomicBool::new(true);
    /// assert_eq!(*some_bool.get_mut(), true);
    /// *some_bool.get_mut() = false;
    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub fn get_mut(&mut self) -> &mut bool {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(self.v.get() as *mut bool) }
    }

    /// Gets atomic access to a `&mut bool`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = true;
    /// let a = AtomicBool::from_mut(&mut some_bool);
    /// a.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool, false);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub fn from_mut(v: &mut bool) -> &mut Self {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut bool as *mut Self) }
    }

    /// Gets non-atomic access to a `&mut [AtomicBool]` slice.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```ignore-wasm
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
    ///
    /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
    /// assert_eq!(view, [false; 10]);
    /// view[..5].copy_from_slice(&[true; 5]);
    ///
    /// std::thread::scope(|s| {
    ///     for t in &some_bools[..5] {
    ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
    ///     }
    ///
    ///     for f in &some_bools[5..] {
    ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
    ///     }
    /// });
    /// ```
    #[inline]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
    }

    /// Gets atomic access to a `&mut [bool]` slice.
    ///
    /// # Examples
    ///
    /// ```rust,ignore-wasm
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [false; 10];
    /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
    /// std::thread::scope(|s| {
    ///     for i in 0..a.len() {
    ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
    ///     }
    /// });
    /// assert_eq!(some_bools, [true; 10]);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
    }

    /// Consumes the atomic and returns the contained value.
    ///
    /// This is safe because passing `self` by value guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let some_bool = AtomicBool::new(true);
    /// assert_eq!(some_bool.into_inner(), true);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
    #[cfg(not(feature = "ferrocene_subset"))]
    pub const fn into_inner(self) -> bool {
        self.v.into_inner() != 0
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn load(&self, order: Ordering) -> bool {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe { atomic_load(self.v.get(), order) != 0 }
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn store(&self, val: bool, order: Ordering) {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe {
            atomic_store(self.v.get(), val as u8, order);
        }
    }

    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        if EMULATE_ATOMIC_BOOL {
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is always the previous value. If it is equal to `current`, then the value
    /// was updated.
    ///
    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
    /// happens, and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
    ///
    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
    /// memory orderings:
    ///
    /// Original | Success | Failure
    /// -------- | ------- | -------
    /// Relaxed  | Relaxed | Relaxed
    /// Acquire  | Acquire | Acquire
    /// Release  | Release | Relaxed
    /// AcqRel   | AcqRel  | Acquire
    /// SeqCst   | SeqCst  | SeqCst
    ///
    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
    /// rather than to infer success vs failure based on the value that was read.
    ///
    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
    /// which allows the compiler to generate better assembly code when the compare and swap
    /// is used in a loop.
    ///
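    /// As a hedged migration sketch, a call such as `a.compare_and_swap(false, true,
    /// Ordering::AcqRel)` maps, per the table above, to:
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let a = AtomicBool::new(false);
    /// // Success ordering `AcqRel`, failure ordering `Acquire`, and
    /// // `unwrap_or_else` collapses `Ok`/`Err` back into the plain previous value.
    /// let old = a
    ///     .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
    ///     .unwrap_or_else(|x| x);
    /// assert_eq!(old, false);
    /// ```
    ///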
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[cfg(not(feature = "ferrocene_subset"))]
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[deprecated(
        since = "1.50.0",
        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
    )]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
            Ok(x) => x,
            Err(x) => x,
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_exchange(true,
    ///                                       false,
    ///                                       Ordering::Acquire,
    ///                                       Ordering::Relaxed),
    ///            Ok(true));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_exchange(true, true,
    ///                                       Ordering::SeqCst,
    ///                                       Ordering::Acquire),
    ///            Err(false));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    ///
    /// # Considerations
    ///
    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
    /// of CAS operations. In particular, a load of the value followed by a successful
    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
    /// changed the value in the interim. This is usually important when the *equality* check in
    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
    /// does not necessarily imply identity. In this case, `compare_exchange` can lead to the
    /// [ABA problem].
    ///
    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            // Pick the strongest ordering from success and failure.
            let order = match (success, failure) {
                (SeqCst, _) => SeqCst,
                (_, SeqCst) => SeqCst,
                (AcqRel, _) => AcqRel,
                (_, AcqRel) => {
                    panic!("there is no such thing as an acquire-release failure ordering")
                }
                (Release, Acquire) => AcqRel,
                (Acquire, _) => Acquire,
                (_, Acquire) => Acquire,
                (Release, Relaxed) => Release,
                (_, Release) => panic!("there is no such thing as a release failure ordering"),
                (Relaxed, Relaxed) => Relaxed,
            };
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            match unsafe {
                atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
            } {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    ///
    /// # Considerations
    ///
    /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual
    /// downsides of CAS operations. In particular, a load of the value followed by a successful
    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
    /// changed the value in the interim. This is usually important when the *equality* check in
    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
    /// does not necessarily imply identity. In this case, `compare_exchange` can lead to the
    /// [ABA problem].
    ///
    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            return self.compare_exchange(current, new, success, failure);
        }

        // SAFETY: data races are prevented by atomic intrinsics.
        match unsafe {
            atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
        } {
            Ok(x) => Ok(x != 0),
            Err(x) => Err(x != 0),
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
        // We can't use atomic_nand here because it can result in a bool with
        // an invalid value. This happens because the atomic operation is done
        // with an 8-bit integer internally, which would set the upper 7 bits.
        // So we just use fetch_xor or swap instead.
        if val {
            // !(x & true) == !x
            // We must invert the bool.
            self.fetch_xor(true, order)
        } else {
            // !(x & false) == true
            // We must set the bool to true.
            self.swap(true, order)
        }
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "not" with a boolean value.
    ///
    /// Performs a logical "not" operation on the current value, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[stable(feature = "atomic_bool_fetch_not", since = "1.81.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub fn fetch_not(&self, order: Ordering) -> bool {
        self.fetch_xor(true, order)
    }

    /// Returns a mutable pointer to the underlying [`bool`].
    ///
    /// Doing non-atomic reads and writes on the resulting boolean can be a data race.
    /// This method is mostly useful for FFI, where the function signature may use
    /// `*mut bool` instead of `&AtomicBool`.
    ///
    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
    /// atomic types work with interior mutability. All modifications of an atomic change the value
    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
    /// requirements of the [memory model].
    ///
    /// # Examples
    ///
    /// ```ignore (extern-declaration)
    /// # fn main() {
    /// use std::sync::atomic::AtomicBool;
    ///
    /// extern "C" {
    ///     fn my_atomic_op(arg: *mut bool);
    /// }
    ///
    /// let mut atomic = AtomicBool::new(true);
    /// unsafe {
    ///     my_atomic_op(atomic.as_ptr());
    /// }
    /// # }
    /// ```
    ///
    /// [memory model]: self#memory-model-for-atomic-accesses
    #[inline]
    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
    #[rustc_never_returns_null_ptr]
    #[cfg(not(feature = "ferrocene_subset"))]
    #[rustc_should_not_be_called_on_const_items]
    pub const fn as_ptr(&self) -> *mut bool {
        self.v.get().cast()
    }
1329
1330 /// Fetches the value, and applies a function to it that returns an optional
1331 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1332 /// returned `Some(_)`, else `Err(previous_value)`.
1333 ///
1334 /// Note: This may call the function multiple times if the value has been
1335 /// changed from other threads in the meantime, as long as the function
1336 /// returns `Some(_)`, but the function will have been applied only once to
1337 /// the stored value.
1338 ///
1339 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1340 /// ordering of this operation. The first describes the required ordering for
1341 /// when the operation finally succeeds while the second describes the
1342 /// required ordering for loads. These correspond to the success and failure
1343 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1344 ///
1345 /// Using [`Acquire`] as success ordering makes the store part of this
1346 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1347 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1348 /// [`Acquire`] or [`Relaxed`].
1349 ///
1350 /// **Note:** This method is only available on platforms that support atomic
1351 /// operations on `u8`.
1352 ///
1353 /// # Considerations
1354 ///
1355 /// This method is not magic; it is not provided by the hardware, and does not act like a
1356 /// critical section or mutex.
1357 ///
1358 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1359 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1360 ///
1361 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1362 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1363 ///
1364 /// # Examples
1365 ///
1366 /// ```rust
1367 /// use std::sync::atomic::{AtomicBool, Ordering};
1368 ///
1369 /// let x = AtomicBool::new(false);
1370 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1371 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1372 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1373 /// assert_eq!(x.load(Ordering::SeqCst), false);
1374 /// ```
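    ///
    /// As a further sketch (illustrative, not the only pattern), `fetch_update`
    /// can claim a flag so that exactly one caller wins even under contention:
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let claimed = AtomicBool::new(false);
    /// // `Ok(false)` for exactly one caller; every later caller gets `Err(true)`.
    /// let won = claimed
    ///     .fetch_update(Ordering::AcqRel, Ordering::Acquire, |c| (!c).then_some(true))
    ///     .is_ok();
    /// assert!(won);
    /// ```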
1375 #[inline]
1376 #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1377 #[cfg(target_has_atomic = "8")]
1378 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1379 #[cfg(not(feature = "ferrocene_subset"))]
1380 #[rustc_should_not_be_called_on_const_items]
1381 pub fn fetch_update<F>(
1382 &self,
1383 set_order: Ordering,
1384 fetch_order: Ordering,
1385 mut f: F,
1386 ) -> Result<bool, bool>
1387 where
1388 F: FnMut(bool) -> Option<bool>,
1389 {
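        // Classic CAS loop: each failed `compare_exchange_weak` hands back the
        // freshly observed value, which is fed to `f` on the next iteration.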
1390 let mut prev = self.load(fetch_order);
1391 while let Some(next) = f(prev) {
1392 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1393 x @ Ok(_) => return x,
1394 Err(next_prev) => prev = next_prev,
1395 }
1396 }
1397 Err(prev)
1398 }
1399
1400 /// Fetches the value, and applies a function to it that returns an optional
1401 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1402 /// returned `Some(_)`, else `Err(previous_value)`.
1403 ///
1404 /// See also: [`update`](`AtomicBool::update`).
1405 ///
1406 /// Note: This may call the function multiple times if the value has been
1407 /// changed from other threads in the meantime, as long as the function
1408 /// returns `Some(_)`, but the function will have been applied only once to
1409 /// the stored value.
1410 ///
1411 /// `try_update` takes two [`Ordering`] arguments to describe the memory
1412 /// ordering of this operation. The first describes the required ordering for
1413 /// when the operation finally succeeds while the second describes the
1414 /// required ordering for loads. These correspond to the success and failure
1415 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1416 ///
1417 /// Using [`Acquire`] as success ordering makes the store part of this
1418 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1419 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1420 /// [`Acquire`] or [`Relaxed`].
1421 ///
1422 /// **Note:** This method is only available on platforms that support atomic
1423 /// operations on `u8`.
1424 ///
1425 /// # Considerations
1426 ///
1427 /// This method is not magic; it is not provided by the hardware, and does not act like a
1428 /// critical section or mutex.
1429 ///
1430 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1431 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1432 ///
1433 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1434 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1435 ///
1436 /// # Examples
1437 ///
1438 /// ```rust
1439 /// #![feature(atomic_try_update)]
1440 /// use std::sync::atomic::{AtomicBool, Ordering};
1441 ///
1442 /// let x = AtomicBool::new(false);
1443 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1444 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1445 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1446 /// assert_eq!(x.load(Ordering::SeqCst), false);
1447 /// ```
1448 #[inline]
1449 #[unstable(feature = "atomic_try_update", issue = "135894")]
1450 #[cfg(target_has_atomic = "8")]
1451 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1452 #[cfg(not(feature = "ferrocene_subset"))]
1453 #[rustc_should_not_be_called_on_const_items]
1454 pub fn try_update(
1455 &self,
1456 set_order: Ordering,
1457 fetch_order: Ordering,
1458 f: impl FnMut(bool) -> Option<bool>,
1459 ) -> Result<bool, bool> {
1460 // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
1461 // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
1462 self.fetch_update(set_order, fetch_order, f)
1463 }
1464
    /// Fetches the value, and applies a function to it that returns a new value.
1466 /// The new value is stored and the old value is returned.
1467 ///
1468 /// See also: [`try_update`](`AtomicBool::try_update`).
1469 ///
1470 /// Note: This may call the function multiple times if the value has been changed from other threads in
1471 /// the meantime, but the function will have been applied only once to the stored value.
1472 ///
1473 /// `update` takes two [`Ordering`] arguments to describe the memory
1474 /// ordering of this operation. The first describes the required ordering for
1475 /// when the operation finally succeeds while the second describes the
1476 /// required ordering for loads. These correspond to the success and failure
1477 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1478 ///
1479 /// Using [`Acquire`] as success ordering makes the store part
1480 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
1481 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1482 ///
1483 /// **Note:** This method is only available on platforms that support atomic operations on `u8`.
1484 ///
1485 /// # Considerations
1486 ///
1487 /// This method is not magic; it is not provided by the hardware, and does not act like a
1488 /// critical section or mutex.
1489 ///
1490 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1491 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1492 ///
1493 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1494 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1495 ///
1496 /// # Examples
1497 ///
1498 /// ```rust
1499 /// #![feature(atomic_try_update)]
1500 ///
1501 /// use std::sync::atomic::{AtomicBool, Ordering};
1502 ///
1503 /// let x = AtomicBool::new(false);
1504 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), false);
1505 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), true);
1506 /// assert_eq!(x.load(Ordering::SeqCst), false);
1507 /// ```
1508 #[inline]
1509 #[unstable(feature = "atomic_try_update", issue = "135894")]
1510 #[cfg(target_has_atomic = "8")]
1511 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1512 #[cfg(not(feature = "ferrocene_subset"))]
1513 #[rustc_should_not_be_called_on_const_items]
1514 pub fn update(
1515 &self,
1516 set_order: Ordering,
1517 fetch_order: Ordering,
1518 mut f: impl FnMut(bool) -> bool,
1519 ) -> bool {
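        // Like `fetch_update`, but `f` is infallible: loop until the weak
        // exchange succeeds.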
1520 let mut prev = self.load(fetch_order);
1521 loop {
1522 match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
1523 Ok(x) => break x,
1524 Err(next_prev) => prev = next_prev,
1525 }
1526 }
1527 }
1528}
1529
1530#[cfg(target_has_atomic_load_store = "ptr")]
1531#[cfg(not(feature = "ferrocene_subset"))]
1532impl<T> AtomicPtr<T> {
1533 /// Creates a new `AtomicPtr`.
1534 ///
1535 /// # Examples
1536 ///
1537 /// ```
1538 /// use std::sync::atomic::AtomicPtr;
1539 ///
1540 /// let ptr = &mut 5;
1541 /// let atomic_ptr = AtomicPtr::new(ptr);
1542 /// ```
1543 #[inline]
1544 #[stable(feature = "rust1", since = "1.0.0")]
1545 #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
1546 pub const fn new(p: *mut T) -> AtomicPtr<T> {
1547 AtomicPtr { p: UnsafeCell::new(p) }
1548 }
1549
1550 /// Creates a new `AtomicPtr` from a pointer.
1551 ///
1552 /// # Examples
1553 ///
1554 /// ```
1555 /// use std::sync::atomic::{self, AtomicPtr};
1556 ///
1557 /// // Get a pointer to an allocated value
1558 /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
1559 ///
1560 /// assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
1561 ///
1562 /// {
1563 /// // Create an atomic view of the allocated value
1564 /// let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
1565 ///
1566 /// // Use `atomic` for atomic operations, possibly share it with other threads
1567 /// atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
1568 /// }
1569 ///
1570 /// // It's ok to non-atomically access the value behind `ptr`,
1571 /// // since the reference to the atomic ended its lifetime in the block above
1572 /// assert!(!unsafe { *ptr }.is_null());
1573 ///
1574 /// // Deallocate the value
1575 /// unsafe { drop(Box::from_raw(ptr)) }
1576 /// ```
1577 ///
1578 /// # Safety
1579 ///
1580 /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1581 /// can be bigger than `align_of::<*mut T>()`).
1582 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1583 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
1584 /// allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
1585 /// sizes, without synchronization.
1586 ///
1587 /// [valid]: crate::ptr#safety
1588 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
1589 #[inline]
1590 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
1591 #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
1592 pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
1593 // SAFETY: guaranteed by the caller
1594 unsafe { &*ptr.cast() }
1595 }
1596
1597 /// Returns a mutable reference to the underlying pointer.
1598 ///
1599 /// This is safe because the mutable reference guarantees that no other threads are
1600 /// concurrently accessing the atomic data.
1601 ///
1602 /// # Examples
1603 ///
1604 /// ```
1605 /// use std::sync::atomic::{AtomicPtr, Ordering};
1606 ///
1607 /// let mut data = 10;
1608 /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1609 /// let mut other_data = 5;
1610 /// *atomic_ptr.get_mut() = &mut other_data;
1611 /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1612 /// ```
1613 #[inline]
1614 #[stable(feature = "atomic_access", since = "1.15.0")]
1615 pub fn get_mut(&mut self) -> &mut *mut T {
1616 self.p.get_mut()
1617 }
1618
1619 /// Gets atomic access to a pointer.
1620 ///
    /// **Note:** This function is only available on targets where `AtomicPtr<T>` has the same alignment as `*mut T`.
1622 ///
1623 /// # Examples
1624 ///
1625 /// ```
1626 /// #![feature(atomic_from_mut)]
1627 /// use std::sync::atomic::{AtomicPtr, Ordering};
1628 ///
1629 /// let mut data = 123;
1630 /// let mut some_ptr = &mut data as *mut i32;
1631 /// let a = AtomicPtr::from_mut(&mut some_ptr);
1632 /// let mut other_data = 456;
1633 /// a.store(&mut other_data, Ordering::Relaxed);
1634 /// assert_eq!(unsafe { *some_ptr }, 456);
1635 /// ```
1636 #[inline]
1637 #[cfg(target_has_atomic_equal_alignment = "ptr")]
1638 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1639 pub fn from_mut(v: &mut *mut T) -> &mut Self {
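        // Compile-time alignment check: `[(); N]` with
        // `N = align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()` only matches the
        // empty pattern `[]` when `N == 0`, so any alignment mismatch fails to compile.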
1640 let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
1641 // SAFETY:
1642 // - the mutable reference guarantees unique ownership.
1643 // - the alignment of `*mut T` and `Self` is the same on all platforms
1644 // supported by rust, as verified above.
1645 unsafe { &mut *(v as *mut *mut T as *mut Self) }
1646 }
1647
1648 /// Gets non-atomic access to a `&mut [AtomicPtr]` slice.
1649 ///
1650 /// This is safe because the mutable reference guarantees that no other threads are
1651 /// concurrently accessing the atomic data.
1652 ///
1653 /// # Examples
1654 ///
1655 /// ```ignore-wasm
1656 /// #![feature(atomic_from_mut)]
1657 /// use std::ptr::null_mut;
1658 /// use std::sync::atomic::{AtomicPtr, Ordering};
1659 ///
1660 /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
1661 ///
1662 /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
1663 /// assert_eq!(view, [null_mut::<String>(); 10]);
1664 /// view
1665 /// .iter_mut()
1666 /// .enumerate()
1667 /// .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
1668 ///
1669 /// std::thread::scope(|s| {
1670 /// for ptr in &some_ptrs {
1671 /// s.spawn(move || {
1672 /// let ptr = ptr.load(Ordering::Relaxed);
1673 /// assert!(!ptr.is_null());
1674 ///
1675 /// let name = unsafe { Box::from_raw(ptr) };
1676 /// println!("Hello, {name}!");
1677 /// });
1678 /// }
1679 /// });
1680 /// ```
1681 #[inline]
1682 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1683 pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
1684 // SAFETY: the mutable reference guarantees unique ownership.
1685 unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
1686 }
1687
1688 /// Gets atomic access to a slice of pointers.
1689 ///
    /// **Note:** This function is only available on targets where `AtomicPtr<T>` has the same alignment as `*mut T`.
1691 ///
1692 /// # Examples
1693 ///
1694 /// ```ignore-wasm
1695 /// #![feature(atomic_from_mut)]
1696 /// use std::ptr::null_mut;
1697 /// use std::sync::atomic::{AtomicPtr, Ordering};
1698 ///
1699 /// let mut some_ptrs = [null_mut::<String>(); 10];
1700 /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
1701 /// std::thread::scope(|s| {
1702 /// for i in 0..a.len() {
1703 /// s.spawn(move || {
1704 /// let name = Box::new(format!("thread{i}"));
1705 /// a[i].store(Box::into_raw(name), Ordering::Relaxed);
1706 /// });
1707 /// }
1708 /// });
1709 /// for p in some_ptrs {
1710 /// assert!(!p.is_null());
1711 /// let name = unsafe { Box::from_raw(p) };
1712 /// println!("Hello, {name}!");
1713 /// }
1714 /// ```
1715 #[inline]
1716 #[cfg(target_has_atomic_equal_alignment = "ptr")]
1717 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1718 pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
1719 // SAFETY:
1720 // - the mutable reference guarantees unique ownership.
        // - the alignment of `*mut T` and `Self` is the same on all platforms
        //   supported by rust, as guaranteed by the `target_has_atomic_equal_alignment`
        //   cfg on this method.
1723 unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
1724 }
1725
1726 /// Consumes the atomic and returns the contained value.
1727 ///
1728 /// This is safe because passing `self` by value guarantees that no other threads are
1729 /// concurrently accessing the atomic data.
1730 ///
1731 /// # Examples
1732 ///
1733 /// ```
1734 /// use std::sync::atomic::AtomicPtr;
1735 ///
1736 /// let mut data = 5;
1737 /// let atomic_ptr = AtomicPtr::new(&mut data);
1738 /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1739 /// ```
1740 #[inline]
1741 #[stable(feature = "atomic_access", since = "1.15.0")]
1742 #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
1743 pub const fn into_inner(self) -> *mut T {
1744 self.p.into_inner()
1745 }
1746
1747 /// Loads a value from the pointer.
1748 ///
1749 /// `load` takes an [`Ordering`] argument which describes the memory ordering
1750 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1751 ///
1752 /// # Panics
1753 ///
1754 /// Panics if `order` is [`Release`] or [`AcqRel`].
1755 ///
1756 /// # Examples
1757 ///
1758 /// ```
1759 /// use std::sync::atomic::{AtomicPtr, Ordering};
1760 ///
1761 /// let ptr = &mut 5;
1762 /// let some_ptr = AtomicPtr::new(ptr);
1763 ///
1764 /// let value = some_ptr.load(Ordering::Relaxed);
1765 /// ```
1766 #[inline]
1767 #[stable(feature = "rust1", since = "1.0.0")]
1768 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1769 pub fn load(&self, order: Ordering) -> *mut T {
1770 // SAFETY: data races are prevented by atomic intrinsics.
1771 unsafe { atomic_load(self.p.get(), order) }
1772 }
1773
1774 /// Stores a value into the pointer.
1775 ///
1776 /// `store` takes an [`Ordering`] argument which describes the memory ordering
1777 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1778 ///
1779 /// # Panics
1780 ///
1781 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1782 ///
1783 /// # Examples
1784 ///
1785 /// ```
1786 /// use std::sync::atomic::{AtomicPtr, Ordering};
1787 ///
1788 /// let ptr = &mut 5;
1789 /// let some_ptr = AtomicPtr::new(ptr);
1790 ///
1791 /// let other_ptr = &mut 10;
1792 ///
1793 /// some_ptr.store(other_ptr, Ordering::Relaxed);
1794 /// ```
1795 #[inline]
1796 #[stable(feature = "rust1", since = "1.0.0")]
1797 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1798 #[rustc_should_not_be_called_on_const_items]
1799 pub fn store(&self, ptr: *mut T, order: Ordering) {
1800 // SAFETY: data races are prevented by atomic intrinsics.
1801 unsafe {
1802 atomic_store(self.p.get(), ptr, order);
1803 }
1804 }
1805
1806 /// Stores a value into the pointer, returning the previous value.
1807 ///
1808 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1809 /// of this operation. All ordering modes are possible. Note that using
1810 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1811 /// using [`Release`] makes the load part [`Relaxed`].
1812 ///
1813 /// **Note:** This method is only available on platforms that support atomic
1814 /// operations on pointers.
1815 ///
1816 /// # Examples
1817 ///
1818 /// ```
1819 /// use std::sync::atomic::{AtomicPtr, Ordering};
1820 ///
1821 /// let ptr = &mut 5;
1822 /// let some_ptr = AtomicPtr::new(ptr);
1823 ///
1824 /// let other_ptr = &mut 10;
1825 ///
1826 /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1827 /// ```
1828 #[inline]
1829 #[stable(feature = "rust1", since = "1.0.0")]
1830 #[cfg(target_has_atomic = "ptr")]
1831 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1832 #[rustc_should_not_be_called_on_const_items]
1833 pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1834 // SAFETY: data races are prevented by atomic intrinsics.
1835 unsafe { atomic_swap(self.p.get(), ptr, order) }
1836 }
1837
1838 /// Stores a value into the pointer if the current value is the same as the `current` value.
1839 ///
1840 /// The return value is always the previous value. If it is equal to `current`, then the value
1841 /// was updated.
1842 ///
1843 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1844 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1845 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1846 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1847 /// happens, and using [`Release`] makes the load part [`Relaxed`].
1848 ///
1849 /// **Note:** This method is only available on platforms that support atomic
1850 /// operations on pointers.
1851 ///
1852 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1853 ///
1854 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1855 /// memory orderings:
1856 ///
1857 /// Original | Success | Failure
1858 /// -------- | ------- | -------
1859 /// Relaxed | Relaxed | Relaxed
1860 /// Acquire | Acquire | Acquire
1861 /// Release | Release | Relaxed
1862 /// AcqRel | AcqRel | Acquire
1863 /// SeqCst | SeqCst | SeqCst
1864 ///
1865 /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
1866 /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
1867 /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
1868 /// rather than to infer success vs failure based on the value that was read.
1869 ///
1870 /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
1871 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1872 /// which allows the compiler to generate better assembly code when the compare and swap
1873 /// is used in a loop.
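    ///
    /// For example, a deprecated call such as
    /// `some_ptr.compare_and_swap(ptr, other_ptr, Ordering::SeqCst)` could be
    /// migrated as follows (an illustrative sketch using the table above):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    /// let other_ptr = &mut 10;
    ///
    /// // `SeqCst` maps to (`SeqCst`, `SeqCst`) for the success and failure orderings.
    /// let value = some_ptr
    ///     .compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::SeqCst)
    ///     .unwrap_or_else(|x| x);
    /// assert_eq!(value, ptr as *mut i32);
    /// ```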
1874 ///
1875 /// # Examples
1876 ///
1877 /// ```
1878 /// use std::sync::atomic::{AtomicPtr, Ordering};
1879 ///
1880 /// let ptr = &mut 5;
1881 /// let some_ptr = AtomicPtr::new(ptr);
1882 ///
1883 /// let other_ptr = &mut 10;
1884 ///
1885 /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1886 /// ```
1887 #[inline]
1888 #[stable(feature = "rust1", since = "1.0.0")]
1889 #[deprecated(
1890 since = "1.50.0",
1891 note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1892 )]
1893 #[cfg(target_has_atomic = "ptr")]
1894 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1895 #[rustc_should_not_be_called_on_const_items]
1896 pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
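        // `compare_and_swap` did not distinguish success from failure, so collapse
        // the `Result` and derive the failure ordering from the single `order`.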
1897 match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1898 Ok(x) => x,
1899 Err(x) => x,
1900 }
1901 }
1902
1903 /// Stores a value into the pointer if the current value is the same as the `current` value.
1904 ///
1905 /// The return value is a result indicating whether the new value was written and containing
1906 /// the previous value. On success this value is guaranteed to be equal to `current`.
1907 ///
1908 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1909 /// ordering of this operation. `success` describes the required ordering for the
1910 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1911 /// `failure` describes the required ordering for the load operation that takes place when
1912 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1913 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1914 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1915 ///
1916 /// **Note:** This method is only available on platforms that support atomic
1917 /// operations on pointers.
1918 ///
1919 /// # Examples
1920 ///
1921 /// ```
1922 /// use std::sync::atomic::{AtomicPtr, Ordering};
1923 ///
1924 /// let ptr = &mut 5;
1925 /// let some_ptr = AtomicPtr::new(ptr);
1926 ///
1927 /// let other_ptr = &mut 10;
1928 ///
1929 /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1930 /// Ordering::SeqCst, Ordering::Relaxed);
1931 /// ```
1932 ///
1933 /// # Considerations
1934 ///
1935 /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
1936 /// of CAS operations. In particular, a load of the value followed by a successful
1937 /// `compare_exchange` with the previous load *does not ensure* that other threads have not
1938 /// changed the value in the interim. This is usually important when the *equality* check in
1939 /// the `compare_exchange` is being used to check the *identity* of a value, but equality
1940 /// does not necessarily imply identity. This is a particularly common case for pointers, as
1941 /// a pointer holding the same address does not imply that the same object exists at that
1942 /// address! In this case, `compare_exchange` can lead to the [ABA problem].
1943 ///
1944 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1945 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1946 #[inline]
1947 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1948 #[cfg(target_has_atomic = "ptr")]
1949 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1950 #[rustc_should_not_be_called_on_const_items]
1951 pub fn compare_exchange(
1952 &self,
1953 current: *mut T,
1954 new: *mut T,
1955 success: Ordering,
1956 failure: Ordering,
1957 ) -> Result<*mut T, *mut T> {
1958 // SAFETY: data races are prevented by atomic intrinsics.
1959 unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
1960 }
1961
1962 /// Stores a value into the pointer if the current value is the same as the `current` value.
1963 ///
1964 /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1965 /// comparison succeeds, which can result in more efficient code on some platforms. The
1966 /// return value is a result indicating whether the new value was written and containing the
1967 /// previous value.
1968 ///
1969 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1970 /// ordering of this operation. `success` describes the required ordering for the
1971 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1972 /// `failure` describes the required ordering for the load operation that takes place when
1973 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1974 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1975 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1976 ///
1977 /// **Note:** This method is only available on platforms that support atomic
1978 /// operations on pointers.
1979 ///
1980 /// # Examples
1981 ///
1982 /// ```
1983 /// use std::sync::atomic::{AtomicPtr, Ordering};
1984 ///
1985 /// let some_ptr = AtomicPtr::new(&mut 5);
1986 ///
1987 /// let new = &mut 10;
1988 /// let mut old = some_ptr.load(Ordering::Relaxed);
1989 /// loop {
1990 /// match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1991 /// Ok(_) => break,
1992 /// Err(x) => old = x,
1993 /// }
1994 /// }
1995 /// ```
1996 ///
1997 /// # Considerations
1998 ///
    /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual downsides
2000 /// of CAS operations. In particular, a load of the value followed by a successful
2001 /// `compare_exchange` with the previous load *does not ensure* that other threads have not
2002 /// changed the value in the interim. This is usually important when the *equality* check in
2003 /// the `compare_exchange` is being used to check the *identity* of a value, but equality
2004 /// does not necessarily imply identity. This is a particularly common case for pointers, as
2005 /// a pointer holding the same address does not imply that the same object exists at that
2006 /// address! In this case, `compare_exchange` can lead to the [ABA problem].
2007 ///
2008 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2009 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2010 #[inline]
2011 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
2012 #[cfg(target_has_atomic = "ptr")]
2013 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2014 #[rustc_should_not_be_called_on_const_items]
2015 pub fn compare_exchange_weak(
2016 &self,
2017 current: *mut T,
2018 new: *mut T,
2019 success: Ordering,
2020 failure: Ordering,
2021 ) -> Result<*mut T, *mut T> {
2022 // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
2023 // but we know for sure that the pointer is valid (we just got it from
2024 // an `UnsafeCell` that we have by reference) and the atomic operation
2025 // itself allows us to safely mutate the `UnsafeCell` contents.
2026 unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
2027 }
2028
2029 /// Fetches the value, and applies a function to it that returns an optional
2030 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2031 /// returned `Some(_)`, else `Err(previous_value)`.
2032 ///
2033 /// Note: This may call the function multiple times if the value has been
2034 /// changed from other threads in the meantime, as long as the function
2035 /// returns `Some(_)`, but the function will have been applied only once to
2036 /// the stored value.
2037 ///
2038 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
2039 /// ordering of this operation. The first describes the required ordering for
2040 /// when the operation finally succeeds while the second describes the
2041 /// required ordering for loads. These correspond to the success and failure
2042 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2043 ///
2044 /// Using [`Acquire`] as success ordering makes the store part of this
2045 /// operation [`Relaxed`], and using [`Release`] makes the final successful
2046 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2047 /// [`Acquire`] or [`Relaxed`].
2048 ///
2049 /// **Note:** This method is only available on platforms that support atomic
2050 /// operations on pointers.
2051 ///
2052 /// # Considerations
2053 ///
2054 /// This method is not magic; it is not provided by the hardware, and does not act like a
2055 /// critical section or mutex.
2056 ///
2057 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2058 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2059 /// which is a particularly common pitfall for pointers!
2060 ///
2061 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2062 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2063 ///
2064 /// # Examples
2065 ///
2066 /// ```rust
2067 /// use std::sync::atomic::{AtomicPtr, Ordering};
2068 ///
2069 /// let ptr: *mut _ = &mut 5;
2070 /// let some_ptr = AtomicPtr::new(ptr);
2071 ///
2072 /// let new: *mut _ = &mut 10;
2073 /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2074 /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2075 /// if x == ptr {
2076 /// Some(new)
2077 /// } else {
2078 /// None
2079 /// }
2080 /// });
2081 /// assert_eq!(result, Ok(ptr));
2082 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2083 /// ```
2084 #[inline]
2085 #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
2086 #[cfg(target_has_atomic = "ptr")]
2087 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2088 #[rustc_should_not_be_called_on_const_items]
2089 pub fn fetch_update<F>(
2090 &self,
2091 set_order: Ordering,
2092 fetch_order: Ordering,
2093 mut f: F,
2094 ) -> Result<*mut T, *mut T>
2095 where
2096 F: FnMut(*mut T) -> Option<*mut T>,
2097 {
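        // Same CAS loop as the `AtomicBool` version: retry with the freshly
        // observed pointer whenever the weak exchange fails.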
2098 let mut prev = self.load(fetch_order);
2099 while let Some(next) = f(prev) {
2100 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2101 x @ Ok(_) => return x,
2102 Err(next_prev) => prev = next_prev,
2103 }
2104 }
2105 Err(prev)
    }

2107 /// Fetches the value, and applies a function to it that returns an optional
2108 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2109 /// returned `Some(_)`, else `Err(previous_value)`.
2110 ///
2111 /// See also: [`update`](`AtomicPtr::update`).
2112 ///
2113 /// Note: This may call the function multiple times if the value has been
2114 /// changed from other threads in the meantime, as long as the function
2115 /// returns `Some(_)`, but the function will have been applied only once to
2116 /// the stored value.
2117 ///
2118 /// `try_update` takes two [`Ordering`] arguments to describe the memory
2119 /// ordering of this operation. The first describes the required ordering for
2120 /// when the operation finally succeeds while the second describes the
2121 /// required ordering for loads. These correspond to the success and failure
2122 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2123 ///
2124 /// Using [`Acquire`] as success ordering makes the store part of this
2125 /// operation [`Relaxed`], and using [`Release`] makes the final successful
2126 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2127 /// [`Acquire`] or [`Relaxed`].
2128 ///
2129 /// **Note:** This method is only available on platforms that support atomic
2130 /// operations on pointers.
2131 ///
2132 /// # Considerations
2133 ///
2134 /// This method is not magic; it is not provided by the hardware, and does not act like a
2135 /// critical section or mutex.
2136 ///
2137 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2138 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2139 /// which is a particularly common pitfall for pointers!
2140 ///
2141 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2142 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2143 ///
2144 /// # Examples
2145 ///
2146 /// ```rust
2147 /// #![feature(atomic_try_update)]
2148 /// use std::sync::atomic::{AtomicPtr, Ordering};
2149 ///
2150 /// let ptr: *mut _ = &mut 5;
2151 /// let some_ptr = AtomicPtr::new(ptr);
2152 ///
2153 /// let new: *mut _ = &mut 10;
2154 /// assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2155 /// let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2156 /// if x == ptr {
2157 /// Some(new)
2158 /// } else {
2159 /// None
2160 /// }
2161 /// });
2162 /// assert_eq!(result, Ok(ptr));
2163 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2164 /// ```
2165 #[inline]
2166 #[unstable(feature = "atomic_try_update", issue = "135894")]
2167 #[cfg(target_has_atomic = "ptr")]
2168 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2169 #[rustc_should_not_be_called_on_const_items]
2170 pub fn try_update(
2171 &self,
2172 set_order: Ordering,
2173 fetch_order: Ordering,
2174 f: impl FnMut(*mut T) -> Option<*mut T>,
2175 ) -> Result<*mut T, *mut T> {
2176 // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
2177 // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
2178 self.fetch_update(set_order, fetch_order, f)
2179 }
2180
    /// Fetches the value, and applies a function to it that returns a new value.
2182 /// The new value is stored and the old value is returned.
2183 ///
2184 /// See also: [`try_update`](`AtomicPtr::try_update`).
2185 ///
2186 /// Note: This may call the function multiple times if the value has been changed from other threads in
2187 /// the meantime, but the function will have been applied only once to the stored value.
2188 ///
2189 /// `update` takes two [`Ordering`] arguments to describe the memory
2190 /// ordering of this operation. The first describes the required ordering for
2191 /// when the operation finally succeeds while the second describes the
2192 /// required ordering for loads. These correspond to the success and failure
2193 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2194 ///
2195 /// Using [`Acquire`] as success ordering makes the store part
2196 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
2197 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2198 ///
2199 /// **Note:** This method is only available on platforms that support atomic
2200 /// operations on pointers.
2201 ///
2202 /// # Considerations
2203 ///
2204 /// This method is not magic; it is not provided by the hardware, and does not act like a
2205 /// critical section or mutex.
2206 ///
2207 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2208 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2209 /// which is a particularly common pitfall for pointers!
2210 ///
2211 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2212 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2213 ///
2214 /// # Examples
2215 ///
2216 /// ```rust
2217 /// #![feature(atomic_try_update)]
2218 ///
2219 /// use std::sync::atomic::{AtomicPtr, Ordering};
2220 ///
2221 /// let ptr: *mut _ = &mut 5;
2222 /// let some_ptr = AtomicPtr::new(ptr);
2223 ///
2224 /// let new: *mut _ = &mut 10;
2225 /// let result = some_ptr.update(Ordering::SeqCst, Ordering::SeqCst, |_| new);
2226 /// assert_eq!(result, ptr);
2227 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2228 /// ```
2229 #[inline]
2230 #[unstable(feature = "atomic_try_update", issue = "135894")]
    #[cfg(target_has_atomic = "ptr")]
2232 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2233 #[rustc_should_not_be_called_on_const_items]
2234 pub fn update(
2235 &self,
2236 set_order: Ordering,
2237 fetch_order: Ordering,
2238 mut f: impl FnMut(*mut T) -> *mut T,
2239 ) -> *mut T {
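        // Infallible CAS loop: recompute `f` on each freshly observed pointer
        // until the weak exchange succeeds.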
2240 let mut prev = self.load(fetch_order);
2241 loop {
2242 match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
2243 Ok(x) => break x,
2244 Err(next_prev) => prev = next_prev,
2245 }
2246 }
2247 }
2248
2249 /// Offsets the pointer's address by adding `val` (in units of `T`),
2250 /// returning the previous pointer.
2251 ///
2252 /// This is equivalent to using [`wrapping_add`] to atomically perform the
2253 /// equivalent of `ptr = ptr.wrapping_add(val);`.
2254 ///
2255 /// This method operates in units of `T`, which means that it cannot be used
2256 /// to offset the pointer by an amount which is not a multiple of
2257 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2258 /// work with a deliberately misaligned pointer. In such cases, you may use
2259 /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2260 ///
2261 /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2262 /// memory ordering of this operation. All ordering modes are possible. Note
2263 /// that using [`Acquire`] makes the store part of this operation
2264 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2265 ///
2266 /// **Note**: This method is only available on platforms that support atomic
2267 /// operations on [`AtomicPtr`].
2268 ///
2269 /// [`wrapping_add`]: pointer::wrapping_add
2270 ///
2271 /// # Examples
2272 ///
2273 /// ```
2274 /// use core::sync::atomic::{AtomicPtr, Ordering};
2275 ///
2276 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2277 /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2278 /// // Note: units of `size_of::<i64>()`.
2279 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2280 /// ```
2281 #[inline]
2282 #[cfg(target_has_atomic = "ptr")]
2283 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2284 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2285 #[rustc_should_not_be_called_on_const_items]
2286 pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
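        // Scale the element count to a byte count; `wrapping_mul` mirrors the
        // `wrapping_add` semantics documented above.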
2287 self.fetch_byte_add(val.wrapping_mul(size_of::<T>()), order)
2288 }
2289
2290 /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2291 /// returning the previous pointer.
2292 ///
2293 /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2294 /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2295 ///
2296 /// This method operates in units of `T`, which means that it cannot be used
2297 /// to offset the pointer by an amount which is not a multiple of
2298 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2299 /// work with a deliberately misaligned pointer. In such cases, you may use
2300 /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2301 ///
2302 /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2303 /// ordering of this operation. All ordering modes are possible. Note that
2304 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2305 /// and using [`Release`] makes the load part [`Relaxed`].
2306 ///
2307 /// **Note**: This method is only available on platforms that support atomic
2308 /// operations on [`AtomicPtr`].
2309 ///
2310 /// [`wrapping_sub`]: pointer::wrapping_sub
2311 ///
2312 /// # Examples
2313 ///
2314 /// ```
2315 /// use core::sync::atomic::{AtomicPtr, Ordering};
2316 ///
2317 /// let array = [1i32, 2i32];
2318 /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2319 ///
2320 /// assert!(core::ptr::eq(
2321 /// atom.fetch_ptr_sub(1, Ordering::Relaxed),
2322 /// &array[1],
2323 /// ));
2324 /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2325 /// ```
2326 #[inline]
2327 #[cfg(target_has_atomic = "ptr")]
2328 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2329 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2330 #[rustc_should_not_be_called_on_const_items]
2331 pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2332 self.fetch_byte_sub(val.wrapping_mul(size_of::<T>()), order)
2333 }
2334
2335 /// Offsets the pointer's address by adding `val` *bytes*, returning the
2336 /// previous pointer.
2337 ///
2338 /// This is equivalent to using [`wrapping_byte_add`] to atomically
2339 /// perform `ptr = ptr.wrapping_byte_add(val)`.
2340 ///
2341 /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2342 /// memory ordering of this operation. All ordering modes are possible. Note
2343 /// that using [`Acquire`] makes the store part of this operation
2344 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2345 ///
2346 /// **Note**: This method is only available on platforms that support atomic
2347 /// operations on [`AtomicPtr`].
2348 ///
2349 /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
2350 ///
2351 /// # Examples
2352 ///
2353 /// ```
2354 /// use core::sync::atomic::{AtomicPtr, Ordering};
2355 ///
2356 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2357 /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2358 /// // Note: in units of bytes, not `size_of::<i64>()`.
2359 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2360 /// ```
2361 #[inline]
2362 #[cfg(target_has_atomic = "ptr")]
2363 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2364 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2365 #[rustc_should_not_be_called_on_const_items]
2366 pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2367 // SAFETY: data races are prevented by atomic intrinsics.
2368 unsafe { atomic_add(self.p.get(), val, order).cast() }
2369 }
2370
2371 /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2372 /// previous pointer.
2373 ///
2374 /// This is equivalent to using [`wrapping_byte_sub`] to atomically
2375 /// perform `ptr = ptr.wrapping_byte_sub(val)`.
2376 ///
2377 /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2378 /// memory ordering of this operation. All ordering modes are possible. Note
2379 /// that using [`Acquire`] makes the store part of this operation
2380 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2381 ///
2382 /// **Note**: This method is only available on platforms that support atomic
2383 /// operations on [`AtomicPtr`].
2384 ///
2385 /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
2386 ///
2387 /// # Examples
2388 ///
2389 /// ```
2390 /// use core::sync::atomic::{AtomicPtr, Ordering};
2391 ///
2392 /// let mut arr = [0i64, 1];
2393 /// let atom = AtomicPtr::<i64>::new(&raw mut arr[1]);
2394 /// assert_eq!(atom.fetch_byte_sub(8, Ordering::Relaxed).addr(), (&raw const arr[1]).addr());
2395 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), (&raw const arr[0]).addr());
2396 /// ```
2397 #[inline]
2398 #[cfg(target_has_atomic = "ptr")]
2399 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2400 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2401 #[rustc_should_not_be_called_on_const_items]
2402 pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2403 // SAFETY: data races are prevented by atomic intrinsics.
2404 unsafe { atomic_sub(self.p.get(), val, order).cast() }
2405 }
2406
2407 /// Performs a bitwise "or" operation on the address of the current pointer,
2408 /// and the argument `val`, and stores a pointer with provenance of the
2409 /// current pointer and the resulting address.
2410 ///
2411 /// This is equivalent to using [`map_addr`] to atomically perform
2412 /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2413 /// pointer schemes to atomically set tag bits.
2414 ///
2415 /// **Caveat**: This operation returns the previous value. To compute the
2416 /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_or(val, ordering).map_addr(|a| a | val)`.
2418 ///
2419 /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2420 /// ordering of this operation. All ordering modes are possible. Note that
2421 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2422 /// and using [`Release`] makes the load part [`Relaxed`].
2423 ///
2424 /// **Note**: This method is only available on platforms that support atomic
2425 /// operations on [`AtomicPtr`].
2426 ///
2427 /// This API and its claimed semantics are part of the Strict Provenance
2428 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2429 /// details.
2430 ///
2431 /// [`map_addr`]: pointer::map_addr
2432 ///
2433 /// # Examples
2434 ///
2435 /// ```
2436 /// use core::sync::atomic::{AtomicPtr, Ordering};
2437 ///
2438 /// let pointer = &mut 3i64 as *mut i64;
2439 ///
2440 /// let atom = AtomicPtr::<i64>::new(pointer);
2441 /// // Tag the bottom bit of the pointer.
2442 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2443 /// // Extract and untag.
2444 /// let tagged = atom.load(Ordering::Relaxed);
2445 /// assert_eq!(tagged.addr() & 1, 1);
2446 /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2447 /// ```
2448 #[inline]
2449 #[cfg(target_has_atomic = "ptr")]
2450 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2451 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2452 #[rustc_should_not_be_called_on_const_items]
2453 pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2454 // SAFETY: data races are prevented by atomic intrinsics.
2455 unsafe { atomic_or(self.p.get(), val, order).cast() }
2456 }
2457
2458 /// Performs a bitwise "and" operation on the address of the current
2459 /// pointer, and the argument `val`, and stores a pointer with provenance of
2460 /// the current pointer and the resulting address.
2461 ///
2462 /// This is equivalent to using [`map_addr`] to atomically perform
2463 /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2464 /// pointer schemes to atomically unset tag bits.
2465 ///
2466 /// **Caveat**: This operation returns the previous value. To compute the
2467 /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_and(val, ordering).map_addr(|a| a & val)`.
2469 ///
2470 /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2471 /// ordering of this operation. All ordering modes are possible. Note that
2472 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2473 /// and using [`Release`] makes the load part [`Relaxed`].
2474 ///
2475 /// **Note**: This method is only available on platforms that support atomic
2476 /// operations on [`AtomicPtr`].
2477 ///
2478 /// This API and its claimed semantics are part of the Strict Provenance
2479 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2480 /// details.
2481 ///
2482 /// [`map_addr`]: pointer::map_addr
2483 ///
2484 /// # Examples
2485 ///
2486 /// ```
2487 /// use core::sync::atomic::{AtomicPtr, Ordering};
2488 ///
2489 /// let pointer = &mut 3i64 as *mut i64;
2490 /// // A tagged pointer
2491 /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2492 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2493 /// // Untag, and extract the previously tagged pointer.
2494 /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
2495 /// .map_addr(|a| a & !1);
2496 /// assert_eq!(untagged, pointer);
2497 /// ```
2498 #[inline]
2499 #[cfg(target_has_atomic = "ptr")]
2500 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2501 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2502 #[rustc_should_not_be_called_on_const_items]
2503 pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2504 // SAFETY: data races are prevented by atomic intrinsics.
2505 unsafe { atomic_and(self.p.get(), val, order).cast() }
2506 }
2507
2508 /// Performs a bitwise "xor" operation on the address of the current
2509 /// pointer, and the argument `val`, and stores a pointer with provenance of
2510 /// the current pointer and the resulting address.
2511 ///
2512 /// This is equivalent to using [`map_addr`] to atomically perform
2513 /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2514 /// pointer schemes to atomically toggle tag bits.
2515 ///
2516 /// **Caveat**: This operation returns the previous value. To compute the
2517 /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_xor(val, ordering).map_addr(|a| a ^ val)`.
2519 ///
2520 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2521 /// ordering of this operation. All ordering modes are possible. Note that
2522 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2523 /// and using [`Release`] makes the load part [`Relaxed`].
2524 ///
2525 /// **Note**: This method is only available on platforms that support atomic
2526 /// operations on [`AtomicPtr`].
2527 ///
2528 /// This API and its claimed semantics are part of the Strict Provenance
2529 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2530 /// details.
2531 ///
2532 /// [`map_addr`]: pointer::map_addr
2533 ///
2534 /// # Examples
2535 ///
2536 /// ```
2537 /// use core::sync::atomic::{AtomicPtr, Ordering};
2538 ///
2539 /// let pointer = &mut 3i64 as *mut i64;
2540 /// let atom = AtomicPtr::<i64>::new(pointer);
2541 ///
2542 /// // Toggle a tag bit on the pointer.
2543 /// atom.fetch_xor(1, Ordering::Relaxed);
2544 /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2545 /// ```
2546 #[inline]
2547 #[cfg(target_has_atomic = "ptr")]
2548 #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2549 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2550 #[rustc_should_not_be_called_on_const_items]
2551 pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2552 // SAFETY: data races are prevented by atomic intrinsics.
2553 unsafe { atomic_xor(self.p.get(), val, order).cast() }
2554 }
2555
2556 /// Returns a mutable pointer to the underlying pointer.
2557 ///
2558 /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
2559 /// This method is mostly useful for FFI, where the function signature may use
2560 /// `*mut *mut T` instead of `&AtomicPtr<T>`.
2561 ///
2562 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2563 /// atomic types work with interior mutability. All modifications of an atomic change the value
2564 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2565 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
2566 /// requirements of the [memory model].
2567 ///
2568 /// # Examples
2569 ///
2570 /// ```ignore (extern-declaration)
2571 /// use std::sync::atomic::AtomicPtr;
2572 ///
2573 /// extern "C" {
2574 /// fn my_atomic_op(arg: *mut *mut u32);
2575 /// }
2576 ///
2577 /// let mut value = 17;
2578 /// let atomic = AtomicPtr::new(&mut value);
2579 ///
2580 /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2581 /// unsafe {
2582 /// my_atomic_op(atomic.as_ptr());
2583 /// }
2584 /// ```
2585 ///
2586 /// [memory model]: self#memory-model-for-atomic-accesses
2587 #[inline]
2588 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2589 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2590 #[rustc_never_returns_null_ptr]
2591 pub const fn as_ptr(&self) -> *mut *mut T {
2592 self.p.get()
2593 }
2594}
2595
2596#[cfg(target_has_atomic_load_store = "8")]
2597#[stable(feature = "atomic_bool_from", since = "1.24.0")]
2598#[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2599#[cfg(not(feature = "ferrocene_subset"))]
2600impl const From<bool> for AtomicBool {
2601 /// Converts a `bool` into an `AtomicBool`.
2602 ///
2603 /// # Examples
2604 ///
2605 /// ```
2606 /// use std::sync::atomic::AtomicBool;
2607 /// let atomic_bool = AtomicBool::from(true);
2608 /// assert_eq!(format!("{atomic_bool:?}"), "true")
2609 /// ```
2610 #[inline]
2611 fn from(b: bool) -> Self {
2612 Self::new(b)
2613 }
2614}
2615
2616#[cfg(target_has_atomic_load_store = "ptr")]
2617#[stable(feature = "atomic_from", since = "1.23.0")]
2618#[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2619#[cfg(not(feature = "ferrocene_subset"))]
2620impl<T> const From<*mut T> for AtomicPtr<T> {
2621 /// Converts a `*mut T` into an `AtomicPtr<T>`.
2622 #[inline]
2623 fn from(p: *mut T) -> Self {
2624 Self::new(p)
2625 }
2626}
2627
2628#[allow(unused_macros)] // This macro ends up being unused on some architectures.
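// Expands to the `yes` tokens for `u8`/`i8` and to the `no` tokens for every
// other type; used to vary the generated documentation for the 8-bit atomics.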
2629macro_rules! if_8_bit {
2630 (u8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2631 (i8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2632 ($_:ident, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($no)*)?) };
2633}
2634
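// Generates one atomic integer type. The meta arguments are cfg gates and
// stability attributes; the remaining arguments supply the primitive's name for
// doc links, an extra doc feature string, the min/max intrinsics, the required
// alignment, and the primitive/atomic type identifiers.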
2635#[cfg(target_has_atomic_load_store)]
2636macro_rules! atomic_int {
2637 ($cfg_cas:meta,
2638 $cfg_align:meta,
2639 $stable:meta,
2640 $stable_cxchg:meta,
2641 $stable_debug:meta,
2642 $stable_access:meta,
2643 $stable_from:meta,
2644 $stable_nand:meta,
2645 $const_stable_new:meta,
2646 $const_stable_into_inner:meta,
2647 $diagnostic_item:meta,
2648 $s_int_type:literal,
2649 $extra_feature:expr,
2650 $min_fn:ident, $max_fn:ident,
2651 $align:expr,
2652 $int_type:ident $atomic_type:ident) => {
2653 /// An integer type which can be safely shared between threads.
2654 ///
2655 /// This type has the same
2656 #[doc = if_8_bit!(
2657 $int_type,
2658 yes = ["size, alignment, and bit validity"],
2659 no = ["size and bit validity"],
2660 )]
2661 /// as the underlying integer type, [`
2662 #[doc = $s_int_type]
2663 /// `].
2664 #[doc = if_8_bit! {
2665 $int_type,
2666 no = [
2667 "However, the alignment of this type is always equal to its ",
2668 "size, even on targets where [`", $s_int_type, "`] has a ",
2669 "lesser alignment."
2670 ],
2671 }]
2672 ///
2673 /// For more about the differences between atomic types and
2674 /// non-atomic types as well as information about the portability of
2675 /// this type, please see the [module-level documentation].
2676 ///
2677 /// **Note:** This type is only available on platforms that support
2678 /// atomic loads and stores of [`
2679 #[doc = $s_int_type]
2680 /// `].
2681 ///
2682 /// [module-level documentation]: crate::sync::atomic
2683 #[$stable]
2684 #[$diagnostic_item]
2685 #[repr(C, align($align))]
2686 pub struct $atomic_type {
2687 v: UnsafeCell<$int_type>,
2688 }
2689
2690 #[$stable]
2691 impl Default for $atomic_type {
2692 #[inline]
2693 fn default() -> Self {
2694 Self::new(Default::default())
2695 }
2696 }
2697
2698 #[$stable_from]
2699 #[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2700 impl const From<$int_type> for $atomic_type {
2701 #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
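        ///
        /// # Examples
        ///
        /// A minimal sketch of the conversion:
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let a = ", stringify!($atomic_type), "::from(23);")]
        /// assert_eq!(a.load(Ordering::Relaxed), 23);
        /// ```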
2702 #[inline]
2703 fn from(v: $int_type) -> Self { Self::new(v) }
2704 }
2705
2706 #[$stable_debug]
2707 #[cfg(not(feature = "ferrocene_subset"))]
2708 impl fmt::Debug for $atomic_type {
2709 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
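                // A best-effort snapshot is all `Debug` can promise, so a
                // `Relaxed` load suffices here.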
2710 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2711 }
2712 }
2713
2714 // Send is implicitly implemented.
2715 #[$stable]
2716 unsafe impl Sync for $atomic_type {}
2717
2718 impl $atomic_type {
2719 /// Creates a new atomic integer.
2720 ///
2721 /// # Examples
2722 ///
2723 /// ```
2724 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2725 ///
2726 #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2727 /// ```
2728 #[inline]
2729 #[$stable]
2730 #[$const_stable_new]
2731 #[must_use]
2732 pub const fn new(v: $int_type) -> Self {
2733 Self {v: UnsafeCell::new(v)}
2734 }
2735
2736 /// Creates a new reference to an atomic integer from a pointer.
2737 ///
2738 /// # Examples
2739 ///
2740 /// ```
2741 #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2742 ///
2743 /// // Get a pointer to an allocated value
2744 #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2745 ///
2746 #[doc = concat!("assert!(ptr.cast::<", stringify!($atomic_type), ">().is_aligned());")]
2747 ///
2748 /// {
2749 /// // Create an atomic view of the allocated value
2750 // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2751 #[doc = concat!(" let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2752 ///
2753 /// // Use `atomic` for atomic operations, possibly share it with other threads
2754 /// atomic.store(1, atomic::Ordering::Relaxed);
2755 /// }
2756 ///
2757 /// // It's ok to non-atomically access the value behind `ptr`,
2758 /// // since the reference to the atomic ended its lifetime in the block above
2759 /// assert_eq!(unsafe { *ptr }, 1);
2760 ///
2761 /// // Deallocate the value
2762 /// unsafe { drop(Box::from_raw(ptr)) }
2763 /// ```
2764 ///
2765 /// # Safety
2766 ///
2767 /// * `ptr` must be aligned to
2768 #[doc = concat!(" `align_of::<", stringify!($atomic_type), ">()`")]
2769 #[doc = if_8_bit!{
2770 $int_type,
2771 yes = [
2772 " (note that this is always true, since `align_of::<",
2773 stringify!($atomic_type), ">() == 1`)."
2774 ],
2775 no = [
2776 " (note that on some platforms this can be bigger than `align_of::<",
2777 stringify!($int_type), ">()`)."
2778 ],
2779 }]
2780 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2781 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
2782 /// allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
2783 /// sizes, without synchronization.
2784 ///
2785 /// [valid]: crate::ptr#safety
2786 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
2787 #[inline]
2788 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
2789 #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
2790 pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2791 // SAFETY: guaranteed by the caller
2792 unsafe { &*ptr.cast() }
        }

        /// Returns a mutable reference to the underlying integer.
2797 ///
2798 /// This is safe because the mutable reference guarantees that no other threads are
2799 /// concurrently accessing the atomic data.
2800 ///
2801 /// # Examples
2802 ///
2803 /// ```
2804 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2805 ///
2806 #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2807 /// assert_eq!(*some_var.get_mut(), 10);
2808 /// *some_var.get_mut() = 5;
2809 /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2810 /// ```
2811 #[inline]
2812 #[$stable_access]
2813 pub fn get_mut(&mut self) -> &mut $int_type {
2814 self.v.get_mut()
2815 }
2816
2817 #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2818 ///
2819 #[doc = if_8_bit! {
2820 $int_type,
2821 no = [
2822 "**Note:** This function is only available on targets where `",
2823 stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2824 ],
2825 }]
2826 ///
2827 /// # Examples
2828 ///
2829 /// ```
2830 /// #![feature(atomic_from_mut)]
2831 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2832 ///
2833 /// let mut some_int = 123;
2834 #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2835 /// a.store(100, Ordering::Relaxed);
2836 /// assert_eq!(some_int, 100);
2837 /// ```
2839 #[inline]
2840 #[$cfg_align]
2841 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2842 pub fn from_mut(v: &mut $int_type) -> &mut Self {
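                // Destructuring against the empty pattern `[]` only compiles
                // when the array length is zero, i.e. when the two alignments
                // are equal, so this line is a compile-time alignment check.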
2843 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2844 // SAFETY:
2845 // - the mutable reference guarantees unique ownership.
2846 // - the alignment of `$int_type` and `Self` is the
2847 // same, as promised by $cfg_align and verified above.
2848 unsafe { &mut *(v as *mut $int_type as *mut Self) }
2849 }
2850
        #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice.")]
2852 ///
2853 /// This is safe because the mutable reference guarantees that no other threads are
2854 /// concurrently accessing the atomic data.
2855 ///
2856 /// # Examples
2857 ///
2858 /// ```ignore-wasm
2859 /// #![feature(atomic_from_mut)]
2860 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2861 ///
2862 #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2863 ///
2864 #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2865 /// assert_eq!(view, [0; 10]);
2866 /// view
2867 /// .iter_mut()
2868 /// .enumerate()
2869 /// .for_each(|(idx, int)| *int = idx as _);
2870 ///
2871 /// std::thread::scope(|s| {
2872 /// some_ints
2873 /// .iter()
2874 /// .enumerate()
2875 /// .for_each(|(idx, int)| {
2876 /// s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2877 /// })
2878 /// });
2879 /// ```
2880 #[inline]
2881 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2882 pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2883 // SAFETY: the mutable reference guarantees unique ownership.
2884 unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2885 }
2886
2887 #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2888 ///
2889 #[doc = if_8_bit! {
2890 $int_type,
2891 no = [
2892 "**Note:** This function is only available on targets where `",
2893 stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2894 ],
2895 }]
2896 ///
2897 /// # Examples
2898 ///
2899 /// ```ignore-wasm
2900 /// #![feature(atomic_from_mut)]
2901 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2902 ///
2903 /// let mut some_ints = [0; 10];
2904 #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2905 /// std::thread::scope(|s| {
2906 /// for i in 0..a.len() {
2907 /// s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2908 /// }
2909 /// });
2910 /// for (i, n) in some_ints.into_iter().enumerate() {
2911 /// assert_eq!(i, n as usize);
2912 /// }
2913 /// ```
2914 #[inline]
2915 #[$cfg_align]
2916 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2917 pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
2918 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2919 // SAFETY:
2920 // - the mutable reference guarantees unique ownership.
2921 // - the alignment of `$int_type` and `Self` is the
2922 // same, as promised by $cfg_align and verified above.
2923 unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2924 }
2925
2926 /// Consumes the atomic and returns the contained value.
2927 ///
2928 /// This is safe because passing `self` by value guarantees that no other threads are
2929 /// concurrently accessing the atomic data.
2930 ///
2931 /// # Examples
2932 ///
2933 /// ```
2934 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2935 ///
2936 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2937 /// assert_eq!(some_var.into_inner(), 5);
2938 /// ```
2939 #[inline]
2940 #[$stable_access]
2941 #[$const_stable_into_inner]
2942 pub const fn into_inner(self) -> $int_type {
2943 self.v.into_inner()
2944 }
2945
2946 /// Loads a value from the atomic integer.
2947 ///
2948 /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2949 /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2950 ///
2951 /// # Panics
2952 ///
2953 /// Panics if `order` is [`Release`] or [`AcqRel`].
2954 ///
2955 /// # Examples
2956 ///
2957 /// ```
2958 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2959 ///
2960 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2961 ///
2962 /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2963 /// ```
2964 #[inline]
2965 #[$stable]
2966 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2967 pub fn load(&self, order: Ordering) -> $int_type {
2968 // SAFETY: data races are prevented by atomic intrinsics.
2969 unsafe { atomic_load(self.v.get(), order) }
2970 }
2971
2972 /// Stores a value into the atomic integer.
2973 ///
2974 /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2975 /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2976 ///
2977 /// # Panics
2978 ///
2979 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2980 ///
2981 /// # Examples
2982 ///
2983 /// ```
2984 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2985 ///
2986 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2987 ///
2988 /// some_var.store(10, Ordering::Relaxed);
2989 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2990 /// ```
2991 #[inline]
2992 #[$stable]
2993 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2994 #[rustc_should_not_be_called_on_const_items]
2995 pub fn store(&self, val: $int_type, order: Ordering) {
2996 // SAFETY: data races are prevented by atomic intrinsics.
2997 unsafe { atomic_store(self.v.get(), val, order); }
2998 }
2999
3000 /// Stores a value into the atomic integer, returning the previous value.
3001 ///
3002 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
3003 /// of this operation. All ordering modes are possible. Note that using
3004 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3005 /// using [`Release`] makes the load part [`Relaxed`].
3006 ///
3007 /// **Note**: This method is only available on platforms that support atomic operations on
3008 #[doc = concat!("[`", $s_int_type, "`].")]
3009 ///
3010 /// # Examples
3011 ///
3012 /// ```
3013 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3014 ///
3015 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3016 ///
3017 /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
3018 /// ```
3019 #[inline]
3020 #[$stable]
3021 #[$cfg_cas]
3022 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3023 #[rustc_should_not_be_called_on_const_items]
3024 pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
3025 // SAFETY: data races are prevented by atomic intrinsics.
3026 unsafe { atomic_swap(self.v.get(), val, order) }
3027 }
3028
3029 /// Stores a value into the atomic integer if the current value is the same as
3030 /// the `current` value.
3031 ///
3032 /// The return value is always the previous value. If it is equal to `current`, then the
3033 /// value was updated.
3034 ///
3035 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
3036 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
3037 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
3038 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
3039 /// happens, and using [`Release`] makes the load part [`Relaxed`].
3040 ///
3041 /// **Note**: This method is only available on platforms that support atomic operations on
3042 #[doc = concat!("[`", $s_int_type, "`].")]
3043 ///
3044 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
3045 ///
3046 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
3047 /// memory orderings:
3048 ///
3049 /// Original | Success | Failure
3050 /// -------- | ------- | -------
3051 /// Relaxed | Relaxed | Relaxed
3052 /// Acquire | Acquire | Acquire
3053 /// Release | Release | Relaxed
3054 /// AcqRel | AcqRel | Acquire
3055 /// SeqCst | SeqCst | SeqCst
3056 ///
3057 /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
3058 /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
3059 /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
3060 /// rather than to infer success vs failure based on the value that was read.
3061 ///
3062 /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
3063 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
3064 /// which allows the compiler to generate better assembly code when the compare and swap
3065 /// is used in a loop.
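        ///
        /// As a sketch, a call migrated with the table above (here with the
        /// original ordering [`AcqRel`]) might look like this:
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
        /// // Before: `some_var.compare_and_swap(5, 10, Ordering::AcqRel)`
        /// let prev = some_var
        ///     .compare_exchange(5, 10, Ordering::AcqRel, Ordering::Acquire)
        ///     .unwrap_or_else(|x| x);
        /// assert_eq!(prev, 5);
        /// ```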
3066 ///
3067 /// # Examples
3068 ///
3069 /// ```
3070 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3071 ///
3072 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3073 ///
3074 /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
3075 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3076 ///
3077 /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
3078 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3079 /// ```
3080 #[inline]
3081 #[$stable]
3082 #[deprecated(
3083 since = "1.50.0",
3084 note = "Use `compare_exchange` or `compare_exchange_weak` instead")
3085 ]
3086 #[$cfg_cas]
3087 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3088 #[rustc_should_not_be_called_on_const_items]
3089 pub fn compare_and_swap(&self,
3090 current: $int_type,
3091 new: $int_type,
3092 order: Ordering) -> $int_type {
3093 match self.compare_exchange(current,
3094 new,
3095 order,
3096 strongest_failure_ordering(order)) {
3097 Ok(x) => x,
3098 Err(x) => x,
3099 }
3100 }
3101
3102 /// Stores a value into the atomic integer if the current value is the same as
3103 /// the `current` value.
3104 ///
3105 /// The return value is a result indicating whether the new value was written and
3106 /// containing the previous value. On success this value is guaranteed to be equal to
3107 /// `current`.
3108 ///
3109 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
3110 /// ordering of this operation. `success` describes the required ordering for the
3111 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3112 /// `failure` describes the required ordering for the load operation that takes place when
3113 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3114 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3115 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3116 ///
3117 /// **Note**: This method is only available on platforms that support atomic operations on
3118 #[doc = concat!("[`", $s_int_type, "`].")]
3119 ///
3120 /// # Examples
3121 ///
3122 /// ```
3123 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3124 ///
3125 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3126 ///
3127 /// assert_eq!(some_var.compare_exchange(5, 10,
3128 /// Ordering::Acquire,
3129 /// Ordering::Relaxed),
3130 /// Ok(5));
3131 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3132 ///
3133 /// assert_eq!(some_var.compare_exchange(6, 12,
3134 /// Ordering::SeqCst,
3135 /// Ordering::Acquire),
3136 /// Err(10));
3137 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3138 /// ```
3139 ///
3140 /// # Considerations
3141 ///
3142 /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
3143 /// of CAS operations. In particular, a load of the value followed by a successful
3144 /// `compare_exchange` with the previous load *does not ensure* that other threads have not
3145 /// changed the value in the interim! This is usually important when the *equality* check in
3146 /// the `compare_exchange` is being used to check the *identity* of a value, but equality
3147 /// does not necessarily imply identity. This is a particularly common case for pointers, as
3148 /// a pointer holding the same address does not imply that the same object exists at that
3149 /// address! In this case, `compare_exchange` can lead to the [ABA problem].
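        ///
        /// As a sketch of the pitfall (sequential here for brevity; in practice the
        /// intervening stores would come from another thread):
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let a = ", stringify!($atomic_type), "::new(1);")]
        /// let seen = a.load(Ordering::Relaxed);
        /// a.store(2, Ordering::Relaxed);
        /// a.store(1, Ordering::Relaxed);
        /// // The exchange succeeds even though the value changed in the interim,
        /// // because only the *value* is compared, not its history.
        /// assert_eq!(a.compare_exchange(seen, 3, Ordering::Relaxed, Ordering::Relaxed), Ok(1));
        /// ```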
3150 ///
3151 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3152 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3153 #[inline]
3154 #[$stable_cxchg]
3155 #[$cfg_cas]
3156 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3157 #[rustc_should_not_be_called_on_const_items]
3158 pub fn compare_exchange(&self,
3159 current: $int_type,
3160 new: $int_type,
3161 success: Ordering,
3162 failure: Ordering) -> Result<$int_type, $int_type> {
3163 // SAFETY: data races are prevented by atomic intrinsics.
3164 unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
3165 }
3166
3167 /// Stores a value into the atomic integer if the current value is the same as
3168 /// the `current` value.
3169 ///
3170 #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
3171 /// this function is allowed to spuriously fail even
3172 /// when the comparison succeeds, which can result in more efficient code on some
3173 /// platforms. The return value is a result indicating whether the new value was
3174 /// written and containing the previous value.
3175 ///
3176 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3177 /// ordering of this operation. `success` describes the required ordering for the
3178 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3179 /// `failure` describes the required ordering for the load operation that takes place when
3180 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3181 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3182 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3183 ///
3184 /// **Note**: This method is only available on platforms that support atomic operations on
3185 #[doc = concat!("[`", $s_int_type, "`].")]
3186 ///
3187 /// # Examples
3188 ///
3189 /// ```
3190 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3191 ///
3192 #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
3193 ///
3194 /// let mut old = val.load(Ordering::Relaxed);
3195 /// loop {
3196 /// let new = old * 2;
3197 /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3198 /// Ok(_) => break,
3199 /// Err(x) => old = x,
3200 /// }
3201 /// }
3202 /// ```
3203 ///
3204 /// # Considerations
3205 ///
        /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual
        /// downsides of CAS operations. In particular, a load of the value followed by a successful
        /// `compare_exchange_weak` with the previous load *does not ensure* that other threads have
        /// not changed the value in the interim. This is usually important when the *equality*
        /// check in the `compare_exchange_weak` is being used to check the *identity* of a value,
        /// but equality does not necessarily imply identity. This is a particularly common case for
        /// pointers, as a pointer holding the same address does not imply that the same object
        /// exists at that address! In this case, `compare_exchange_weak` can lead to the
        /// [ABA problem].
3214 ///
3215 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3216 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3217 #[inline]
3218 #[$stable_cxchg]
3219 #[$cfg_cas]
3220 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3221 #[rustc_should_not_be_called_on_const_items]
3222 pub fn compare_exchange_weak(&self,
3223 current: $int_type,
3224 new: $int_type,
3225 success: Ordering,
3226 failure: Ordering) -> Result<$int_type, $int_type> {
3227 // SAFETY: data races are prevented by atomic intrinsics.
3228 unsafe {
3229 atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
3230 }
3231 }
3232
3233 /// Adds to the current value, returning the previous value.
3234 ///
3235 /// This operation wraps around on overflow.
3236 ///
3237 /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3238 /// of this operation. All ordering modes are possible. Note that using
3239 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3240 /// using [`Release`] makes the load part [`Relaxed`].
3241 ///
3242 /// **Note**: This method is only available on platforms that support atomic operations on
3243 #[doc = concat!("[`", $s_int_type, "`].")]
3244 ///
3245 /// # Examples
3246 ///
3247 /// ```
3248 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3249 ///
3250 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
3251 /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3252 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3253 /// ```
3254 #[inline]
3255 #[$stable]
3256 #[$cfg_cas]
3257 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3258 #[rustc_should_not_be_called_on_const_items]
3259 pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3260 // SAFETY: data races are prevented by atomic intrinsics.
3261 unsafe { atomic_add(self.v.get(), val, order) }
3262 }
3263
3264 /// Subtracts from the current value, returning the previous value.
3265 ///
3266 /// This operation wraps around on overflow.
3267 ///
3268 /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3269 /// of this operation. All ordering modes are possible. Note that using
3270 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3271 /// using [`Release`] makes the load part [`Relaxed`].
3272 ///
3273 /// **Note**: This method is only available on platforms that support atomic operations on
3274 #[doc = concat!("[`", $s_int_type, "`].")]
3275 ///
3276 /// # Examples
3277 ///
3278 /// ```
3279 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3280 ///
3281 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
3282 /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3283 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3284 /// ```
3285 #[inline]
3286 #[$stable]
3287 #[$cfg_cas]
3288 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3289 #[rustc_should_not_be_called_on_const_items]
3290 pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3291 // SAFETY: data races are prevented by atomic intrinsics.
3292 unsafe { atomic_sub(self.v.get(), val, order) }
3293 }
3294
3295 /// Bitwise "and" with the current value.
3296 ///
3297 /// Performs a bitwise "and" operation on the current value and the argument `val`, and
3298 /// sets the new value to the result.
3299 ///
3300 /// Returns the previous value.
3301 ///
3302 /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3303 /// of this operation. All ordering modes are possible. Note that using
3304 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3305 /// using [`Release`] makes the load part [`Relaxed`].
3306 ///
3307 /// **Note**: This method is only available on platforms that support atomic operations on
3308 #[doc = concat!("[`", $s_int_type, "`].")]
3309 ///
3310 /// # Examples
3311 ///
3312 /// ```
3313 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3314 ///
3315 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3316 /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3317 /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3318 /// ```
3319 #[inline]
3320 #[$stable]
3321 #[$cfg_cas]
3322 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3323 #[rustc_should_not_be_called_on_const_items]
3324 pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3325 // SAFETY: data races are prevented by atomic intrinsics.
3326 unsafe { atomic_and(self.v.get(), val, order) }
3327 }
3328
3329 /// Bitwise "nand" with the current value.
3330 ///
3331 /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
3332 /// sets the new value to the result.
3333 ///
3334 /// Returns the previous value.
3335 ///
3336 /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3337 /// of this operation. All ordering modes are possible. Note that using
3338 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3339 /// using [`Release`] makes the load part [`Relaxed`].
3340 ///
3341 /// **Note**: This method is only available on platforms that support atomic operations on
3342 #[doc = concat!("[`", $s_int_type, "`].")]
3343 ///
3344 /// # Examples
3345 ///
3346 /// ```
3347 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3348 ///
3349 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
3350 /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3351 /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3352 /// ```
3353 #[inline]
3354 #[$stable_nand]
3355 #[$cfg_cas]
3356 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3357 #[rustc_should_not_be_called_on_const_items]
3358 pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3359 // SAFETY: data races are prevented by atomic intrinsics.
3360 unsafe { atomic_nand(self.v.get(), val, order) }
3361 }
3362
3363 /// Bitwise "or" with the current value.
3364 ///
3365 /// Performs a bitwise "or" operation on the current value and the argument `val`, and
3366 /// sets the new value to the result.
3367 ///
3368 /// Returns the previous value.
3369 ///
3370 /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3371 /// of this operation. All ordering modes are possible. Note that using
3372 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3373 /// using [`Release`] makes the load part [`Relaxed`].
3374 ///
3375 /// **Note**: This method is only available on platforms that support atomic operations on
3376 #[doc = concat!("[`", $s_int_type, "`].")]
3377 ///
3378 /// # Examples
3379 ///
3380 /// ```
3381 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3382 ///
3383 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3384 /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3385 /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3386 /// ```
3387 #[inline]
3388 #[$stable]
3389 #[$cfg_cas]
3390 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3391 #[rustc_should_not_be_called_on_const_items]
3392 pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3393 // SAFETY: data races are prevented by atomic intrinsics.
3394 unsafe { atomic_or(self.v.get(), val, order) }
3395 }
3396
3397 /// Bitwise "xor" with the current value.
3398 ///
3399 /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
3400 /// sets the new value to the result.
3401 ///
3402 /// Returns the previous value.
3403 ///
3404 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3405 /// of this operation. All ordering modes are possible. Note that using
3406 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3407 /// using [`Release`] makes the load part [`Relaxed`].
3408 ///
3409 /// **Note**: This method is only available on platforms that support atomic operations on
3410 #[doc = concat!("[`", $s_int_type, "`].")]
3411 ///
3412 /// # Examples
3413 ///
3414 /// ```
3415 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3416 ///
3417 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3418 /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3419 /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3420 /// ```
3421 #[inline]
3422 #[$stable]
3423 #[$cfg_cas]
3424 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3425 #[rustc_should_not_be_called_on_const_items]
3426 pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3427 // SAFETY: data races are prevented by atomic intrinsics.
3428 unsafe { atomic_xor(self.v.get(), val, order) }
3429 }
3430
3431 /// Fetches the value, and applies a function to it that returns an optional
3432 /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3433 /// `Err(previous_value)`.
3434 ///
        /// Note: This may call the function multiple times if the value has been changed by other
        /// threads in the meantime, as long as the function returns `Some(_)`, but the function
        /// will have been applied only once to the stored value.
3438 ///
3439 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3440 /// The first describes the required ordering for when the operation finally succeeds while the second
3441 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3442 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3443 /// respectively.
3444 ///
3445 /// Using [`Acquire`] as success ordering makes the store part
3446 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3447 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3448 ///
3449 /// **Note**: This method is only available on platforms that support atomic operations on
3450 #[doc = concat!("[`", $s_int_type, "`].")]
3451 ///
3452 /// # Considerations
3453 ///
3454 /// This method is not magic; it is not provided by the hardware, and does not act like a
3455 /// critical section or mutex.
3456 ///
3457 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3458 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3459 /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3460 /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3461 ///
3462 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3463 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3464 ///
3465 /// # Examples
3466 ///
3467 /// ```rust
3468 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3469 ///
3470 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3471 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3472 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3473 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3474 /// assert_eq!(x.load(Ordering::SeqCst), 9);
3475 /// ```
3476 #[inline]
3477 #[stable(feature = "no_more_cas", since = "1.45.0")]
3478 #[$cfg_cas]
3479 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3480 #[rustc_should_not_be_called_on_const_items]
3481 pub fn fetch_update<F>(&self,
3482 set_order: Ordering,
3483 fetch_order: Ordering,
3484 mut f: F) -> Result<$int_type, $int_type>
3485 where F: FnMut($int_type) -> Option<$int_type> {
3486 let mut prev = self.load(fetch_order);
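            // Keep applying `f` to the freshly observed value until the CAS
            // succeeds, or until `f` bails out by returning `None`.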
3487 while let Some(next) = f(prev) {
3488 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3489 x @ Ok(_) => return x,
3490 Err(next_prev) => prev = next_prev
3491 }
3492 }
3493 Err(prev)
3494 }
3495
3496 /// Fetches the value, and applies a function to it that returns an optional
3497 /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3498 /// `Err(previous_value)`.
3499 ///
3500 #[doc = concat!("See also: [`update`](`", stringify!($atomic_type), "::update`).")]
3501 ///
        /// Note: This may call the function multiple times if the value has been changed by other
        /// threads in the meantime, as long as the function returns `Some(_)`, but the function
        /// will have been applied only once to the stored value.
3505 ///
3506 /// `try_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3507 /// The first describes the required ordering for when the operation finally succeeds while the second
3508 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3509 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3510 /// respectively.
3511 ///
3512 /// Using [`Acquire`] as success ordering makes the store part
3513 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3514 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3515 ///
3516 /// **Note**: This method is only available on platforms that support atomic operations on
3517 #[doc = concat!("[`", $s_int_type, "`].")]
3518 ///
3519 /// # Considerations
3520 ///
3521 /// This method is not magic; it is not provided by the hardware, and does not act like a
3522 /// critical section or mutex.
3523 ///
3524 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3525 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3526 /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3527 /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3528 ///
3529 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3530 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3531 ///
3532 /// # Examples
3533 ///
3534 /// ```rust
3535 /// #![feature(atomic_try_update)]
3536 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3537 ///
3538 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3539 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3540 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3541 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3542 /// assert_eq!(x.load(Ordering::SeqCst), 9);
3543 /// ```
3544 #[inline]
3545 #[unstable(feature = "atomic_try_update", issue = "135894")]
3546 #[$cfg_cas]
3547 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3548 #[rustc_should_not_be_called_on_const_items]
3549 pub fn try_update(
3550 &self,
3551 set_order: Ordering,
3552 fetch_order: Ordering,
3553 f: impl FnMut($int_type) -> Option<$int_type>,
3554 ) -> Result<$int_type, $int_type> {
3555 // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
3556 // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
3557 self.fetch_update(set_order, fetch_order, f)
3558 }
3559
        /// Fetches the value, applies a function to it that returns a new value.
        /// The new value is stored and the old value is returned.
3562 ///
3563 #[doc = concat!("See also: [`try_update`](`", stringify!($atomic_type), "::try_update`).")]
3564 ///
        /// Note: This may call the function multiple times if the value has been changed by other
        /// threads in the meantime, but the function will have been applied only once to the
        /// stored value.
3567 ///
3568 /// `update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3569 /// The first describes the required ordering for when the operation finally succeeds while the second
3570 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3571 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3572 /// respectively.
3573 ///
3574 /// Using [`Acquire`] as success ordering makes the store part
3575 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3576 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3577 ///
3578 /// **Note**: This method is only available on platforms that support atomic operations on
3579 #[doc = concat!("[`", $s_int_type, "`].")]
3580 ///
3581 /// # Considerations
3582 ///
3584 /// This method is not magic; it is not provided by the hardware, and does not act like a
3585 /// critical section or mutex.
3586 ///
3587 /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3588 /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3589 /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3590 /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3591 ///
3592 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3593 /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3594 ///
3595 /// # Examples
3596 ///
3597 /// ```rust
3598 /// #![feature(atomic_try_update)]
3599 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3600 ///
3601 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3602 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 7);
3603 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 8);
3604 /// assert_eq!(x.load(Ordering::SeqCst), 9);
3605 /// ```
3606 #[inline]
3607 #[unstable(feature = "atomic_try_update", issue = "135894")]
3608 #[$cfg_cas]
3609 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3610 #[rustc_should_not_be_called_on_const_items]
3611 pub fn update(
3612 &self,
3613 set_order: Ordering,
3614 fetch_order: Ordering,
3615 mut f: impl FnMut($int_type) -> $int_type,
3616 ) -> $int_type {
3617 let mut prev = self.load(fetch_order);
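                // `compare_exchange_weak` may fail spuriously; simply retry
                // with the freshly observed value.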
3618 loop {
3619 match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
3620 Ok(x) => break x,
3621 Err(next_prev) => prev = next_prev,
3622 }
3623 }
3624 }
3625
3626 /// Maximum with the current value.
3627 ///
3628 /// Finds the maximum of the current value and the argument `val`, and
3629 /// sets the new value to the result.
3630 ///
3631 /// Returns the previous value.
3632 ///
3633 /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3634 /// of this operation. All ordering modes are possible. Note that using
3635 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3636 /// using [`Release`] makes the load part [`Relaxed`].
3637 ///
3638 /// **Note**: This method is only available on platforms that support atomic operations on
3639 #[doc = concat!("[`", $s_int_type, "`].")]
3640 ///
3641 /// # Examples
3642 ///
3643 /// ```
3644 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3645 ///
3646 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3647 /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3648 /// assert_eq!(foo.load(Ordering::SeqCst), 42);
3649 /// ```
3650 ///
3651 /// If you want to obtain the maximum value in one step, you can use the following:
3652 ///
3653 /// ```
3654 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3655 ///
3656 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3657 /// let bar = 42;
3658 /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3659 /// assert!(max_foo == 42);
3660 /// ```
3661 #[inline]
3662 #[stable(feature = "atomic_min_max", since = "1.45.0")]
3663 #[$cfg_cas]
3664 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3665 #[rustc_should_not_be_called_on_const_items]
3666 pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3667 // SAFETY: data races are prevented by atomic intrinsics.
3668 unsafe { $max_fn(self.v.get(), val, order) }
3669 }
3670
3671 /// Minimum with the current value.
3672 ///
3673 /// Finds the minimum of the current value and the argument `val`, and
3674 /// sets the new value to the result.
3675 ///
3676 /// Returns the previous value.
3677 ///
3678 /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3679 /// of this operation. All ordering modes are possible. Note that using
3680 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3681 /// using [`Release`] makes the load part [`Relaxed`].
3682 ///
3683 /// **Note**: This method is only available on platforms that support atomic operations on
3684 #[doc = concat!("[`", $s_int_type, "`].")]
3685 ///
3686 /// # Examples
3687 ///
3688 /// ```
3689 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3690 ///
3691 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3692 /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3693 /// assert_eq!(foo.load(Ordering::Relaxed), 23);
3694 /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3695 /// assert_eq!(foo.load(Ordering::Relaxed), 22);
3696 /// ```
3697 ///
3698 /// If you want to obtain the minimum value in one step, you can use the following:
3699 ///
3700 /// ```
3701 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3702 ///
3703 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3704 /// let bar = 12;
3705 /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3706 /// assert_eq!(min_foo, 12);
3707 /// ```
3708 #[inline]
3709 #[stable(feature = "atomic_min_max", since = "1.45.0")]
3710 #[$cfg_cas]
3711 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3712 #[rustc_should_not_be_called_on_const_items]
3713 pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3714 // SAFETY: data races are prevented by atomic intrinsics.
3715 unsafe { $min_fn(self.v.get(), val, order) }
3716 }
3717
3718 /// Returns a mutable pointer to the underlying integer.
3719 ///
3720 /// Doing non-atomic reads and writes on the resulting integer can be a data race.
3721 /// This method is mostly useful for FFI, where the function signature may use
3722 #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
3723 ///
3724 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
3725 /// atomic types work with interior mutability. All modifications of an atomic change the value
3726 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
3727 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
3728 /// requirements of the [memory model].
3729 ///
3730 /// # Examples
3731 ///
3732 /// ```ignore (extern-declaration)
3733 /// # fn main() {
3734 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
3735 ///
3736 /// extern "C" {
3737 #[doc = concat!(" fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
3738 /// }
3739 ///
3740 #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
3741 ///
3742 /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
3743 /// unsafe {
3744 /// my_atomic_op(atomic.as_ptr());
3745 /// }
3746 /// # }
3747 /// ```
3748 ///
3749 /// [memory model]: self#memory-model-for-atomic-accesses
3750 #[inline]
3751 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
3752 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
3753 #[rustc_never_returns_null_ptr]
3754 pub const fn as_ptr(&self) -> *mut $int_type {
3755 self.v.get()
3756 }
3757 }
3758 }
3759}
3760
3761#[cfg(target_has_atomic_load_store = "8")]
3762#[cfg(not(feature = "ferrocene_subset"))]
3763atomic_int! {
3764 cfg(target_has_atomic = "8"),
3765 cfg(target_has_atomic_equal_alignment = "8"),
3766 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3767 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3768 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3769 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3770 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3771 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3772 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3773 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3774 rustc_diagnostic_item = "AtomicI8",
3775 "i8",
3776 "",
3777 atomic_min, atomic_max,
3778 1,
3779 i8 AtomicI8
3780}
3781#[cfg(target_has_atomic_load_store = "8")]
3782atomic_int! {
3783 cfg(target_has_atomic = "8"),
3784 cfg(target_has_atomic_equal_alignment = "8"),
3785 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3786 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3787 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3788 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3789 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3790 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3791 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3792 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3793 rustc_diagnostic_item = "AtomicU8",
3794 "u8",
3795 "",
3796 atomic_umin, atomic_umax,
3797 1,
3798 u8 AtomicU8
3799}
3800#[cfg(target_has_atomic_load_store = "16")]
3801#[cfg(not(feature = "ferrocene_subset"))]
3802atomic_int! {
3803 cfg(target_has_atomic = "16"),
3804 cfg(target_has_atomic_equal_alignment = "16"),
3805 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3806 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3807 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3808 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3809 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3810 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3811 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3812 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3813 rustc_diagnostic_item = "AtomicI16",
3814 "i16",
3815 "",
3816 atomic_min, atomic_max,
3817 2,
3818 i16 AtomicI16
3819}
3820#[cfg(target_has_atomic_load_store = "16")]
3821atomic_int! {
3822 cfg(target_has_atomic = "16"),
3823 cfg(target_has_atomic_equal_alignment = "16"),
3824 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3825 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3826 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3827 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3828 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3829 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3830 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3831 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3832 rustc_diagnostic_item = "AtomicU16",
3833 "u16",
3834 "",
3835 atomic_umin, atomic_umax,
3836 2,
3837 u16 AtomicU16
3838}
3839#[cfg(target_has_atomic_load_store = "32")]
3840#[cfg(not(feature = "ferrocene_subset"))]
3841atomic_int! {
3842 cfg(target_has_atomic = "32"),
3843 cfg(target_has_atomic_equal_alignment = "32"),
3844 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3845 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3846 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3847 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3848 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3849 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3850 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3851 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3852 rustc_diagnostic_item = "AtomicI32",
3853 "i32",
3854 "",
3855 atomic_min, atomic_max,
3856 4,
3857 i32 AtomicI32
3858}
3859#[cfg(target_has_atomic_load_store = "32")]
3860atomic_int! {
3861 cfg(target_has_atomic = "32"),
3862 cfg(target_has_atomic_equal_alignment = "32"),
3863 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3864 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3865 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3866 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3867 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3868 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3869 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3870 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3871 rustc_diagnostic_item = "AtomicU32",
3872 "u32",
3873 "",
3874 atomic_umin, atomic_umax,
3875 4,
3876 u32 AtomicU32
3877}
3878#[cfg(target_has_atomic_load_store = "64")]
3879#[cfg(not(feature = "ferrocene_subset"))]
3880atomic_int! {
3881 cfg(target_has_atomic = "64"),
3882 cfg(target_has_atomic_equal_alignment = "64"),
3883 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3884 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3885 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3886 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3887 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3888 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3889 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3890 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3891 rustc_diagnostic_item = "AtomicI64",
3892 "i64",
3893 "",
3894 atomic_min, atomic_max,
3895 8,
3896 i64 AtomicI64
3897}
3898#[cfg(target_has_atomic_load_store = "64")]
3899atomic_int! {
3900 cfg(target_has_atomic = "64"),
3901 cfg(target_has_atomic_equal_alignment = "64"),
3902 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3903 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3904 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3905 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3906 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3907 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3908 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3909 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3910 rustc_diagnostic_item = "AtomicU64",
3911 "u64",
3912 "",
3913 atomic_umin, atomic_umax,
3914 8,
3915 u64 AtomicU64
3916}
3917#[cfg(target_has_atomic_load_store = "128")]
3918#[cfg(not(feature = "ferrocene_subset"))]
3919atomic_int! {
3920 cfg(target_has_atomic = "128"),
3921 cfg(target_has_atomic_equal_alignment = "128"),
3922 unstable(feature = "integer_atomics", issue = "99069"),
3923 unstable(feature = "integer_atomics", issue = "99069"),
3924 unstable(feature = "integer_atomics", issue = "99069"),
3925 unstable(feature = "integer_atomics", issue = "99069"),
3926 unstable(feature = "integer_atomics", issue = "99069"),
3927 unstable(feature = "integer_atomics", issue = "99069"),
3928 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3929 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3930 rustc_diagnostic_item = "AtomicI128",
3931 "i128",
3932 "#![feature(integer_atomics)]\n\n",
3933 atomic_min, atomic_max,
3934 16,
3935 i128 AtomicI128
3936}
3937#[cfg(target_has_atomic_load_store = "128")]
3938#[cfg(not(feature = "ferrocene_subset"))]
3939atomic_int! {
3940 cfg(target_has_atomic = "128"),
3941 cfg(target_has_atomic_equal_alignment = "128"),
3942 unstable(feature = "integer_atomics", issue = "99069"),
3943 unstable(feature = "integer_atomics", issue = "99069"),
3944 unstable(feature = "integer_atomics", issue = "99069"),
3945 unstable(feature = "integer_atomics", issue = "99069"),
3946 unstable(feature = "integer_atomics", issue = "99069"),
3947 unstable(feature = "integer_atomics", issue = "99069"),
3948 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3949 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3950 rustc_diagnostic_item = "AtomicU128",
3951 "u128",
3952 "#![feature(integer_atomics)]\n\n",
3953 atomic_umin, atomic_umax,
3954 16,
3955 u128 AtomicU128
3956}
3957
3958#[cfg(target_has_atomic_load_store = "ptr")]
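// Instantiates `AtomicIsize`/`AtomicUsize` (and their deprecated `*_INIT`
// constants) with the alignment matching the given pointer width.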
3959macro_rules! atomic_int_ptr_sized {
3960 ( $($target_pointer_width:literal $align:literal)* ) => { $(
3961 #[cfg(target_pointer_width = $target_pointer_width)]
3962 #[cfg(not(feature = "ferrocene_subset"))]
3963 atomic_int! {
3964 cfg(target_has_atomic = "ptr"),
3965 cfg(target_has_atomic_equal_alignment = "ptr"),
3966 stable(feature = "rust1", since = "1.0.0"),
3967 stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3968 stable(feature = "atomic_debug", since = "1.3.0"),
3969 stable(feature = "atomic_access", since = "1.15.0"),
3970 stable(feature = "atomic_from", since = "1.23.0"),
3971 stable(feature = "atomic_nand", since = "1.27.0"),
3972 rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3973 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3974 rustc_diagnostic_item = "AtomicIsize",
3975 "isize",
3976 "",
3977 atomic_min, atomic_max,
3978 $align,
3979 isize AtomicIsize
3980 }
3981 #[cfg(target_pointer_width = $target_pointer_width)]
3982 atomic_int! {
3983 cfg(target_has_atomic = "ptr"),
3984 cfg(target_has_atomic_equal_alignment = "ptr"),
3985 stable(feature = "rust1", since = "1.0.0"),
3986 stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3987 stable(feature = "atomic_debug", since = "1.3.0"),
3988 stable(feature = "atomic_access", since = "1.15.0"),
3989 stable(feature = "atomic_from", since = "1.23.0"),
3990 stable(feature = "atomic_nand", since = "1.27.0"),
3991 rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3992 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3993 rustc_diagnostic_item = "AtomicUsize",
3994 "usize",
3995 "",
3996 atomic_umin, atomic_umax,
3997 $align,
3998 usize AtomicUsize
3999 }
4000
4001 /// An [`AtomicIsize`] initialized to `0`.
4002 #[cfg(target_pointer_width = $target_pointer_width)]
4003 #[stable(feature = "rust1", since = "1.0.0")]
4004 #[deprecated(
4005 since = "1.34.0",
4006 note = "the `new` function is now preferred",
4007 suggestion = "AtomicIsize::new(0)",
4008 )]
4009 #[cfg(not(feature = "ferrocene_subset"))]
4010 pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0);
4011
4012 /// An [`AtomicUsize`] initialized to `0`.
4013 #[cfg(target_pointer_width = $target_pointer_width)]
4014 #[stable(feature = "rust1", since = "1.0.0")]
4015 #[deprecated(
4016 since = "1.34.0",
4017 note = "the `new` function is now preferred",
4018 suggestion = "AtomicUsize::new(0)",
4019 )]
4020 pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0);
4021 )* };
4022}
4023
4024#[cfg(target_has_atomic_load_store = "ptr")]
4025atomic_int_ptr_sized! {
4026 "16" 2
4027 "32" 4
4028 "64" 8
4029}
4030
/// Derives the strongest failure ordering that a compare-exchange with the given
/// success ordering may use: release semantics are dropped (a failed exchange
/// performs no store), and the failure ordering is never stronger than the
/// success ordering.
#[inline]
#[cfg(target_has_atomic)]
fn strongest_failure_ordering(order: Ordering) -> Ordering {
    match order {
        Relaxed => Relaxed,
        Release => Relaxed,
        Acquire => Acquire,
        AcqRel => Acquire,
        SeqCst => SeqCst,
    }
}
4042
4043#[inline]
4044#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4045unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
4046 // SAFETY: the caller must uphold the safety contract for `atomic_store`.
4047 unsafe {
4048 match order {
4049 Relaxed => intrinsics::atomic_store::<T, { AO::Relaxed }>(dst, val),
4050 Release => intrinsics::atomic_store::<T, { AO::Release }>(dst, val),
4051 SeqCst => intrinsics::atomic_store::<T, { AO::SeqCst }>(dst, val),
4052 Acquire => panic!("there is no such thing as an acquire store"),
4053 AcqRel => panic!("there is no such thing as an acquire-release store"),
4054 }
4055 }
4056}
4057
4058#[inline]
4059#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4060unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
4061 // SAFETY: the caller must uphold the safety contract for `atomic_load`.
4062 unsafe {
4063 match order {
4064 Relaxed => intrinsics::atomic_load::<T, { AO::Relaxed }>(dst),
4065 Acquire => intrinsics::atomic_load::<T, { AO::Acquire }>(dst),
4066 SeqCst => intrinsics::atomic_load::<T, { AO::SeqCst }>(dst),
4067 Release => panic!("there is no such thing as a release load"),
4068 AcqRel => panic!("there is no such thing as an acquire-release load"),
4069 }
4070 }
4071}
4072
4073#[inline]
4074#[cfg(target_has_atomic)]
4075#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4076unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4077 // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
4078 unsafe {
4079 match order {
4080 Relaxed => intrinsics::atomic_xchg::<T, { AO::Relaxed }>(dst, val),
4081 Acquire => intrinsics::atomic_xchg::<T, { AO::Acquire }>(dst, val),
4082 Release => intrinsics::atomic_xchg::<T, { AO::Release }>(dst, val),
4083 AcqRel => intrinsics::atomic_xchg::<T, { AO::AcqRel }>(dst, val),
4084 SeqCst => intrinsics::atomic_xchg::<T, { AO::SeqCst }>(dst, val),
4085 }
4086 }
4087}
4088
4089/// Returns the previous value (like __sync_fetch_and_add).
4090#[inline]
4091#[cfg(target_has_atomic)]
4092#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4093unsafe fn atomic_add<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4094 // SAFETY: the caller must uphold the safety contract for `atomic_add`.
4095 unsafe {
4096 match order {
4097 Relaxed => intrinsics::atomic_xadd::<T, U, { AO::Relaxed }>(dst, val),
4098 Acquire => intrinsics::atomic_xadd::<T, U, { AO::Acquire }>(dst, val),
4099 Release => intrinsics::atomic_xadd::<T, U, { AO::Release }>(dst, val),
4100 AcqRel => intrinsics::atomic_xadd::<T, U, { AO::AcqRel }>(dst, val),
4101 SeqCst => intrinsics::atomic_xadd::<T, U, { AO::SeqCst }>(dst, val),
4102 }
4103 }
4104}
4105
4106/// Returns the previous value (like __sync_fetch_and_sub).
4107#[inline]
4108#[cfg(target_has_atomic)]
4109#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4110unsafe fn atomic_sub<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4111 // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
4112 unsafe {
4113 match order {
4114 Relaxed => intrinsics::atomic_xsub::<T, U, { AO::Relaxed }>(dst, val),
4115 Acquire => intrinsics::atomic_xsub::<T, U, { AO::Acquire }>(dst, val),
4116 Release => intrinsics::atomic_xsub::<T, U, { AO::Release }>(dst, val),
4117 AcqRel => intrinsics::atomic_xsub::<T, U, { AO::AcqRel }>(dst, val),
4118 SeqCst => intrinsics::atomic_xsub::<T, U, { AO::SeqCst }>(dst, val),
4119 }
4120 }
4121}
4122
4123/// Publicly exposed for stdarch; nobody else should use this.
4124#[inline]
4125#[cfg(target_has_atomic)]
4126#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4127#[unstable(feature = "core_intrinsics", issue = "none")]
4128#[doc(hidden)]
4129pub unsafe fn atomic_compare_exchange<T: Copy>(
4130 dst: *mut T,
4131 old: T,
4132 new: T,
4133 success: Ordering,
4134 failure: Ordering,
4135) -> Result<T, T> {
4136 // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
4137 let (val, ok) = unsafe {
4138 match (success, failure) {
4139 (Relaxed, Relaxed) => {
4140 intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
4141 }
4142 (Relaxed, Acquire) => {
4143 intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
4144 }
4145 (Relaxed, SeqCst) => {
4146 intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
4147 }
4148 (Acquire, Relaxed) => {
4149 intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
4150 }
4151 (Acquire, Acquire) => {
4152 intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
4153 }
4154 (Acquire, SeqCst) => {
4155 intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
4156 }
4157 (Release, Relaxed) => {
4158 intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
4159 }
4160 (Release, Acquire) => {
4161 intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
4162 }
4163 (Release, SeqCst) => {
4164 intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
4165 }
4166 (AcqRel, Relaxed) => {
4167 intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
4168 }
4169 (AcqRel, Acquire) => {
4170 intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
4171 }
4172 (AcqRel, SeqCst) => {
4173 intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
4174 }
4175 (SeqCst, Relaxed) => {
4176 intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
4177 }
4178 (SeqCst, Acquire) => {
4179 intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
4180 }
4181 (SeqCst, SeqCst) => {
4182 intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
4183 }
4184 (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
4185 (_, Release) => panic!("there is no such thing as a release failure ordering"),
4186 }
4187 };
4188 if ok { Ok(val) } else { Err(val) }
4189}
4190
4191#[inline]
4192#[cfg(target_has_atomic)]
4193#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4194unsafe fn atomic_compare_exchange_weak<T: Copy>(
4195 dst: *mut T,
4196 old: T,
4197 new: T,
4198 success: Ordering,
4199 failure: Ordering,
4200) -> Result<T, T> {
4201 // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
4202 let (val, ok) = unsafe {
4203 match (success, failure) {
4204 (Relaxed, Relaxed) => {
4205 intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
4206 }
4207 (Relaxed, Acquire) => {
4208 intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
4209 }
4210 (Relaxed, SeqCst) => {
4211 intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
4212 }
4213 (Acquire, Relaxed) => {
4214 intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
4215 }
4216 (Acquire, Acquire) => {
4217 intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
4218 }
4219 (Acquire, SeqCst) => {
4220 intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
4221 }
4222 (Release, Relaxed) => {
4223 intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
4224 }
4225 (Release, Acquire) => {
4226 intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
4227 }
4228 (Release, SeqCst) => {
4229 intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
4230 }
4231 (AcqRel, Relaxed) => {
4232 intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
4233 }
4234 (AcqRel, Acquire) => {
4235 intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
4236 }
4237 (AcqRel, SeqCst) => {
4238 intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
4239 }
4240 (SeqCst, Relaxed) => {
4241 intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
4242 }
4243 (SeqCst, Acquire) => {
4244 intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
4245 }
4246 (SeqCst, SeqCst) => {
4247 intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
4248 }
4249 (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
4250 (_, Release) => panic!("there is no such thing as a release failure ordering"),
4251 }
4252 };
4253 if ok { Ok(val) } else { Err(val) }
4254}
4255
4256#[inline]
4257#[cfg(target_has_atomic)]
4258#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4259unsafe fn atomic_and<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4260 // SAFETY: the caller must uphold the safety contract for `atomic_and`
4261 unsafe {
4262 match order {
4263 Relaxed => intrinsics::atomic_and::<T, U, { AO::Relaxed }>(dst, val),
4264 Acquire => intrinsics::atomic_and::<T, U, { AO::Acquire }>(dst, val),
4265 Release => intrinsics::atomic_and::<T, U, { AO::Release }>(dst, val),
4266 AcqRel => intrinsics::atomic_and::<T, U, { AO::AcqRel }>(dst, val),
4267 SeqCst => intrinsics::atomic_and::<T, U, { AO::SeqCst }>(dst, val),
4268 }
4269 }
4270}
4271
4272#[inline]
4273#[cfg(target_has_atomic)]
4274#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4275unsafe fn atomic_nand<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4276 // SAFETY: the caller must uphold the safety contract for `atomic_nand`
4277 unsafe {
4278 match order {
4279 Relaxed => intrinsics::atomic_nand::<T, U, { AO::Relaxed }>(dst, val),
4280 Acquire => intrinsics::atomic_nand::<T, U, { AO::Acquire }>(dst, val),
4281 Release => intrinsics::atomic_nand::<T, U, { AO::Release }>(dst, val),
4282 AcqRel => intrinsics::atomic_nand::<T, U, { AO::AcqRel }>(dst, val),
4283 SeqCst => intrinsics::atomic_nand::<T, U, { AO::SeqCst }>(dst, val),
4284 }
4285 }
4286}
4287
4288#[inline]
4289#[cfg(target_has_atomic)]
4290#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4291unsafe fn atomic_or<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4292 // SAFETY: the caller must uphold the safety contract for `atomic_or`
4293 unsafe {
4294 match order {
            Relaxed => intrinsics::atomic_or::<T, U, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_or::<T, U, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_or::<T, U, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_or::<T, U, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_or::<T, U, { AO::SeqCst }>(dst, val),
4300 }
4301 }
4302}
4303
4304#[inline]
4305#[cfg(target_has_atomic)]
4306#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4307unsafe fn atomic_xor<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4308 // SAFETY: the caller must uphold the safety contract for `atomic_xor`
4309 unsafe {
4310 match order {
            Relaxed => intrinsics::atomic_xor::<T, U, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_xor::<T, U, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_xor::<T, U, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_xor::<T, U, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_xor::<T, U, { AO::SeqCst }>(dst, val),
4316 }
4317 }
4318}
4319
4320/// Updates `*dst` to the max value of `val` and the old value (signed comparison)
4321#[inline]
4322#[cfg(target_has_atomic)]
4323#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4324#[cfg(not(feature = "ferrocene_subset"))]
4325unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4326 // SAFETY: the caller must uphold the safety contract for `atomic_max`
4327 unsafe {
4328 match order {
4329 Relaxed => intrinsics::atomic_max::<T, { AO::Relaxed }>(dst, val),
4330 Acquire => intrinsics::atomic_max::<T, { AO::Acquire }>(dst, val),
4331 Release => intrinsics::atomic_max::<T, { AO::Release }>(dst, val),
4332 AcqRel => intrinsics::atomic_max::<T, { AO::AcqRel }>(dst, val),
4333 SeqCst => intrinsics::atomic_max::<T, { AO::SeqCst }>(dst, val),
4334 }
4335 }
4336}
4337
4338/// Updates `*dst` to the min value of `val` and the old value (signed comparison)
4339#[inline]
4340#[cfg(target_has_atomic)]
4341#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4342#[cfg(not(feature = "ferrocene_subset"))]
4343unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4344 // SAFETY: the caller must uphold the safety contract for `atomic_min`
4345 unsafe {
4346 match order {
4347 Relaxed => intrinsics::atomic_min::<T, { AO::Relaxed }>(dst, val),
4348 Acquire => intrinsics::atomic_min::<T, { AO::Acquire }>(dst, val),
4349 Release => intrinsics::atomic_min::<T, { AO::Release }>(dst, val),
4350 AcqRel => intrinsics::atomic_min::<T, { AO::AcqRel }>(dst, val),
4351 SeqCst => intrinsics::atomic_min::<T, { AO::SeqCst }>(dst, val),
4352 }
4353 }
4354}
4355
4356/// Updates `*dst` to the max value of `val` and the old value (unsigned comparison)
4357#[inline]
4358#[cfg(target_has_atomic)]
4359#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4360unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4361 // SAFETY: the caller must uphold the safety contract for `atomic_umax`
4362 unsafe {
4363 match order {
4364 Relaxed => intrinsics::atomic_umax::<T, { AO::Relaxed }>(dst, val),
4365 Acquire => intrinsics::atomic_umax::<T, { AO::Acquire }>(dst, val),
4366 Release => intrinsics::atomic_umax::<T, { AO::Release }>(dst, val),
4367 AcqRel => intrinsics::atomic_umax::<T, { AO::AcqRel }>(dst, val),
4368 SeqCst => intrinsics::atomic_umax::<T, { AO::SeqCst }>(dst, val),
4369 }
4370 }
4371}
4372
4373/// Updates `*dst` to the min value of `val` and the old value (unsigned comparison)
4374#[inline]
4375#[cfg(target_has_atomic)]
4376#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4377unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4378 // SAFETY: the caller must uphold the safety contract for `atomic_umin`
4379 unsafe {
4380 match order {
4381 Relaxed => intrinsics::atomic_umin::<T, { AO::Relaxed }>(dst, val),
4382 Acquire => intrinsics::atomic_umin::<T, { AO::Acquire }>(dst, val),
4383 Release => intrinsics::atomic_umin::<T, { AO::Release }>(dst, val),
4384 AcqRel => intrinsics::atomic_umin::<T, { AO::AcqRel }>(dst, val),
4385 SeqCst => intrinsics::atomic_umin::<T, { AO::SeqCst }>(dst, val),
4386 }
4387 }
4388}
4389
4390/// An atomic fence.
4391///
4392/// Fences create synchronization between themselves and atomic operations or fences in other
4393/// threads. To achieve this, a fence prevents the compiler and CPU from reordering certain types of
4394/// memory operations around it.
4395///
/// There are three different ways to use an atomic fence:
4397///
4398/// - atomic - fence synchronization: an atomic operation with (at least) [`Release`] ordering
4399/// semantics synchronizes with a fence with (at least) [`Acquire`] ordering semantics.
4400/// - fence - atomic synchronization: a fence with (at least) [`Release`] ordering semantics
4401/// synchronizes with an atomic operation with (at least) [`Acquire`] ordering semantics.
4402/// - fence - fence synchronization: a fence with (at least) [`Release`] ordering semantics
4403/// synchronizes with a fence with (at least) [`Acquire`] ordering semantics.
4404///
/// These three ways complement the regular, fence-less atomic - atomic synchronization.
4406///
4407/// ## Atomic - Fence
4408///
4409/// An atomic operation on one thread will synchronize with a fence on another thread when:
4410///
4411/// - on thread 1:
4412/// - an atomic operation 'X' with (at least) [`Release`] ordering semantics on some atomic
4413/// object 'm',
4414///
4415/// - is paired on thread 2 with:
///   - an atomic read 'Y' with any ordering on 'm',
4417/// - followed by a fence 'B' with (at least) [`Acquire`] ordering semantics.
4418///
4419/// This provides a happens-before dependence between X and B.
4420///
4421/// ```text
4422/// Thread 1 Thread 2
4423///
4424/// m.store(3, Release); X ---------
4425/// |
4426/// |
4427/// -------------> Y if m.load(Relaxed) == 3 {
4428/// B fence(Acquire);
4429/// ...
4430/// }
4431/// ```
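///
/// A minimal, runnable sketch of this pattern (the static `M` and the value `3` are
/// purely illustrative):
///
/// ```
/// use std::sync::atomic::{AtomicUsize, Ordering, fence};
/// use std::thread;
///
/// static M: AtomicUsize = AtomicUsize::new(0);
///
/// let t = thread::spawn(|| {
///     M.store(3, Ordering::Release); // X
/// });
///
/// // The load may or may not observe the store; if it does, the acquire
/// // fence makes X happen-before everything that follows it.
/// if M.load(Ordering::Relaxed) == 3 { // Y
///     fence(Ordering::Acquire); // B
/// }
/// t.join().unwrap();
/// ```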
4432///
4433/// ## Fence - Atomic
4434///
4435/// A fence on one thread will synchronize with an atomic operation on another thread when:
4436///
/// - on thread 1:
4438/// - a fence 'A' with (at least) [`Release`] ordering semantics,
4439/// - followed by an atomic write 'X' with any ordering on some atomic object 'm',
4440///
4441/// - is paired on thread 2 with:
///   - an atomic read 'Y' on 'm' with (at least) [`Acquire`] ordering semantics.
4443///
4444/// This provides a happens-before dependence between A and Y.
4445///
4446/// ```text
4447/// Thread 1 Thread 2
4448///
4449/// fence(Release); A
4450/// m.store(3, Relaxed); X ---------
4451/// |
4452/// |
4453/// -------------> Y if m.load(Acquire) == 3 {
4454/// ...
4455/// }
4456/// ```
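///
/// A minimal sketch of the mirrored pattern, with the fence on the writing side
/// (again, `M` and `3` are purely illustrative):
///
/// ```
/// use std::sync::atomic::{AtomicUsize, Ordering, fence};
/// use std::thread;
///
/// static M: AtomicUsize = AtomicUsize::new(0);
///
/// let t = thread::spawn(|| {
///     fence(Ordering::Release); // A
///     M.store(3, Ordering::Relaxed); // X
/// });
///
/// if M.load(Ordering::Acquire) == 3 { // Y
///     // A happens-before everything after Y.
/// }
/// t.join().unwrap();
/// ```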
4457///
4458/// ## Fence - Fence
4459///
4460/// A fence on one thread will synchronize with a fence on another thread when:
4461///
4462/// - on thread 1:
///   - a fence 'A' with (at least) [`Release`] ordering semantics,
4464/// - followed by an atomic write 'X' with any ordering on some atomic object 'm',
4465///
4466/// - is paired on thread 2 with:
4467/// - an atomic read 'Y' with any ordering on 'm',
4468/// - followed by a fence 'B' with (at least) [`Acquire`] ordering semantics.
4469///
4470/// This provides a happens-before dependence between A and B.
4471///
4472/// ```text
4473/// Thread 1 Thread 2
4474///
4475/// fence(Release); A --------------
4476/// m.store(3, Relaxed); X --------- |
4477/// | |
4478/// | |
4479/// -------------> Y if m.load(Relaxed) == 3 {
4480/// |-------> B fence(Acquire);
4481/// ...
4482/// }
4483/// ```
4484///
4485/// ## Mandatory Atomic
4486///
/// Note that in the examples above, it is crucial that the accesses to `m` are atomic. Fences cannot
/// be used to establish synchronization between non-atomic accesses in different threads. However,
/// thanks to the happens-before relationship, any non-atomic accesses that happen-before the atomic
/// operation or fence with (at least) [`Release`] ordering semantics are properly synchronized with
/// any non-atomic accesses that happen-after the atomic operation or fence with (at least)
/// [`Acquire`] ordering semantics.
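///
/// Building on the fence - fence pattern above, the following minimal sketch (the
/// statics `READY` and `DATA` are purely illustrative) shows how a non-atomic write
/// can be safely published this way:
///
/// ```
/// use std::sync::atomic::{AtomicBool, Ordering, fence};
/// use std::thread;
///
/// static READY: AtomicBool = AtomicBool::new(false);
/// static mut DATA: usize = 0;
///
/// let t = thread::spawn(|| {
///     // This non-atomic write happens-before the release fence 'A'.
///     unsafe { DATA = 42 };
///     fence(Ordering::Release); // A
///     READY.store(true, Ordering::Relaxed); // X
/// });
///
/// if READY.load(Ordering::Relaxed) { // Y
///     fence(Ordering::Acquire); // B
///     // The write to DATA happens-before B, so this non-atomic read does not race.
///     assert_eq!(unsafe { DATA }, 42);
/// }
/// t.join().unwrap();
/// ```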
4493///
4494/// ## Memory Ordering
4495///
4496/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`] and [`Release`]
4497/// semantics, participates in the global program order of the other [`SeqCst`] operations and/or
4498/// fences.
4499///
4500/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
4501///
4502/// # Panics
4503///
4504/// Panics if `order` is [`Relaxed`].
4505///
4506/// # Examples
4507///
4508/// ```
4509/// use std::sync::atomic::AtomicBool;
4510/// use std::sync::atomic::fence;
4511/// use std::sync::atomic::Ordering;
4512///
/// // A mutual exclusion primitive based on a spinlock.
4514/// pub struct Mutex {
4515/// flag: AtomicBool,
4516/// }
4517///
4518/// impl Mutex {
4519/// pub fn new() -> Mutex {
4520/// Mutex {
4521/// flag: AtomicBool::new(false),
4522/// }
4523/// }
4524///
4525/// pub fn lock(&self) {
4526/// // Wait until the old value is `false`.
4527/// while self
4528/// .flag
4529/// .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
4530/// .is_err()
4531/// {}
/// // This fence synchronizes-with the store in `unlock`.
4533/// fence(Ordering::Acquire);
4534/// }
4535///
4536/// pub fn unlock(&self) {
4537/// self.flag.store(false, Ordering::Release);
4538/// }
4539/// }
4540/// ```
4541#[inline]
4542#[stable(feature = "rust1", since = "1.0.0")]
4543#[rustc_diagnostic_item = "fence"]
4544#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4545pub fn fence(order: Ordering) {
4546 // SAFETY: using an atomic fence is safe.
4547 unsafe {
4548 match order {
4549 Acquire => intrinsics::atomic_fence::<{ AO::Acquire }>(),
4550 Release => intrinsics::atomic_fence::<{ AO::Release }>(),
4551 AcqRel => intrinsics::atomic_fence::<{ AO::AcqRel }>(),
4552 SeqCst => intrinsics::atomic_fence::<{ AO::SeqCst }>(),
4553 Relaxed => panic!("there is no such thing as a relaxed fence"),
4554 }
4555 }
4556}
4557
4558/// A "compiler-only" atomic fence.
4559///
4560/// Like [`fence`], this function establishes synchronization with other atomic operations and
4561/// fences. However, unlike [`fence`], `compiler_fence` only establishes synchronization with
4562/// operations *in the same thread*. This may at first sound rather useless, since code within a
4563/// thread is typically already totally ordered and does not need any further synchronization.
4564/// However, there are cases where code can run on the same thread without being ordered:
4565/// - The most common case is that of a *signal handler*: a signal handler runs in the same thread
4566/// as the code it interrupted, but it is not ordered with respect to that code. `compiler_fence`
4567/// can be used to establish synchronization between a thread and its signal handler, the same way
4568/// that `fence` can be used to establish synchronization across threads.
4569/// - Similar situations can arise in embedded programming with interrupt handlers, or in custom
4570/// implementations of preemptive green threads. In general, `compiler_fence` can establish
4571/// synchronization with code that is guaranteed to run on the same hardware CPU.
4572///
4573/// See [`fence`] for how a fence can be used to achieve synchronization. Note that just like
4574/// [`fence`], synchronization still requires atomic operations to be used in both threads -- it is
4575/// not possible to perform synchronization entirely with fences and non-atomic operations.
4576///
4577/// `compiler_fence` does not emit any machine code, but restricts the kinds of memory re-ordering
4578/// the compiler is allowed to do. `compiler_fence` corresponds to [`atomic_signal_fence`] in C and
4579/// C++.
4580///
4581/// [`atomic_signal_fence`]: https://en.cppreference.com/w/cpp/atomic/atomic_signal_fence
4582///
4583/// # Panics
4584///
4585/// Panics if `order` is [`Relaxed`].
4586///
4587/// # Examples
4588///
4589/// Without the two `compiler_fence` calls, the read of `IMPORTANT_VARIABLE` in `signal_handler`
4590/// is *undefined behavior* due to a data race, despite everything happening in a single thread.
4591/// This is because the signal handler is considered to run concurrently with its associated
4592/// thread, and explicit synchronization is required to pass data between a thread and its
4593/// signal handler. The code below uses two `compiler_fence` calls to establish the usual
4594/// release-acquire synchronization pattern (see [`fence`] for an image).
4595///
4596/// ```
4597/// use std::sync::atomic::AtomicBool;
4598/// use std::sync::atomic::Ordering;
4599/// use std::sync::atomic::compiler_fence;
4600///
4601/// static mut IMPORTANT_VARIABLE: usize = 0;
4602/// static IS_READY: AtomicBool = AtomicBool::new(false);
4603///
4604/// fn main() {
4605/// unsafe { IMPORTANT_VARIABLE = 42 };
/// // Marks earlier writes as being released by future relaxed stores.
4607/// compiler_fence(Ordering::Release);
4608/// IS_READY.store(true, Ordering::Relaxed);
4609/// }
4610///
4611/// fn signal_handler() {
4612/// if IS_READY.load(Ordering::Relaxed) {
/// // Acquires writes that were released by relaxed stores that we read from.
4614/// compiler_fence(Ordering::Acquire);
4615/// assert_eq!(unsafe { IMPORTANT_VARIABLE }, 42);
4616/// }
4617/// }
4618/// ```
4619#[inline]
4620#[stable(feature = "compiler_fences", since = "1.21.0")]
4621#[rustc_diagnostic_item = "compiler_fence"]
4622#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4623pub fn compiler_fence(order: Ordering) {
4624 // SAFETY: using an atomic fence is safe.
4625 unsafe {
4626 match order {
4627 Acquire => intrinsics::atomic_singlethreadfence::<{ AO::Acquire }>(),
4628 Release => intrinsics::atomic_singlethreadfence::<{ AO::Release }>(),
4629 AcqRel => intrinsics::atomic_singlethreadfence::<{ AO::AcqRel }>(),
4630 SeqCst => intrinsics::atomic_singlethreadfence::<{ AO::SeqCst }>(),
4631 Relaxed => panic!("there is no such thing as a relaxed fence"),
4632 }
4633 }
4634}
4635
4636#[cfg(target_has_atomic_load_store = "8")]
4637#[stable(feature = "atomic_debug", since = "1.3.0")]
4638#[cfg(not(feature = "ferrocene_subset"))]
4639impl fmt::Debug for AtomicBool {
4640 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4641 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4642 }
4643}
4644
4645#[cfg(target_has_atomic_load_store = "ptr")]
4646#[stable(feature = "atomic_debug", since = "1.3.0")]
4647#[cfg(not(feature = "ferrocene_subset"))]
4648impl<T> fmt::Debug for AtomicPtr<T> {
4649 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4650 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4651 }
4652}
4653
4654#[cfg(target_has_atomic_load_store = "ptr")]
4655#[stable(feature = "atomic_pointer", since = "1.24.0")]
4656#[cfg(not(feature = "ferrocene_subset"))]
4657impl<T> fmt::Pointer for AtomicPtr<T> {
4658 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4659 fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
4660 }
4661}
4662
4663/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
4664///
4665/// This function is deprecated in favor of [`hint::spin_loop`].
4666///
4667/// [`hint::spin_loop`]: crate::hint::spin_loop
4668#[inline]
4669#[stable(feature = "spin_loop_hint", since = "1.24.0")]
4670#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
4671#[cfg(not(feature = "ferrocene_subset"))]
4672pub fn spin_loop_hint() {
4673 spin_loop()
4674}