core/sync/atomic.rs

//! Atomic types
//!
//! Atomic types provide primitive shared-memory communication between
//! threads, and are the building blocks of other concurrent
//! types.
//!
//! This module defines atomic versions of a select number of primitive
//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
//! [`AtomicI8`], [`AtomicU16`], etc.
//! Atomic types present operations that, when used correctly, synchronize
//! updates between threads.
//!
//! Atomic variables are safe to share between threads (they implement [`Sync`])
//! but they do not themselves provide the mechanism for sharing and follow the
//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
//! atomically-reference-counted shared pointer).
//!
//! [arc]: ../../../std/sync/struct.Arc.html
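//!
//! A minimal sketch of that pattern:
//!
//! ```
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::thread;
//!
//! let val = Arc::new(AtomicUsize::new(5));
//! let shared = Arc::clone(&val);
//! let handle = thread::spawn(move || shared.fetch_add(1, Ordering::Relaxed));
//! handle.join().unwrap();
//! assert_eq!(val.load(Ordering::Relaxed), 6);
//! ```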
//!
//! Atomic types may be stored in static variables, initialized using
//! the constant initializers like [`AtomicBool::new`]. Atomic statics
//! are often used for lazy global initialization.
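//!
//! For example, a one-time initialization flag can live in a static (a minimal
//! sketch; the `INITIALIZED` name is purely illustrative):
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! static INITIALIZED: AtomicBool = AtomicBool::new(false);
//!
//! // The first caller to observe `false` performs the one-time setup.
//! if !INITIALIZED.swap(true, Ordering::AcqRel) {
//!     // ... run one-time initialization here ...
//! }
//! ```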
//!
//! ## Memory model for atomic accesses
//!
//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically the rules
//! from the [`intro.races`][cpp-intro.races] section, without the "consume" memory ordering. Since
//! C++ uses an object-based memory model whereas Rust is access-based, a bit of translation work
//! has to be done to apply the C++ rules to Rust: whenever C++ talks about "the value of an
//! object", we understand that to mean the resulting bytes obtained when doing a read. When the C++
//! standard talks about "the value of an atomic object", this refers to the result of doing an
//! atomic load (via the operations provided in this module). A "modification of an atomic object"
//! refers to an atomic store.
//!
//! The end result is *almost* equivalent to saying that creating a *shared reference* to one of the
//! Rust atomic types corresponds to creating an `atomic_ref` in C++, with the `atomic_ref` being
//! destroyed when the lifetime of the shared reference ends. The main difference is that Rust
//! permits concurrent atomic and non-atomic reads to the same memory; those cause no issue in the
//! C++ memory model and are only forbidden in C++ because memory is partitioned into "atomic
//! objects" and "non-atomic objects" (with `atomic_ref` temporarily converting a non-atomic object
//! into an atomic object).
//!
//! The most important aspect of this model is that *data races* are undefined behavior. A data race
//! is defined as conflicting non-synchronized accesses where at least one of the accesses is
//! non-atomic. Here, accesses are *conflicting* if they affect overlapping regions of memory and at
//! least one of them is a write. (A `compare_exchange` or `compare_exchange_weak` that does not
//! succeed is not considered a write.) They are *non-synchronized* if neither of them
//! *happens-before* the other, according to the happens-before order of the memory model.
//!
//! The other possible cause of undefined behavior in the memory model is mixed-size accesses: Rust
//! inherits the C++ limitation that non-synchronized conflicting atomic accesses may not partially
//! overlap. In other words, every pair of non-synchronized atomic accesses must be either disjoint,
//! access the exact same memory (including using the same access size), or both be reads.
//!
//! Each atomic access takes an [`Ordering`] which defines how the operation interacts with the
//! happens-before order. These orderings behave the same as the corresponding [C++20 atomic
//! orderings][cpp_memory_order]. For more information, see the [nomicon].
//!
//! [cpp]: https://en.cppreference.com/w/cpp/atomic
//! [cpp-intro.races]: https://timsong-cpp.github.io/cppwp/n4868/intro.multithread#intro.races
//! [cpp_memory_order]: https://en.cppreference.com/w/cpp/atomic/memory_order
//! [nomicon]: ../../../nomicon/atomics.html
//!
//! ```rust,no_run undefined_behavior
//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
//! use std::mem::transmute;
//! use std::thread;
//!
//! let atomic = AtomicU16::new(0);
//!
//! thread::scope(|s| {
//!     // This is UB: conflicting non-synchronized accesses, at least one of which is non-atomic.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: the accesses do not conflict (as none of them performs any modification).
//!     // In C++ this would be disallowed since creating an `atomic_ref` precludes
//!     // further non-atomic accesses, but Rust does not have that limitation.
//!     s.spawn(|| atomic.load(Ordering::Relaxed)); // atomic load
//!     s.spawn(|| unsafe { atomic.as_ptr().read() }); // non-atomic read
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that the atomic
//!     // store happens-before the non-atomic write.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     handle.join().expect("thread won't panic"); // synchronize
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is UB: non-synchronized conflicting differently-sized atomic accesses.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that
//!     // the 1-byte store happens-before the 2-byte store.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     handle.join().expect("thread won't panic");
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//! ```
//!
//! # Portability
//!
//! All atomic types in this module are guaranteed to be [lock-free] if they're
//! available. This means they don't internally acquire a global mutex. Atomic
//! types and operations are not guaranteed to be wait-free. This means that
//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
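//!
//! A minimal sketch of what such a compare-and-swap loop might look like
//! (illustrative only, not the actual implementation):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! fn fetch_or_with_cas_loop(a: &AtomicUsize, val: usize) -> usize {
//!     let mut old = a.load(Ordering::Relaxed);
//!     loop {
//!         match a.compare_exchange_weak(old, old | val, Ordering::SeqCst, Ordering::Relaxed) {
//!             Ok(prev) => return prev,
//!             Err(prev) => old = prev,
//!         }
//!     }
//! }
//!
//! let a = AtomicUsize::new(0b01);
//! assert_eq!(fetch_or_with_cas_loop(&a, 0b10), 0b01);
//! assert_eq!(a.load(Ordering::Relaxed), 0b11);
//! ```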
//!
//! Atomic operations may be implemented at the instruction layer with
//! larger-size atomics. For example, some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! have an impact on the correctness of code; it's just something to be aware of.
//!
//! The atomic types in this module might not be available on all platforms. The
//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
//!
//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
//!   `AtomicI64` types.
//! * ARM platforms like `armv5te` that aren't targeting Linux only provide `load`
//!   and `store` operations, and do not support Compare and Swap (CAS)
//!   operations, such as `swap`, `fetch_add`, etc. Additionally on Linux,
//!   these CAS operations are implemented via [operating system support], which
//!   may come with a performance penalty.
//! * ARM targets with `thumbv6m` only provide `load` and `store` operations,
//!   and do not support Compare and Swap (CAS) operations, such as `swap`,
//!   `fetch_add`, etc.
//!
//! [operating system support]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
//!
//! Note that future platforms may be added that also do not have support for
//! some atomic operations. Maximally portable code will want to be careful
//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
//! generally the most portable, but even then they're not available everywhere.
//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
//! `core` does not.
//!
//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
//! compile based on the target's supported bit widths. It is a key-value
//! option set for each supported size, with values "8", "16", "32", "64",
//! "128", and "ptr" for pointer-sized atomics.
//!
//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
//!
//! # Atomic accesses to read-only memory
//!
//! In general, *all* atomic accesses on read-only memory are undefined behavior. For instance, attempting
//! to do a `compare_exchange` that will definitely fail (making it conceptually a read-only
//! operation) can still cause a segmentation fault if the underlying memory page is mapped read-only. Since
//! atomic `load`s might be implemented using compare-exchange operations, even a `load` can fault
//! on read-only memory.
//!
//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
//! is read-write; the only exceptions are memory created by `const` items or `static` items without
//! interior mutability, and memory that was specifically marked as read-only by the operating
//! system via platform-specific APIs.
//!
//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
//! undefined behavior. The exact size limit for what makes a load "sufficiently small" varies
//! depending on the target:
//!
//! | `target_arch` | Size limit |
//! |---------------|---------|
//! | `x86`, `arm`, `loongarch32`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
//!
//! Atomic loads that are larger than this limit, as well as atomic loads with ordering other
//! than `Relaxed`, as well as *all* atomic loads on targets not listed in the table, might still
//! work on read-only memory under certain conditions, but that is not a stable guarantee and
//! should not be relied upon.
//!
//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
//! acquire fence instead.
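//!
//! A minimal sketch of that pattern:
//!
//! ```
//! use std::sync::atomic::{fence, AtomicU32, Ordering};
//!
//! static READY: AtomicU32 = AtomicU32::new(0);
//!
//! // A relaxed load followed by an acquire fence synchronizes at least as
//! // strongly as an acquire load, but works on read-only memory.
//! let _value = READY.load(Ordering::Relaxed);
//! fence(Ordering::Acquire);
//! ```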
//!
//! # Examples
//!
//! A simple spinlock:
//!
//! ```ignore-wasm
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::{hint, thread};
//!
//! fn main() {
//!     let spinlock = Arc::new(AtomicUsize::new(1));
//!
//!     let spinlock_clone = Arc::clone(&spinlock);
//!
//!     let thread = thread::spawn(move || {
//!         spinlock_clone.store(0, Ordering::Release);
//!     });
//!
//!     // Wait for the other thread to release the lock
//!     while spinlock.load(Ordering::Acquire) != 0 {
//!         hint::spin_loop();
//!     }
//!
//!     if let Err(panic) = thread.join() {
//!         println!("Thread had an error: {panic:?}");
//!     }
//! }
//! ```
//!
//! Keep a global count of live threads:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
//!
//! // Note that Relaxed ordering doesn't synchronize anything
//! // except the global thread counter itself.
//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
//! // Note that this number may not be accurate at the moment of printing
//! // because some other thread may already have changed the static value.
//! println!("live threads: {}", old_thread_count + 1);
//! ```

#![stable(feature = "rust1", since = "1.0.0")]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
#![rustc_diagnostic_item = "atomic_mod"]
// Clippy complains about the pattern of "safe function calling unsafe function taking pointers".
// This happens with AtomicPtr intrinsics but is fine, as the pointers clippy is concerned about
// are just normal values that get loaded/stored, but not dereferenced.
#![allow(clippy::not_unsafe_ptr_arg_deref)]

use self::Ordering::*;
use crate::cell::UnsafeCell;
#[cfg(not(feature = "ferrocene_certified"))]
use crate::hint::spin_loop;
use crate::intrinsics::AtomicOrdering as AO;
#[cfg(not(feature = "ferrocene_certified"))]
use crate::{fmt, intrinsics};

// Ferrocene addition: imports for certified subset
#[cfg(feature = "ferrocene_certified")]
#[rustfmt::skip]
use crate::intrinsics;

trait Sealed {}

/// A marker trait for primitive types which can be modified atomically.
///
/// This is an implementation detail for <code>[Atomic]\<T></code> which may disappear or be replaced at any time.
///
/// # Safety
///
/// Types implementing this trait must be primitives that can be modified atomically.
///
/// The associated `Self::AtomicInner` type must have the same size and bit validity as `Self`,
/// but may have a higher alignment requirement, so the following `transmute`s are sound:
///
/// - `&mut Self::AtomicInner` as `&mut Self`
/// - `Self` as `Self::AtomicInner` or the reverse
#[unstable(
    feature = "atomic_internals",
    reason = "implementation detail which may disappear or be replaced at any time",
    issue = "none"
)]
#[expect(private_bounds)]
pub unsafe trait AtomicPrimitive: Sized + Copy + Sealed {
    /// Temporary implementation detail.
    type AtomicInner: Sized;
}

macro impl_atomic_primitive(
    $Atom:ident $(<$T:ident>)? ($Primitive:ty),
    size($size:literal),
    align($align:literal) $(,)?
) {
    impl $(<$T>)? Sealed for $Primitive {}

    #[unstable(
        feature = "atomic_internals",
        reason = "implementation detail which may disappear or be replaced at any time",
        issue = "none"
    )]
    #[cfg(target_has_atomic_load_store = $size)]
    unsafe impl $(<$T>)? AtomicPrimitive for $Primitive {
        type AtomicInner = $Atom $(<$T>)?;
    }
}

#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicBool(bool), size("8"), align(1));
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicI8(i8), size("8"), align(1));
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicU8(u8), size("8"), align(1));
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicI16(i16), size("16"), align(2));
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicU16(u16), size("16"), align(2));
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicI32(i32), size("32"), align(4));
impl_atomic_primitive!(AtomicU32(u32), size("32"), align(4));
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicI64(i64), size("64"), align(8));
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicU64(u64), size("64"), align(8));
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicI128(i128), size("128"), align(16));
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicU128(u128), size("128"), align(16));

#[cfg(target_pointer_width = "16")]
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(2));
#[cfg(target_pointer_width = "32")]
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(4));
#[cfg(target_pointer_width = "64")]
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(8));

#[cfg(target_pointer_width = "16")]
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(2));
#[cfg(target_pointer_width = "32")]
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(4));
#[cfg(target_pointer_width = "64")]
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(8));

#[cfg(target_pointer_width = "16")]
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(2));
#[cfg(target_pointer_width = "32")]
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(4));
#[cfg(target_pointer_width = "64")]
#[cfg(not(feature = "ferrocene_certified"))]
impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(8));

/// A memory location which can be safely modified from multiple threads.
///
/// This has the same size and bit validity as the underlying type `T`. However,
/// the alignment of this type is always equal to its size, even on targets where
/// `T` has alignment less than its size.
///
/// For more about the differences between atomic types and non-atomic types as
/// well as information about the portability of this type, please see the
/// [module-level documentation].
///
/// **Note:** This type is only available on platforms that support atomic loads
/// and stores of `T`.
///
/// [module-level documentation]: crate::sync::atomic
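///
/// # Examples
///
/// A minimal sketch; the alias simply names the corresponding concrete atomic type:
///
/// ```
/// #![feature(generic_atomic)]
/// use std::sync::atomic::{Atomic, AtomicU32, Ordering};
///
/// // `Atomic<u32>` and `AtomicU32` are the same type.
/// let v: &Atomic<u32> = &AtomicU32::new(0);
/// v.store(1, Ordering::Relaxed);
/// assert_eq!(v.load(Ordering::Relaxed), 1);
/// ```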
#[unstable(feature = "generic_atomic", issue = "130539")]
#[cfg(not(feature = "ferrocene_certified"))]
pub type Atomic<T> = <T as AtomicPrimitive>::AtomicInner;

// Some architectures don't have byte-sized atomics, which results in LLVM
// emulating them using a LL/SC loop. However for AtomicBool we can take
// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
// instead, which LLVM can emulate using a larger atomic OR/AND operation.
//
// This list should only contain architectures which have word-sized atomic-or/
// atomic-and instructions but don't natively support byte-sized atomics.
#[cfg(target_has_atomic = "8")]
#[cfg(not(feature = "ferrocene_certified"))]
const EMULATE_ATOMIC_BOOL: bool = cfg!(any(
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "loongarch32",
    target_arch = "loongarch64"
));

/// A boolean type which can be safely shared between threads.
///
/// This type has the same size, alignment, and bit validity as a [`bool`].
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of `u8`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicBool"]
#[repr(C, align(1))]
#[cfg(not(feature = "ferrocene_certified"))]
pub struct AtomicBool {
    v: UnsafeCell<u8>,
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_certified"))]
impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

// Send is implicitly implemented for AtomicBool.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_certified"))]
unsafe impl Sync for AtomicBool {}

/// A raw pointer type which can be safely shared between threads.
///
/// This type has the same size and bit validity as a `*mut T`.
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of pointers. Its size depends on the target pointer's size.
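///
/// # Examples
///
/// A minimal usage sketch:
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let ptr = &mut 5;
/// let some_ptr = AtomicPtr::new(ptr);
///
/// // Load the pointer back and read through it; this is safe here because
/// // `ptr` is still valid and no other thread is writing to it.
/// let value = unsafe { *some_ptr.load(Ordering::Relaxed) };
/// assert_eq!(value, 5);
/// ```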
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicPtr"]
#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
#[cfg(not(feature = "ferrocene_certified"))]
pub struct AtomicPtr<T> {
    p: UnsafeCell<*mut T>,
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_certified"))]
impl<T> Default for AtomicPtr<T> {
    /// Creates a null `AtomicPtr<T>`.
    fn default() -> AtomicPtr<T> {
        AtomicPtr::new(crate::ptr::null_mut())
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_certified"))]
unsafe impl<T> Send for AtomicPtr<T> {}
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(feature = "ferrocene_certified"))]
unsafe impl<T> Sync for AtomicPtr<T> {}

/// Atomic memory orderings
///
/// Memory orderings specify the way atomic operations synchronize memory.
/// In its weakest form, [`Ordering::Relaxed`], only the memory directly touched by the
/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
/// operations synchronizes other memory while additionally preserving a total order of such
/// operations across all threads.
///
/// Rust's memory orderings are [the same as those of
/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
///
/// For more information see the [nomicon].
///
/// [nomicon]: ../../../nomicon/atomics.html
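///
/// # Examples
///
/// A `Release` store paired with an `Acquire` load passes data between threads
/// (a minimal sketch):
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
/// use std::thread;
///
/// static DATA: AtomicU32 = AtomicU32::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// thread::scope(|s| {
///     s.spawn(|| {
///         DATA.store(42, Ordering::Relaxed);
///         READY.store(true, Ordering::Release); // publishes everything above
///     });
///     s.spawn(|| {
///         while !READY.load(Ordering::Acquire) {} // synchronizes with the store
///         assert_eq!(DATA.load(Ordering::Relaxed), 42);
///     });
/// });
/// ```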
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(not(feature = "ferrocene_certified"), derive(Copy, Clone, Debug, Eq, PartialEq, Hash))]
#[cfg_attr(feature = "ferrocene_certified", derive(Copy, Clone))]
#[non_exhaustive]
#[rustc_diagnostic_item = "Ordering"]
pub enum Ordering {
    /// No ordering constraints, only atomic operations.
    ///
    /// Corresponds to [`memory_order_relaxed`] in C++20.
    ///
    /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Relaxed,
    /// When coupled with a store, all previous operations become ordered
    /// before any load of this value with [`Acquire`] (or stronger) ordering.
    /// In particular, all previous writes become visible to all threads
    /// that perform an [`Acquire`] (or stronger) load of this value.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] load operation!
    ///
    /// This ordering is only applicable for operations that can perform a store.
    ///
    /// Corresponds to [`memory_order_release`] in C++20.
    ///
    /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Release,
    /// When coupled with a load, if the loaded value was written by a store operation with
    /// [`Release`] (or stronger) ordering, then all subsequent operations
    /// become ordered after that store. In particular, all subsequent loads will see data
    /// written before the store.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] store operation!
    ///
    /// This ordering is only applicable for operations that can perform a load.
    ///
    /// Corresponds to [`memory_order_acquire`] in C++20.
    ///
    /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Acquire,
    /// Has the effects of both [`Acquire`] and [`Release`] together:
    /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
    ///
    /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
    /// not performing any store and hence it has just [`Acquire`] ordering. However,
    /// `AcqRel` will never perform [`Relaxed`] accesses.
    ///
    /// This ordering is only applicable for operations that combine both loads and stores.
    ///
    /// Corresponds to [`memory_order_acq_rel`] in C++20.
    ///
    /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    AcqRel,
    /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
    /// operations, respectively) with the additional guarantee that all threads see all
    /// sequentially consistent operations in the same order.
    ///
    /// Corresponds to [`memory_order_seq_cst`] in C++20.
    ///
    /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    SeqCst,
}

/// An [`AtomicBool`] initialized to `false`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(
    since = "1.34.0",
    note = "the `new` function is now preferred",
    suggestion = "AtomicBool::new(false)"
)]
#[cfg(not(feature = "ferrocene_certified"))]
pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);

#[cfg(target_has_atomic_load_store = "8")]
#[cfg(not(feature = "ferrocene_certified"))]
impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
    #[must_use]
    pub const fn new(v: bool) -> AtomicBool {
        AtomicBool { v: UnsafeCell::new(v as u8) }
    }

    /// Creates a new `AtomicBool` from a pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{self, AtomicBool};
    ///
    /// // Get a pointer to an allocated value
    /// let ptr: *mut bool = Box::into_raw(Box::new(false));
    ///
    /// assert!(ptr.cast::<AtomicBool>().is_aligned());
    ///
    /// {
    ///     // Create an atomic view of the allocated value
    ///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
    ///
    ///     // Use `atomic` for atomic operations, possibly share it with other threads
    ///     atomic.store(true, atomic::Ordering::Relaxed);
    /// }
    ///
    /// // It's ok to non-atomically access the value behind `ptr`,
    /// // since the reference to the atomic ended its lifetime in the block above
    /// assert_eq!(unsafe { *ptr }, true);
    ///
    /// // Deallocate the value
    /// unsafe { drop(Box::from_raw(ptr)) }
    /// ```
    ///
    /// # Safety
    ///
    /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that this is always true, since
    ///   `align_of::<AtomicBool>() == 1`).
    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
    ///   allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
    ///   sizes, without synchronization.
    ///
    /// [valid]: crate::ptr#safety
    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
    #[inline]
    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
    pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
        // SAFETY: guaranteed by the caller
        unsafe { &*ptr.cast() }
    }

    /// Returns a mutable reference to the underlying [`bool`].
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = AtomicBool::new(true);
    /// assert_eq!(*some_bool.get_mut(), true);
    /// *some_bool.get_mut() = false;
    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    pub fn get_mut(&mut self) -> &mut bool {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(self.v.get() as *mut bool) }
    }

    /// Gets atomic access to a `&mut bool`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = true;
    /// let a = AtomicBool::from_mut(&mut some_bool);
    /// a.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool, false);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut(v: &mut bool) -> &mut Self {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut bool as *mut Self) }
    }

    /// Gets non-atomic access to a `&mut [AtomicBool]` slice.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```ignore-wasm
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
    ///
    /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
    /// assert_eq!(view, [false; 10]);
    /// view[..5].copy_from_slice(&[true; 5]);
    ///
    /// std::thread::scope(|s| {
    ///     for t in &some_bools[..5] {
    ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
    ///     }
    ///
    ///     for f in &some_bools[5..] {
    ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
    ///     }
    /// });
    /// ```
    #[inline]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
    }

    /// Gets atomic access to a `&mut [bool]` slice.
    ///
    /// # Examples
    ///
    /// ```rust,ignore-wasm
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [false; 10];
    /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
    /// std::thread::scope(|s| {
    ///     for i in 0..a.len() {
    ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
    ///     }
    /// });
    /// assert_eq!(some_bools, [true; 10]);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
    }

    /// Consumes the atomic and returns the contained value.
    ///
    /// This is safe because passing `self` by value guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let some_bool = AtomicBool::new(true);
    /// assert_eq!(some_bool.into_inner(), true);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
    pub const fn into_inner(self) -> bool {
        self.v.into_inner() != 0
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn load(&self, order: Ordering) -> bool {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe { atomic_load(self.v.get(), order) != 0 }
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn store(&self, val: bool, order: Ordering) {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe {
            atomic_store(self.v.get(), val as u8, order);
        }
    }

    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        if EMULATE_ATOMIC_BOOL {
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is always the previous value. If it is equal to `current`, then the value
    /// was updated.
    ///
    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
    /// happens, and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
    ///
    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
    /// memory orderings:
    ///
    /// Original | Success | Failure
    /// -------- | ------- | -------
    /// Relaxed  | Relaxed | Relaxed
    /// Acquire  | Acquire | Acquire
    /// Release  | Release | Relaxed
    /// AcqRel   | AcqRel  | Acquire
    /// SeqCst   | SeqCst  | SeqCst
    ///
    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
    /// rather than to infer success vs failure based on the value that was read.
    ///
    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
    /// which allows the compiler to generate better assembly code when the compare and swap
    /// is used in a loop.
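    ///
    /// A minimal sketch of the migration (the orderings follow the table above):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let a = AtomicBool::new(true);
    /// // Before: a.compare_and_swap(true, false, Ordering::Relaxed)
    /// let prev = a
    ///     .compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed)
    ///     .unwrap_or_else(|x| x);
    /// assert_eq!(prev, true);
    /// ```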
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[cfg(not(feature = "ferrocene_certified"))]
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[deprecated(
        since = "1.50.0",
        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
    )]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
            Ok(x) => x,
            Err(x) => x,
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_exchange(true,
    ///                                       false,
    ///                                       Ordering::Acquire,
    ///                                       Ordering::Relaxed),
    ///            Ok(true));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_exchange(true, true,
    ///                                       Ordering::SeqCst,
    ///                                       Ordering::Acquire),
    ///            Err(false));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    ///
    /// # Considerations
    ///
    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
    /// of CAS operations. In particular, a load of the value followed by a successful
    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
    /// changed the value in the interim. This is usually important when the *equality* check in
    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
    /// does not necessarily imply identity. In this case, `compare_exchange` can lead to the
    /// [ABA problem].
    ///
    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            // Pick the strongest ordering from success and failure.
            let order = match (success, failure) {
                (SeqCst, _) => SeqCst,
                (_, SeqCst) => SeqCst,
                (AcqRel, _) => AcqRel,
                (_, AcqRel) => {
                    panic!("there is no such thing as an acquire-release failure ordering")
                }
                (Release, Acquire) => AcqRel,
                (Acquire, _) => Acquire,
                (_, Acquire) => Acquire,
                (Release, Relaxed) => Release,
                (_, Release) => panic!("there is no such thing as a release failure ordering"),
                (Relaxed, Relaxed) => Relaxed,
            };
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            match unsafe {
                atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
            } {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    ///
    /// # Considerations
    ///
    /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual
    /// downsides of CAS operations. In particular, a load of the value followed by a successful
    /// `compare_exchange_weak` with the previous load *does not ensure* that other threads have not
    /// changed the value in the interim. This is usually important when the *equality* check in
    /// the `compare_exchange_weak` is being used to check the *identity* of a value, but equality
    /// does not necessarily imply identity. In this case, `compare_exchange_weak` can lead to the
    /// [ABA problem].
    ///
    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            return self.compare_exchange(current, new, success, failure);
        }

        // SAFETY: data races are prevented by atomic intrinsics.
        match unsafe {
            atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
        } {
            Ok(x) => Ok(x != 0),
            Err(x) => Err(x != 0),
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
        // We can't use atomic_nand here because it can result in a bool with
        // an invalid value. This happens because the atomic operation is done
        // with an 8-bit integer internally, which would set the upper 7 bits.
        // So we just use fetch_xor or swap instead.
        if val {
            // !(x & true) == !x
            // We must invert the bool.
            self.fetch_xor(true, order)
        } else {
            // !(x & false) == true
            // We must set the bool to true.
            self.swap(true, order)
        }
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
    }
1241
1242    /// Logical "not" with a boolean value.
1243    ///
1244    /// Performs a logical "not" operation on the current value, and sets
1245    /// the new value to the result.
1246    ///
1247    /// Returns the previous value.
1248    ///
1249    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1250    /// of this operation. All ordering modes are possible. Note that using
1251    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1252    /// using [`Release`] makes the load part [`Relaxed`].
1253    ///
1254    /// **Note:** This method is only available on platforms that support atomic
1255    /// operations on `u8`.
1256    ///
1257    /// # Examples
1258    ///
1259    /// ```
1260    /// use std::sync::atomic::{AtomicBool, Ordering};
1261    ///
1262    /// let foo = AtomicBool::new(true);
1263    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1264    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1265    ///
1266    /// let foo = AtomicBool::new(false);
1267    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1268    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1269    /// ```
1270    #[inline]
1271    #[stable(feature = "atomic_bool_fetch_not", since = "1.81.0")]
1272    #[cfg(target_has_atomic = "8")]
1273    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1274    pub fn fetch_not(&self, order: Ordering) -> bool {
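        // XOR with `true` flips the stored bit; `fetch_xor` returns the
        // previous value, which is exactly what `fetch_not` must return.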
1275        self.fetch_xor(true, order)
1276    }
1277
1278    /// Returns a mutable pointer to the underlying [`bool`].
1279    ///
1280    /// Doing non-atomic reads and writes on the resulting boolean can be a data race.
1281    /// This method is mostly useful for FFI, where the function signature may use
1282    /// `*mut bool` instead of `&AtomicBool`.
1283    ///
    /// Returning a `*mut` pointer from a shared reference to this atomic is safe because the
1285    /// atomic types work with interior mutability. All modifications of an atomic change the value
1286    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
1287    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
1288    /// requirements of the [memory model].
1289    ///
1290    /// # Examples
1291    ///
1292    /// ```ignore (extern-declaration)
1293    /// # fn main() {
1294    /// use std::sync::atomic::AtomicBool;
1295    ///
1296    /// extern "C" {
1297    ///     fn my_atomic_op(arg: *mut bool);
1298    /// }
1299    ///
1300    /// let mut atomic = AtomicBool::new(true);
1301    /// unsafe {
1302    ///     my_atomic_op(atomic.as_ptr());
1303    /// }
1304    /// # }
1305    /// ```
1306    ///
1307    /// [memory model]: self#memory-model-for-atomic-accesses
1308    #[inline]
1309    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
1310    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
1311    #[rustc_never_returns_null_ptr]
1312    pub const fn as_ptr(&self) -> *mut bool {
1313        self.v.get().cast()
1314    }
1315
1316    /// Fetches the value, and applies a function to it that returns an optional
1317    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1318    /// returned `Some(_)`, else `Err(previous_value)`.
1319    ///
1320    /// Note: This may call the function multiple times if the value has been
1321    /// changed from other threads in the meantime, as long as the function
1322    /// returns `Some(_)`, but the function will have been applied only once to
1323    /// the stored value.
1324    ///
1325    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1326    /// ordering of this operation. The first describes the required ordering for
1327    /// when the operation finally succeeds while the second describes the
1328    /// required ordering for loads. These correspond to the success and failure
1329    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1330    ///
1331    /// Using [`Acquire`] as success ordering makes the store part of this
1332    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1333    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1334    /// [`Acquire`] or [`Relaxed`].
1335    ///
1336    /// **Note:** This method is only available on platforms that support atomic
1337    /// operations on `u8`.
1338    ///
1339    /// # Considerations
1340    ///
1341    /// This method is not magic; it is not provided by the hardware, and does not act like a
1342    /// critical section or mutex.
1343    ///
1344    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1345    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1346    ///
1347    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1348    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1349    ///
1350    /// # Examples
1351    ///
1352    /// ```rust
1353    /// use std::sync::atomic::{AtomicBool, Ordering};
1354    ///
1355    /// let x = AtomicBool::new(false);
1356    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1357    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1358    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1359    /// assert_eq!(x.load(Ordering::SeqCst), false);
1360    /// ```
1361    #[inline]
1362    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1363    #[cfg(target_has_atomic = "8")]
1364    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1365    pub fn fetch_update<F>(
1366        &self,
1367        set_order: Ordering,
1368        fetch_order: Ordering,
1369        mut f: F,
1370    ) -> Result<bool, bool>
1371    where
1372        F: FnMut(bool) -> Option<bool>,
1373    {
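        // CAS loop: load the current value, let `f` propose a replacement, and
        // retry with the freshly observed value whenever another thread races
        // in between our load and the exchange.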
1374        let mut prev = self.load(fetch_order);
1375        while let Some(next) = f(prev) {
1376            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1377                x @ Ok(_) => return x,
1378                Err(next_prev) => prev = next_prev,
1379            }
1380        }
1381        Err(prev)
1382    }
1383
1384    /// Fetches the value, and applies a function to it that returns an optional
1385    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1386    /// returned `Some(_)`, else `Err(previous_value)`.
1387    ///
1388    /// See also: [`update`](`AtomicBool::update`).
1389    ///
1390    /// Note: This may call the function multiple times if the value has been
1391    /// changed from other threads in the meantime, as long as the function
1392    /// returns `Some(_)`, but the function will have been applied only once to
1393    /// the stored value.
1394    ///
1395    /// `try_update` takes two [`Ordering`] arguments to describe the memory
1396    /// ordering of this operation. The first describes the required ordering for
1397    /// when the operation finally succeeds while the second describes the
1398    /// required ordering for loads. These correspond to the success and failure
1399    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1400    ///
1401    /// Using [`Acquire`] as success ordering makes the store part of this
1402    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1403    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1404    /// [`Acquire`] or [`Relaxed`].
1405    ///
1406    /// **Note:** This method is only available on platforms that support atomic
1407    /// operations on `u8`.
1408    ///
1409    /// # Considerations
1410    ///
1411    /// This method is not magic; it is not provided by the hardware, and does not act like a
1412    /// critical section or mutex.
1413    ///
1414    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1415    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1416    ///
1417    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1418    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1419    ///
1420    /// # Examples
1421    ///
1422    /// ```rust
1423    /// #![feature(atomic_try_update)]
1424    /// use std::sync::atomic::{AtomicBool, Ordering};
1425    ///
1426    /// let x = AtomicBool::new(false);
1427    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1428    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1429    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1430    /// assert_eq!(x.load(Ordering::SeqCst), false);
1431    /// ```
1432    #[inline]
1433    #[unstable(feature = "atomic_try_update", issue = "135894")]
1434    #[cfg(target_has_atomic = "8")]
1435    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1436    pub fn try_update(
1437        &self,
1438        set_order: Ordering,
1439        fetch_order: Ordering,
1440        f: impl FnMut(bool) -> Option<bool>,
1441    ) -> Result<bool, bool> {
1442        // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
1443        //      when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
1444        self.fetch_update(set_order, fetch_order, f)
1445    }
1446
    /// Fetches the value, and applies a function to it that returns a new
    /// value. The new value is stored and the old value is returned.
1449    ///
1450    /// See also: [`try_update`](`AtomicBool::try_update`).
1451    ///
1452    /// Note: This may call the function multiple times if the value has been changed from other threads in
1453    /// the meantime, but the function will have been applied only once to the stored value.
1454    ///
1455    /// `update` takes two [`Ordering`] arguments to describe the memory
1456    /// ordering of this operation. The first describes the required ordering for
1457    /// when the operation finally succeeds while the second describes the
1458    /// required ordering for loads. These correspond to the success and failure
1459    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1460    ///
1461    /// Using [`Acquire`] as success ordering makes the store part
1462    /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
1463    /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1464    ///
1465    /// **Note:** This method is only available on platforms that support atomic operations on `u8`.
1466    ///
1467    /// # Considerations
1468    ///
1469    /// This method is not magic; it is not provided by the hardware, and does not act like a
1470    /// critical section or mutex.
1471    ///
1472    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1473    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1474    ///
1475    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1476    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1477    ///
1478    /// # Examples
1479    ///
1480    /// ```rust
1481    /// #![feature(atomic_try_update)]
1482    ///
1483    /// use std::sync::atomic::{AtomicBool, Ordering};
1484    ///
1485    /// let x = AtomicBool::new(false);
1486    /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), false);
1487    /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), true);
1488    /// assert_eq!(x.load(Ordering::SeqCst), false);
1489    /// ```
1490    #[inline]
1491    #[unstable(feature = "atomic_try_update", issue = "135894")]
1492    #[cfg(target_has_atomic = "8")]
1493    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1494    pub fn update(
1495        &self,
1496        set_order: Ordering,
1497        fetch_order: Ordering,
1498        mut f: impl FnMut(bool) -> bool,
1499    ) -> bool {
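        // Like `fetch_update`, but `f` is infallible, so the loop only ends
        // once the compare-exchange succeeds.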
1500        let mut prev = self.load(fetch_order);
1501        loop {
1502            match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
1503                Ok(x) => break x,
1504                Err(next_prev) => prev = next_prev,
1505            }
1506        }
1507    }
1508}
1509
1510#[cfg(target_has_atomic_load_store = "ptr")]
1511#[cfg(not(feature = "ferrocene_certified"))]
1512impl<T> AtomicPtr<T> {
1513    /// Creates a new `AtomicPtr`.
1514    ///
1515    /// # Examples
1516    ///
1517    /// ```
1518    /// use std::sync::atomic::AtomicPtr;
1519    ///
1520    /// let ptr = &mut 5;
1521    /// let atomic_ptr = AtomicPtr::new(ptr);
1522    /// ```
1523    #[inline]
1524    #[stable(feature = "rust1", since = "1.0.0")]
1525    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
1526    pub const fn new(p: *mut T) -> AtomicPtr<T> {
1527        AtomicPtr { p: UnsafeCell::new(p) }
1528    }
1529
1530    /// Creates a new `AtomicPtr` from a pointer.
1531    ///
1532    /// # Examples
1533    ///
1534    /// ```
1535    /// use std::sync::atomic::{self, AtomicPtr};
1536    ///
1537    /// // Get a pointer to an allocated value
1538    /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
1539    ///
1540    /// assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
1541    ///
1542    /// {
1543    ///     // Create an atomic view of the allocated value
1544    ///     let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
1545    ///
1546    ///     // Use `atomic` for atomic operations, possibly share it with other threads
1547    ///     atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
1548    /// }
1549    ///
1550    /// // It's ok to non-atomically access the value behind `ptr`,
1551    /// // since the reference to the atomic ended its lifetime in the block above
1552    /// assert!(!unsafe { *ptr }.is_null());
1553    ///
1554    /// // Deallocate the value
1555    /// unsafe { drop(Box::from_raw(ptr)) }
1556    /// ```
1557    ///
1558    /// # Safety
1559    ///
1560    /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1561    ///   can be bigger than `align_of::<*mut T>()`).
1562    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1563    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
1564    ///   allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
1565    ///   sizes, without synchronization.
1566    ///
1567    /// [valid]: crate::ptr#safety
1568    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
1569    #[inline]
1570    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
1571    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
1572    pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
1573        // SAFETY: guaranteed by the caller
1574        unsafe { &*ptr.cast() }
1575    }
1576
1577    /// Returns a mutable reference to the underlying pointer.
1578    ///
1579    /// This is safe because the mutable reference guarantees that no other threads are
1580    /// concurrently accessing the atomic data.
1581    ///
1582    /// # Examples
1583    ///
1584    /// ```
1585    /// use std::sync::atomic::{AtomicPtr, Ordering};
1586    ///
1587    /// let mut data = 10;
1588    /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1589    /// let mut other_data = 5;
1590    /// *atomic_ptr.get_mut() = &mut other_data;
1591    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1592    /// ```
1593    #[inline]
1594    #[stable(feature = "atomic_access", since = "1.15.0")]
1595    pub fn get_mut(&mut self) -> &mut *mut T {
1596        self.p.get_mut()
1597    }
1598
1599    /// Gets atomic access to a pointer.
1600    ///
    /// **Note:** This function is only available on targets where `AtomicPtr<T>` has the same alignment as `*const T`.
1602    ///
1603    /// # Examples
1604    ///
1605    /// ```
1606    /// #![feature(atomic_from_mut)]
1607    /// use std::sync::atomic::{AtomicPtr, Ordering};
1608    ///
1609    /// let mut data = 123;
1610    /// let mut some_ptr = &mut data as *mut i32;
1611    /// let a = AtomicPtr::from_mut(&mut some_ptr);
1612    /// let mut other_data = 456;
1613    /// a.store(&mut other_data, Ordering::Relaxed);
1614    /// assert_eq!(unsafe { *some_ptr }, 456);
1615    /// ```
1616    #[inline]
1617    #[cfg(target_has_atomic_equal_alignment = "ptr")]
1618    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1619    pub fn from_mut(v: &mut *mut T) -> &mut Self {
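        // Compile-time alignment check: the empty-array pattern only matches
        // `[(); 0]`, so this fails to build unless the two alignments are
        // equal (a larger atomic alignment leaves a non-empty array, and a
        // smaller one makes the subtraction underflow in const evaluation).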
1620        let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
1621        // SAFETY:
1622        //  - the mutable reference guarantees unique ownership.
1623        //  - the alignment of `*mut T` and `Self` is the same on all platforms
1624        //    supported by rust, as verified above.
1625        unsafe { &mut *(v as *mut *mut T as *mut Self) }
1626    }
1627
1628    /// Gets non-atomic access to a `&mut [AtomicPtr]` slice.
1629    ///
1630    /// This is safe because the mutable reference guarantees that no other threads are
1631    /// concurrently accessing the atomic data.
1632    ///
1633    /// # Examples
1634    ///
1635    /// ```ignore-wasm
1636    /// #![feature(atomic_from_mut)]
1637    /// use std::ptr::null_mut;
1638    /// use std::sync::atomic::{AtomicPtr, Ordering};
1639    ///
1640    /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
1641    ///
1642    /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
1643    /// assert_eq!(view, [null_mut::<String>(); 10]);
1644    /// view
1645    ///     .iter_mut()
1646    ///     .enumerate()
1647    ///     .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
1648    ///
1649    /// std::thread::scope(|s| {
1650    ///     for ptr in &some_ptrs {
1651    ///         s.spawn(move || {
1652    ///             let ptr = ptr.load(Ordering::Relaxed);
1653    ///             assert!(!ptr.is_null());
1654    ///
1655    ///             let name = unsafe { Box::from_raw(ptr) };
1656    ///             println!("Hello, {name}!");
1657    ///         });
1658    ///     }
1659    /// });
1660    /// ```
1661    #[inline]
1662    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1663    pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
1664        // SAFETY: the mutable reference guarantees unique ownership.
1665        unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
1666    }
1667
1668    /// Gets atomic access to a slice of pointers.
1669    ///
    /// **Note:** This function is only available on targets where `AtomicPtr<T>` has the same alignment as `*const T`.
1671    ///
1672    /// # Examples
1673    ///
1674    /// ```ignore-wasm
1675    /// #![feature(atomic_from_mut)]
1676    /// use std::ptr::null_mut;
1677    /// use std::sync::atomic::{AtomicPtr, Ordering};
1678    ///
1679    /// let mut some_ptrs = [null_mut::<String>(); 10];
1680    /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
1681    /// std::thread::scope(|s| {
1682    ///     for i in 0..a.len() {
1683    ///         s.spawn(move || {
1684    ///             let name = Box::new(format!("thread{i}"));
1685    ///             a[i].store(Box::into_raw(name), Ordering::Relaxed);
1686    ///         });
1687    ///     }
1688    /// });
1689    /// for p in some_ptrs {
1690    ///     assert!(!p.is_null());
1691    ///     let name = unsafe { Box::from_raw(p) };
1692    ///     println!("Hello, {name}!");
1693    /// }
1694    /// ```
1695    #[inline]
1696    #[cfg(target_has_atomic_equal_alignment = "ptr")]
1697    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1698    pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
1699        // SAFETY:
1700        //  - the mutable reference guarantees unique ownership.
1701        //  - the alignment of `*mut T` and `Self` is the same on all platforms
1702        //    supported by rust, as verified above.
1703        unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
1704    }
1705
1706    /// Consumes the atomic and returns the contained value.
1707    ///
1708    /// This is safe because passing `self` by value guarantees that no other threads are
1709    /// concurrently accessing the atomic data.
1710    ///
1711    /// # Examples
1712    ///
1713    /// ```
1714    /// use std::sync::atomic::AtomicPtr;
1715    ///
1716    /// let mut data = 5;
1717    /// let atomic_ptr = AtomicPtr::new(&mut data);
1718    /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1719    /// ```
1720    #[inline]
1721    #[stable(feature = "atomic_access", since = "1.15.0")]
1722    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
1723    pub const fn into_inner(self) -> *mut T {
1724        self.p.into_inner()
1725    }
1726
1727    /// Loads a value from the pointer.
1728    ///
1729    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1730    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1731    ///
1732    /// # Panics
1733    ///
1734    /// Panics if `order` is [`Release`] or [`AcqRel`].
1735    ///
1736    /// # Examples
1737    ///
1738    /// ```
1739    /// use std::sync::atomic::{AtomicPtr, Ordering};
1740    ///
1741    /// let ptr = &mut 5;
1742    /// let some_ptr = AtomicPtr::new(ptr);
1743    ///
1744    /// let value = some_ptr.load(Ordering::Relaxed);
1745    /// ```
1746    #[inline]
1747    #[stable(feature = "rust1", since = "1.0.0")]
1748    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1749    pub fn load(&self, order: Ordering) -> *mut T {
1750        // SAFETY: data races are prevented by atomic intrinsics.
1751        unsafe { atomic_load(self.p.get(), order) }
1752    }
1753
1754    /// Stores a value into the pointer.
1755    ///
1756    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1757    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1758    ///
1759    /// # Panics
1760    ///
1761    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1762    ///
1763    /// # Examples
1764    ///
1765    /// ```
1766    /// use std::sync::atomic::{AtomicPtr, Ordering};
1767    ///
1768    /// let ptr = &mut 5;
1769    /// let some_ptr = AtomicPtr::new(ptr);
1770    ///
1771    /// let other_ptr = &mut 10;
1772    ///
1773    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1774    /// ```
1775    #[inline]
1776    #[stable(feature = "rust1", since = "1.0.0")]
1777    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1778    pub fn store(&self, ptr: *mut T, order: Ordering) {
1779        // SAFETY: data races are prevented by atomic intrinsics.
1780        unsafe {
1781            atomic_store(self.p.get(), ptr, order);
1782        }
1783    }
1784
1785    /// Stores a value into the pointer, returning the previous value.
1786    ///
1787    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1788    /// of this operation. All ordering modes are possible. Note that using
1789    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1790    /// using [`Release`] makes the load part [`Relaxed`].
1791    ///
1792    /// **Note:** This method is only available on platforms that support atomic
1793    /// operations on pointers.
1794    ///
1795    /// # Examples
1796    ///
1797    /// ```
1798    /// use std::sync::atomic::{AtomicPtr, Ordering};
1799    ///
1800    /// let ptr = &mut 5;
1801    /// let some_ptr = AtomicPtr::new(ptr);
1802    ///
1803    /// let other_ptr = &mut 10;
1804    ///
1805    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1806    /// ```
1807    #[inline]
1808    #[stable(feature = "rust1", since = "1.0.0")]
1809    #[cfg(target_has_atomic = "ptr")]
1810    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1811    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1812        // SAFETY: data races are prevented by atomic intrinsics.
1813        unsafe { atomic_swap(self.p.get(), ptr, order) }
1814    }
1815
1816    /// Stores a value into the pointer if the current value is the same as the `current` value.
1817    ///
1818    /// The return value is always the previous value. If it is equal to `current`, then the value
1819    /// was updated.
1820    ///
1821    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1822    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1823    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1824    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1825    /// happens, and using [`Release`] makes the load part [`Relaxed`].
1826    ///
1827    /// **Note:** This method is only available on platforms that support atomic
1828    /// operations on pointers.
1829    ///
1830    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1831    ///
1832    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1833    /// memory orderings:
1834    ///
1835    /// Original | Success | Failure
1836    /// -------- | ------- | -------
1837    /// Relaxed  | Relaxed | Relaxed
1838    /// Acquire  | Acquire | Acquire
1839    /// Release  | Release | Relaxed
1840    /// AcqRel   | AcqRel  | Acquire
1841    /// SeqCst   | SeqCst  | SeqCst
1842    ///
1843    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
1844    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
1845    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
1846    /// rather than to infer success vs failure based on the value that was read.
1847    ///
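    /// For instance, a `compare_and_swap` that used [`SeqCst`] might migrate as in
    /// the following sketch, where `some_ptr`, `current`, and `new` are placeholder
    /// names:
    ///
    /// ```ignore (illustrative-snippet)
    /// // Before:
    /// let prev = some_ptr.compare_and_swap(current, new, Ordering::SeqCst);
    /// let swapped = prev == current;
    ///
    /// // After: success and failure are explicit in the returned `Result`.
    /// let swapped = some_ptr
    ///     .compare_exchange(current, new, Ordering::SeqCst, Ordering::SeqCst)
    ///     .is_ok();
    /// ```
    ///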
1848    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
1849    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1850    /// which allows the compiler to generate better assembly code when the compare and swap
1851    /// is used in a loop.
1852    ///
1853    /// # Examples
1854    ///
1855    /// ```
1856    /// use std::sync::atomic::{AtomicPtr, Ordering};
1857    ///
1858    /// let ptr = &mut 5;
1859    /// let some_ptr = AtomicPtr::new(ptr);
1860    ///
1861    /// let other_ptr = &mut 10;
1862    ///
1863    /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1864    /// ```
1865    #[inline]
1866    #[stable(feature = "rust1", since = "1.0.0")]
1867    #[deprecated(
1868        since = "1.50.0",
1869        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1870    )]
1871    #[cfg(target_has_atomic = "ptr")]
1872    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1873    pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
1874        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1875            Ok(x) => x,
1876            Err(x) => x,
1877        }
1878    }
1879
1880    /// Stores a value into the pointer if the current value is the same as the `current` value.
1881    ///
1882    /// The return value is a result indicating whether the new value was written and containing
1883    /// the previous value. On success this value is guaranteed to be equal to `current`.
1884    ///
1885    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1886    /// ordering of this operation. `success` describes the required ordering for the
1887    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1888    /// `failure` describes the required ordering for the load operation that takes place when
1889    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1890    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1891    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1892    ///
1893    /// **Note:** This method is only available on platforms that support atomic
1894    /// operations on pointers.
1895    ///
1896    /// # Examples
1897    ///
1898    /// ```
1899    /// use std::sync::atomic::{AtomicPtr, Ordering};
1900    ///
1901    /// let ptr = &mut 5;
1902    /// let some_ptr = AtomicPtr::new(ptr);
1903    ///
1904    /// let other_ptr = &mut 10;
1905    ///
1906    /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1907    ///                                       Ordering::SeqCst, Ordering::Relaxed);
1908    /// ```
1909    ///
1910    /// # Considerations
1911    ///
1912    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
1913    /// of CAS operations. In particular, a load of the value followed by a successful
1914    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
1915    /// changed the value in the interim. This is usually important when the *equality* check in
1916    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
1917    /// does not necessarily imply identity. This is a particularly common case for pointers, as
1918    /// a pointer holding the same address does not imply that the same object exists at that
1919    /// address! In this case, `compare_exchange` can lead to the [ABA problem].
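    ///
    /// As a sketch of that hazard for a lock-free stack, where `head` is an
    /// `AtomicPtr` and `Node` with its `next` field are hypothetical:
    ///
    /// ```ignore (illustrative-sketch)
    /// let old = head.load(Ordering::Acquire);
    /// // Another thread pops `old`, frees it, and pushes a fresh node that the
    /// // allocator happens to place at the same address...
    /// let next = unsafe { (*old).next }; // ...so this read dangles,
    /// // ...yet the exchange still succeeds, because only addresses compare:
    /// head.compare_exchange(old, next, Ordering::AcqRel, Ordering::Acquire);
    /// ```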
1920    ///
1921    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1922    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1923    #[inline]
1924    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1925    #[cfg(target_has_atomic = "ptr")]
1926    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1927    pub fn compare_exchange(
1928        &self,
1929        current: *mut T,
1930        new: *mut T,
1931        success: Ordering,
1932        failure: Ordering,
1933    ) -> Result<*mut T, *mut T> {
1934        // SAFETY: data races are prevented by atomic intrinsics.
1935        unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
1936    }
1937
1938    /// Stores a value into the pointer if the current value is the same as the `current` value.
1939    ///
1940    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1941    /// comparison succeeds, which can result in more efficient code on some platforms. The
1942    /// return value is a result indicating whether the new value was written and containing the
1943    /// previous value.
1944    ///
1945    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1946    /// ordering of this operation. `success` describes the required ordering for the
1947    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1948    /// `failure` describes the required ordering for the load operation that takes place when
1949    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1950    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1951    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1952    ///
1953    /// **Note:** This method is only available on platforms that support atomic
1954    /// operations on pointers.
1955    ///
1956    /// # Examples
1957    ///
1958    /// ```
1959    /// use std::sync::atomic::{AtomicPtr, Ordering};
1960    ///
1961    /// let some_ptr = AtomicPtr::new(&mut 5);
1962    ///
1963    /// let new = &mut 10;
1964    /// let mut old = some_ptr.load(Ordering::Relaxed);
1965    /// loop {
1966    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1967    ///         Ok(_) => break,
1968    ///         Err(x) => old = x,
1969    ///     }
1970    /// }
1971    /// ```
1972    ///
1973    /// # Considerations
1974    ///
    /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual downsides
1976    /// of CAS operations. In particular, a load of the value followed by a successful
1977    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
1978    /// changed the value in the interim. This is usually important when the *equality* check in
1979    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
1980    /// does not necessarily imply identity. This is a particularly common case for pointers, as
1981    /// a pointer holding the same address does not imply that the same object exists at that
1982    /// address! In this case, `compare_exchange` can lead to the [ABA problem].
1983    ///
1984    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1985    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1986    #[inline]
1987    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1988    #[cfg(target_has_atomic = "ptr")]
1989    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1990    pub fn compare_exchange_weak(
1991        &self,
1992        current: *mut T,
1993        new: *mut T,
1994        success: Ordering,
1995        failure: Ordering,
1996    ) -> Result<*mut T, *mut T> {
1997        // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
1998        // but we know for sure that the pointer is valid (we just got it from
1999        // an `UnsafeCell` that we have by reference) and the atomic operation
2000        // itself allows us to safely mutate the `UnsafeCell` contents.
2001        unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
2002    }
2003
2004    /// Fetches the value, and applies a function to it that returns an optional
2005    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2006    /// returned `Some(_)`, else `Err(previous_value)`.
2007    ///
2008    /// Note: This may call the function multiple times if the value has been
2009    /// changed from other threads in the meantime, as long as the function
2010    /// returns `Some(_)`, but the function will have been applied only once to
2011    /// the stored value.
2012    ///
2013    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
2014    /// ordering of this operation. The first describes the required ordering for
2015    /// when the operation finally succeeds while the second describes the
2016    /// required ordering for loads. These correspond to the success and failure
2017    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2018    ///
2019    /// Using [`Acquire`] as success ordering makes the store part of this
2020    /// operation [`Relaxed`], and using [`Release`] makes the final successful
2021    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2022    /// [`Acquire`] or [`Relaxed`].
2023    ///
2024    /// **Note:** This method is only available on platforms that support atomic
2025    /// operations on pointers.
2026    ///
2027    /// # Considerations
2028    ///
2029    /// This method is not magic; it is not provided by the hardware, and does not act like a
2030    /// critical section or mutex.
2031    ///
2032    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2033    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2034    /// which is a particularly common pitfall for pointers!
2035    ///
2036    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2037    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2038    ///
2039    /// # Examples
2040    ///
2041    /// ```rust
2042    /// use std::sync::atomic::{AtomicPtr, Ordering};
2043    ///
2044    /// let ptr: *mut _ = &mut 5;
2045    /// let some_ptr = AtomicPtr::new(ptr);
2046    ///
2047    /// let new: *mut _ = &mut 10;
2048    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2049    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2050    ///     if x == ptr {
2051    ///         Some(new)
2052    ///     } else {
2053    ///         None
2054    ///     }
2055    /// });
2056    /// assert_eq!(result, Ok(ptr));
2057    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2058    /// ```
2059    #[inline]
2060    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
2061    #[cfg(target_has_atomic = "ptr")]
2062    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2063    pub fn fetch_update<F>(
2064        &self,
2065        set_order: Ordering,
2066        fetch_order: Ordering,
2067        mut f: F,
2068    ) -> Result<*mut T, *mut T>
2069    where
2070        F: FnMut(*mut T) -> Option<*mut T>,
2071    {
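        // Same CAS loop as `AtomicBool::fetch_update`, applied to the pointer
        // value.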
2072        let mut prev = self.load(fetch_order);
2073        while let Some(next) = f(prev) {
2074            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2075                x @ Ok(_) => return x,
2076                Err(next_prev) => prev = next_prev,
2077            }
2078        }
2079        Err(prev)
2080    }

    /// Fetches the value, and applies a function to it that returns an optional
2082    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2083    /// returned `Some(_)`, else `Err(previous_value)`.
2084    ///
2085    /// See also: [`update`](`AtomicPtr::update`).
2086    ///
2087    /// Note: This may call the function multiple times if the value has been
2088    /// changed from other threads in the meantime, as long as the function
2089    /// returns `Some(_)`, but the function will have been applied only once to
2090    /// the stored value.
2091    ///
2092    /// `try_update` takes two [`Ordering`] arguments to describe the memory
2093    /// ordering of this operation. The first describes the required ordering for
2094    /// when the operation finally succeeds while the second describes the
2095    /// required ordering for loads. These correspond to the success and failure
2096    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2097    ///
2098    /// Using [`Acquire`] as success ordering makes the store part of this
2099    /// operation [`Relaxed`], and using [`Release`] makes the final successful
2100    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2101    /// [`Acquire`] or [`Relaxed`].
2102    ///
2103    /// **Note:** This method is only available on platforms that support atomic
2104    /// operations on pointers.
2105    ///
2106    /// # Considerations
2107    ///
2108    /// This method is not magic; it is not provided by the hardware, and does not act like a
2109    /// critical section or mutex.
2110    ///
2111    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2112    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2113    /// which is a particularly common pitfall for pointers!
2114    ///
2115    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2116    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2117    ///
2118    /// # Examples
2119    ///
2120    /// ```rust
2121    /// #![feature(atomic_try_update)]
2122    /// use std::sync::atomic::{AtomicPtr, Ordering};
2123    ///
2124    /// let ptr: *mut _ = &mut 5;
2125    /// let some_ptr = AtomicPtr::new(ptr);
2126    ///
2127    /// let new: *mut _ = &mut 10;
2128    /// assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2129    /// let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2130    ///     if x == ptr {
2131    ///         Some(new)
2132    ///     } else {
2133    ///         None
2134    ///     }
2135    /// });
2136    /// assert_eq!(result, Ok(ptr));
2137    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2138    /// ```
2139    #[inline]
2140    #[unstable(feature = "atomic_try_update", issue = "135894")]
2141    #[cfg(target_has_atomic = "ptr")]
2142    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2143    pub fn try_update(
2144        &self,
2145        set_order: Ordering,
2146        fetch_order: Ordering,
2147        f: impl FnMut(*mut T) -> Option<*mut T>,
2148    ) -> Result<*mut T, *mut T> {
2149        // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
2150        //      when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
2151        self.fetch_update(set_order, fetch_order, f)
2152    }
2153
    /// Fetches the value, and applies a function to it that returns a new
    /// value. The new value is stored and the old value is returned.
2156    ///
2157    /// See also: [`try_update`](`AtomicPtr::try_update`).
2158    ///
2159    /// Note: This may call the function multiple times if the value has been changed from other threads in
2160    /// the meantime, but the function will have been applied only once to the stored value.
2161    ///
2162    /// `update` takes two [`Ordering`] arguments to describe the memory
2163    /// ordering of this operation. The first describes the required ordering for
2164    /// when the operation finally succeeds while the second describes the
2165    /// required ordering for loads. These correspond to the success and failure
2166    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2167    ///
2168    /// Using [`Acquire`] as success ordering makes the store part
2169    /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
2170    /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2171    ///
2172    /// **Note:** This method is only available on platforms that support atomic
2173    /// operations on pointers.
2174    ///
2175    /// # Considerations
2176    ///
2177    /// This method is not magic; it is not provided by the hardware, and does not act like a
2178    /// critical section or mutex.
2179    ///
2180    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2181    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2182    /// which is a particularly common pitfall for pointers!
2183    ///
2184    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2185    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2186    ///
2187    /// # Examples
2188    ///
2189    /// ```rust
2190    /// #![feature(atomic_try_update)]
2191    ///
2192    /// use std::sync::atomic::{AtomicPtr, Ordering};
2193    ///
2194    /// let ptr: *mut _ = &mut 5;
2195    /// let some_ptr = AtomicPtr::new(ptr);
2196    ///
2197    /// let new: *mut _ = &mut 10;
2198    /// let result = some_ptr.update(Ordering::SeqCst, Ordering::SeqCst, |_| new);
2199    /// assert_eq!(result, ptr);
2200    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2201    /// ```
2202    #[inline]
2203    #[unstable(feature = "atomic_try_update", issue = "135894")]
    #[cfg(target_has_atomic = "ptr")]
2205    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2206    pub fn update(
2207        &self,
2208        set_order: Ordering,
2209        fetch_order: Ordering,
2210        mut f: impl FnMut(*mut T) -> *mut T,
2211    ) -> *mut T {
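        // Same CAS loop as `AtomicBool::update`, applied to the pointer value.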
2212        let mut prev = self.load(fetch_order);
2213        loop {
2214            match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
2215                Ok(x) => break x,
2216                Err(next_prev) => prev = next_prev,
2217            }
2218        }
2219    }
2220
2221    /// Offsets the pointer's address by adding `val` (in units of `T`),
2222    /// returning the previous pointer.
2223    ///
2224    /// This is equivalent to using [`wrapping_add`] to atomically perform the
2225    /// equivalent of `ptr = ptr.wrapping_add(val);`.
2226    ///
2227    /// This method operates in units of `T`, which means that it cannot be used
2228    /// to offset the pointer by an amount which is not a multiple of
2229    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2230    /// work with a deliberately misaligned pointer. In such cases, you may use
2231    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2232    ///
2233    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2234    /// memory ordering of this operation. All ordering modes are possible. Note
2235    /// that using [`Acquire`] makes the store part of this operation
2236    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2237    ///
2238    /// **Note**: This method is only available on platforms that support atomic
2239    /// operations on [`AtomicPtr`].
2240    ///
2241    /// [`wrapping_add`]: pointer::wrapping_add
2242    ///
2243    /// # Examples
2244    ///
2245    /// ```
2246    /// use core::sync::atomic::{AtomicPtr, Ordering};
2247    ///
2248    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2249    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2250    /// // Note: units of `size_of::<i64>()`.
2251    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2252    /// ```
2253    #[inline]
2254    #[cfg(target_has_atomic = "ptr")]
2255    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2256    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2257    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
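        // Scale the element count into a byte count; `wrapping_mul` matches
        // the wrapping (overflow-tolerant) semantics documented above.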
2258        self.fetch_byte_add(val.wrapping_mul(size_of::<T>()), order)
2259    }
2260
2261    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2262    /// returning the previous pointer.
2263    ///
2264    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2265    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2266    ///
2267    /// This method operates in units of `T`, which means that it cannot be used
2268    /// to offset the pointer by an amount which is not a multiple of
2269    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2270    /// work with a deliberately misaligned pointer. In such cases, you may use
2271    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2272    ///
2273    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2274    /// ordering of this operation. All ordering modes are possible. Note that
2275    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2276    /// and using [`Release`] makes the load part [`Relaxed`].
2277    ///
2278    /// **Note**: This method is only available on platforms that support atomic
2279    /// operations on [`AtomicPtr`].
2280    ///
2281    /// [`wrapping_sub`]: pointer::wrapping_sub
2282    ///
2283    /// # Examples
2284    ///
2285    /// ```
2286    /// use core::sync::atomic::{AtomicPtr, Ordering};
2287    ///
2288    /// let array = [1i32, 2i32];
2289    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2290    ///
2291    /// assert!(core::ptr::eq(
2292    ///     atom.fetch_ptr_sub(1, Ordering::Relaxed),
2293    ///     &array[1],
2294    /// ));
2295    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2296    /// ```
2297    #[inline]
2298    #[cfg(target_has_atomic = "ptr")]
2299    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2300    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2301    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2302        self.fetch_byte_sub(val.wrapping_mul(size_of::<T>()), order)
2303    }
2304
2305    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2306    /// previous pointer.
2307    ///
2308    /// This is equivalent to using [`wrapping_byte_add`] to atomically
2309    /// perform `ptr = ptr.wrapping_byte_add(val)`.
2310    ///
2311    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2312    /// memory ordering of this operation. All ordering modes are possible. Note
2313    /// that using [`Acquire`] makes the store part of this operation
2314    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2315    ///
2316    /// **Note**: This method is only available on platforms that support atomic
2317    /// operations on [`AtomicPtr`].
2318    ///
2319    /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
2320    ///
2321    /// # Examples
2322    ///
2323    /// ```
2324    /// use core::sync::atomic::{AtomicPtr, Ordering};
2325    ///
2326    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2327    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2328    /// // Note: in units of bytes, not `size_of::<i64>()`.
2329    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2330    /// ```
2331    #[inline]
2332    #[cfg(target_has_atomic = "ptr")]
2333    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2334    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2335    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2336        // SAFETY: data races are prevented by atomic intrinsics.
2337        unsafe { atomic_add(self.p.get(), val, order).cast() }
2338    }
2339
2340    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2341    /// previous pointer.
2342    ///
2343    /// This is equivalent to using [`wrapping_byte_sub`] to atomically
2344    /// perform `ptr = ptr.wrapping_byte_sub(val)`.
2345    ///
2346    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2347    /// memory ordering of this operation. All ordering modes are possible. Note
2348    /// that using [`Acquire`] makes the store part of this operation
2349    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2350    ///
2351    /// **Note**: This method is only available on platforms that support atomic
2352    /// operations on [`AtomicPtr`].
2353    ///
2354    /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
2355    ///
2356    /// # Examples
2357    ///
2358    /// ```
2359    /// use core::sync::atomic::{AtomicPtr, Ordering};
2360    ///
2361    /// let mut arr = [0i64, 1];
2362    /// let atom = AtomicPtr::<i64>::new(&raw mut arr[1]);
2363    /// assert_eq!(atom.fetch_byte_sub(8, Ordering::Relaxed).addr(), (&raw const arr[1]).addr());
2364    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), (&raw const arr[0]).addr());
2365    /// ```
2366    #[inline]
2367    #[cfg(target_has_atomic = "ptr")]
2368    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2369    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2370    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2371        // SAFETY: data races are prevented by atomic intrinsics.
2372        unsafe { atomic_sub(self.p.get(), val, order).cast() }
2373    }
2374
2375    /// Performs a bitwise "or" operation on the address of the current pointer,
2376    /// and the argument `val`, and stores a pointer with provenance of the
2377    /// current pointer and the resulting address.
2378    ///
2379    /// This is equivalent to using [`map_addr`] to atomically perform
2380    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2381    /// pointer schemes to atomically set tag bits.
2382    ///
2383    /// **Caveat**: This operation returns the previous value. To compute the
2384    /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_or(val, order).map_addr(|a| a | val)`.
2386    ///
2387    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2388    /// ordering of this operation. All ordering modes are possible. Note that
2389    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2390    /// and using [`Release`] makes the load part [`Relaxed`].
2391    ///
2392    /// **Note**: This method is only available on platforms that support atomic
2393    /// operations on [`AtomicPtr`].
2394    ///
    /// This API and its claimed semantics are part of the Strict Provenance
    /// model; see the [module documentation for `ptr`][crate::ptr] for
    /// details.
2398    ///
2399    /// [`map_addr`]: pointer::map_addr
2400    ///
2401    /// # Examples
2402    ///
2403    /// ```
2404    /// use core::sync::atomic::{AtomicPtr, Ordering};
2405    ///
2406    /// let pointer = &mut 3i64 as *mut i64;
2407    ///
2408    /// let atom = AtomicPtr::<i64>::new(pointer);
2409    /// // Tag the bottom bit of the pointer.
2410    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2411    /// // Extract and untag.
2412    /// let tagged = atom.load(Ordering::Relaxed);
2413    /// assert_eq!(tagged.addr() & 1, 1);
2414    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2415    /// ```
2416    #[inline]
2417    #[cfg(target_has_atomic = "ptr")]
2418    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2419    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2420    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2421        // SAFETY: data races are prevented by atomic intrinsics.
2422        unsafe { atomic_or(self.p.get(), val, order).cast() }
2423    }
2424
2425    /// Performs a bitwise "and" operation on the address of the current
2426    /// pointer, and the argument `val`, and stores a pointer with provenance of
2427    /// the current pointer and the resulting address.
2428    ///
2429    /// This is equivalent to using [`map_addr`] to atomically perform
2430    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2431    /// pointer schemes to atomically unset tag bits.
2432    ///
2433    /// **Caveat**: This operation returns the previous value. To compute the
2434    /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_and(val, order).map_addr(|a| a & val)`.
2436    ///
2437    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2438    /// ordering of this operation. All ordering modes are possible. Note that
2439    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2440    /// and using [`Release`] makes the load part [`Relaxed`].
2441    ///
2442    /// **Note**: This method is only available on platforms that support atomic
2443    /// operations on [`AtomicPtr`].
2444    ///
    /// This API and its claimed semantics are part of the Strict Provenance
    /// model; see the [module documentation for `ptr`][crate::ptr] for
    /// details.
2448    ///
2449    /// [`map_addr`]: pointer::map_addr
2450    ///
2451    /// # Examples
2452    ///
2453    /// ```
2454    /// use core::sync::atomic::{AtomicPtr, Ordering};
2455    ///
2456    /// let pointer = &mut 3i64 as *mut i64;
2457    /// // A tagged pointer
2458    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2459    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2460    /// // Untag, and extract the previously tagged pointer.
2461    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
2462    ///     .map_addr(|a| a & !1);
2463    /// assert_eq!(untagged, pointer);
2464    /// ```
2465    #[inline]
2466    #[cfg(target_has_atomic = "ptr")]
2467    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2468    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2469    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2470        // SAFETY: data races are prevented by atomic intrinsics.
2471        unsafe { atomic_and(self.p.get(), val, order).cast() }
2472    }
2473
2474    /// Performs a bitwise "xor" operation on the address of the current
2475    /// pointer, and the argument `val`, and stores a pointer with provenance of
2476    /// the current pointer and the resulting address.
2477    ///
2478    /// This is equivalent to using [`map_addr`] to atomically perform
2479    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2480    /// pointer schemes to atomically toggle tag bits.
2481    ///
2482    /// **Caveat**: This operation returns the previous value. To compute the
2483    /// stored value without losing provenance, you may use [`map_addr`]. For
2484    /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2485    ///
2486    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2487    /// ordering of this operation. All ordering modes are possible. Note that
2488    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2489    /// and using [`Release`] makes the load part [`Relaxed`].
2490    ///
2491    /// **Note**: This method is only available on platforms that support atomic
2492    /// operations on [`AtomicPtr`].
2493    ///
2494    /// For details on the provenance semantics of this API, see the
2495    /// [module documentation for `ptr`][crate::ptr].
2497    ///
2498    /// [`map_addr`]: pointer::map_addr
2499    ///
2500    /// # Examples
2501    ///
2502    /// ```
2503    /// use core::sync::atomic::{AtomicPtr, Ordering};
2504    ///
2505    /// let pointer = &mut 3i64 as *mut i64;
2506    /// let atom = AtomicPtr::<i64>::new(pointer);
2507    ///
2508    /// // Toggle a tag bit on the pointer.
2509    /// atom.fetch_xor(1, Ordering::Relaxed);
2510    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2511    /// ```
2512    #[inline]
2513    #[cfg(target_has_atomic = "ptr")]
2514    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2515    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2516    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2517        // SAFETY: data races are prevented by atomic intrinsics.
2518        unsafe { atomic_xor(self.p.get(), val, order).cast() }
2519    }
2520
2521    /// Returns a mutable pointer to the underlying pointer.
2522    ///
2523    /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
2524    /// This method is mostly useful for FFI, where the function signature may use
2525    /// `*mut *mut T` instead of `&AtomicPtr<T>`.
2526    ///
2527    /// Returning a `*mut` pointer from a shared reference to this atomic is safe because the
2528    /// atomic types work with interior mutability. All modifications of an atomic change the value
2529    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2530    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
2531    /// requirements of the [memory model].
2532    ///
2533    /// # Examples
2534    ///
2535    /// ```ignore (extern-declaration)
2536    /// use std::sync::atomic::AtomicPtr;
2537    ///
2538    /// extern "C" {
2539    ///     fn my_atomic_op(arg: *mut *mut u32);
2540    /// }
2541    ///
2542    /// let mut value = 17;
2543    /// let atomic = AtomicPtr::new(&mut value);
2544    ///
2545    /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2546    /// unsafe {
2547    ///     my_atomic_op(atomic.as_ptr());
2548    /// }
2549    /// ```
2550    ///
2551    /// [memory model]: self#memory-model-for-atomic-accesses
2552    #[inline]
2553    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2554    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2555    #[rustc_never_returns_null_ptr]
2556    pub const fn as_ptr(&self) -> *mut *mut T {
2557        self.p.get()
2558    }
2559}
2560
2561#[cfg(target_has_atomic_load_store = "8")]
2562#[stable(feature = "atomic_bool_from", since = "1.24.0")]
2563#[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2564#[cfg(not(feature = "ferrocene_certified"))]
2565impl const From<bool> for AtomicBool {
2566    /// Converts a `bool` into an `AtomicBool`.
2567    ///
2568    /// # Examples
2569    ///
2570    /// ```
2571    /// use std::sync::atomic::AtomicBool;
2572    /// let atomic_bool = AtomicBool::from(true);
2573    /// assert_eq!(format!("{atomic_bool:?}"), "true")
2574    /// ```
2575    #[inline]
2576    fn from(b: bool) -> Self {
2577        Self::new(b)
2578    }
2579}
2580
2581#[cfg(target_has_atomic_load_store = "ptr")]
2582#[stable(feature = "atomic_from", since = "1.23.0")]
2583#[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2584#[cfg(not(feature = "ferrocene_certified"))]
2585impl<T> const From<*mut T> for AtomicPtr<T> {
2586    /// Converts a `*mut T` into an `AtomicPtr<T>`.
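    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut value = 5;
    /// let atomic_ptr = AtomicPtr::from(&mut value as *mut i32);
    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::Relaxed) }, 5);
    /// ```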
2587    #[inline]
2588    fn from(p: *mut T) -> Self {
2589        Self::new(p)
2590    }
2591}
2592
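// Doc helper: expands to the `yes = [...]` tokens for the 8-bit integer types
// (`u8`/`i8`) and to the `no = [...]` tokens for every other type, so the
// generated documentation can vary with the underlying integer.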
2593#[allow(unused_macros)] // This macro ends up being unused on some architectures.
2594macro_rules! if_8_bit {
2595    (u8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2596    (i8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2597    ($_:ident, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($no)*)?) };
2598}
2599
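// Generates an atomic integer type (such as `AtomicUsize`) along with its full
// method surface. The metavariables supply the per-type cfg gates, stability
// attributes, doc strings, min/max intrinsics, and required alignment.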
2600#[cfg(target_has_atomic_load_store)]
2601macro_rules! atomic_int {
2602    ($cfg_cas:meta,
2603     $cfg_align:meta,
2604     $stable:meta,
2605     $stable_cxchg:meta,
2606     $stable_debug:meta,
2607     $stable_access:meta,
2608     $stable_from:meta,
2609     $stable_nand:meta,
2610     $const_stable_new:meta,
2611     $const_stable_into_inner:meta,
2612     $diagnostic_item:meta,
2613     $s_int_type:literal,
2614     $extra_feature:expr,
2615     $min_fn:ident, $max_fn:ident,
2616     $align:expr,
2617     $int_type:ident $atomic_type:ident) => {
2618        /// An integer type which can be safely shared between threads.
2619        ///
2620        /// This type has the same
2621        #[doc = if_8_bit!(
2622            $int_type,
2623            yes = ["size, alignment, and bit validity"],
2624            no = ["size and bit validity"],
2625        )]
2626        /// as the underlying integer type, [`
2627        #[doc = $s_int_type]
2628        /// `].
2629        #[doc = if_8_bit! {
2630            $int_type,
2631            no = [
2632                "However, the alignment of this type is always equal to its ",
2633                "size, even on targets where [`", $s_int_type, "`] has a ",
2634                "lesser alignment."
2635            ],
2636        }]
2637        ///
2638        /// For more about the differences between atomic types and
2639        /// non-atomic types as well as information about the portability of
2640        /// this type, please see the [module-level documentation].
2641        ///
2642        /// **Note:** This type is only available on platforms that support
2643        /// atomic loads and stores of [`
2644        #[doc = $s_int_type]
2645        /// `].
2646        ///
2647        /// [module-level documentation]: crate::sync::atomic
2648        #[$stable]
2649        #[$diagnostic_item]
2650        #[repr(C, align($align))]
2651        pub struct $atomic_type {
2652            v: UnsafeCell<$int_type>,
2653        }
2654
2655        #[$stable]
2656        impl Default for $atomic_type {
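            #[doc = concat!("Creates an `", stringify!($atomic_type), "` initialized to `0`, the default value of `", stringify!($int_type), "`.")]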
2657            #[inline]
2658            fn default() -> Self {
2659                Self::new(Default::default())
2660            }
2661        }
2662
2663        #[$stable_from]
2664        #[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2665        impl const From<$int_type> for $atomic_type {
2666            #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
2667            #[inline]
2668            fn from(v: $int_type) -> Self { Self::new(v) }
2669        }
2670
2671        #[$stable_debug]
2672        #[cfg(not(feature = "ferrocene_certified"))]
2673        impl fmt::Debug for $atomic_type {
2674            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2675                fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2676            }
2677        }
2678
2679        // Send is implicitly implemented.
2680        #[$stable]
2681        unsafe impl Sync for $atomic_type {}
2682
2683        impl $atomic_type {
2684            /// Creates a new atomic integer.
2685            ///
2686            /// # Examples
2687            ///
2688            /// ```
2689            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2690            ///
2691            #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2692            /// ```
2693            #[inline]
2694            #[$stable]
2695            #[$const_stable_new]
2696            #[must_use]
2697            pub const fn new(v: $int_type) -> Self {
2698                Self {v: UnsafeCell::new(v)}
2699            }
2700
2701            /// Creates a new reference to an atomic integer from a pointer.
2702            ///
2703            /// # Examples
2704            ///
2705            /// ```
2706            #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2707            ///
2708            /// // Get a pointer to an allocated value
2709            #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2710            ///
2711            #[doc = concat!("assert!(ptr.cast::<", stringify!($atomic_type), ">().is_aligned());")]
2712            ///
2713            /// {
2714            ///     // Create an atomic view of the allocated value
2715            // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2716            #[doc = concat!("    let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2717            ///
2718            ///     // Use `atomic` for atomic operations, possibly share it with other threads
2719            ///     atomic.store(1, atomic::Ordering::Relaxed);
2720            /// }
2721            ///
2722            /// // It's ok to non-atomically access the value behind `ptr`,
2723            /// // since the reference to the atomic ended its lifetime in the block above
2724            /// assert_eq!(unsafe { *ptr }, 1);
2725            ///
2726            /// // Deallocate the value
2727            /// unsafe { drop(Box::from_raw(ptr)) }
2728            /// ```
2729            ///
2730            /// # Safety
2731            ///
2732            /// * `ptr` must be aligned to
2733            #[doc = concat!("  `align_of::<", stringify!($atomic_type), ">()`")]
2734            #[doc = if_8_bit!{
2735                $int_type,
2736                yes = [
2737                    "  (note that this is always true, since `align_of::<",
2738                    stringify!($atomic_type), ">() == 1`)."
2739                ],
2740                no = [
2741                    "  (note that on some platforms this can be bigger than `align_of::<",
2742                    stringify!($int_type), ">()`)."
2743                ],
2744            }]
2745            /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2746            /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
2747            ///   allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
2748            ///   sizes, without synchronization.
2749            ///
2750            /// [valid]: crate::ptr#safety
2751            /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
2752            #[inline]
2753            #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
2754            #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
2755            pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2756                // SAFETY: guaranteed by the caller
2757                unsafe { &*ptr.cast() }
2758            }
2759
2761            /// Returns a mutable reference to the underlying integer.
2762            ///
2763            /// This is safe because the mutable reference guarantees that no other threads are
2764            /// concurrently accessing the atomic data.
2765            ///
2766            /// # Examples
2767            ///
2768            /// ```
2769            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2770            ///
2771            #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2772            /// assert_eq!(*some_var.get_mut(), 10);
2773            /// *some_var.get_mut() = 5;
2774            /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2775            /// ```
2776            #[inline]
2777            #[$stable_access]
2778            pub fn get_mut(&mut self) -> &mut $int_type {
2779                self.v.get_mut()
2780            }
2781
2782            #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2783            ///
2784            #[doc = if_8_bit! {
2785                $int_type,
2786                no = [
2787                    "**Note:** This function is only available on targets where `",
2788                    stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2789                ],
2790            }]
2791            ///
2792            /// # Examples
2793            ///
2794            /// ```
2795            /// #![feature(atomic_from_mut)]
2796            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2797            ///
2798            /// let mut some_int = 123;
2799            #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2800            /// a.store(100, Ordering::Relaxed);
2801            /// assert_eq!(some_int, 100);
2802            /// ```
2804            #[inline]
2805            #[$cfg_align]
2806            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2807            pub fn from_mut(v: &mut $int_type) -> &mut Self {
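                // Compile-time assertion that `Self` and `$int_type` have the
                // same alignment: if they differed, the array below would have
                // a nonzero length and the `[]` pattern would fail to compile.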
2808                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2809                // SAFETY:
2810                //  - the mutable reference guarantees unique ownership.
2811                //  - the alignment of `$int_type` and `Self` is the
2812                //    same, as promised by $cfg_align and verified above.
2813                unsafe { &mut *(v as *mut $int_type as *mut Self) }
2814            }
2815
2816            #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice")]
2817            ///
2818            /// This is safe because the mutable reference guarantees that no other threads are
2819            /// concurrently accessing the atomic data.
2820            ///
2821            /// # Examples
2822            ///
2823            /// ```ignore-wasm
2824            /// #![feature(atomic_from_mut)]
2825            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2826            ///
2827            #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2828            ///
2829            #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2830            /// assert_eq!(view, [0; 10]);
2831            /// view
2832            ///     .iter_mut()
2833            ///     .enumerate()
2834            ///     .for_each(|(idx, int)| *int = idx as _);
2835            ///
2836            /// std::thread::scope(|s| {
2837            ///     some_ints
2838            ///         .iter()
2839            ///         .enumerate()
2840            ///         .for_each(|(idx, int)| {
2841            ///             s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2842            ///         })
2843            /// });
2844            /// ```
2845            #[inline]
2846            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2847            pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2848                // SAFETY: the mutable reference guarantees unique ownership.
2849                unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2850            }
2851
2852            #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2853            ///
2854            #[doc = if_8_bit! {
2855                $int_type,
2856                no = [
2857                    "**Note:** This function is only available on targets where `",
2858                    stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2859                ],
2860            }]
2861            ///
2862            /// # Examples
2863            ///
2864            /// ```ignore-wasm
2865            /// #![feature(atomic_from_mut)]
2866            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2867            ///
2868            /// let mut some_ints = [0; 10];
2869            #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2870            /// std::thread::scope(|s| {
2871            ///     for i in 0..a.len() {
2872            ///         s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2873            ///     }
2874            /// });
2875            /// for (i, n) in some_ints.into_iter().enumerate() {
2876            ///     assert_eq!(i, n as usize);
2877            /// }
2878            /// ```
2879            #[inline]
2880            #[$cfg_align]
2881            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2882            pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
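                // Compile-time assertion that `Self` and `$int_type` have the
                // same alignment: if they differed, the array below would have
                // a nonzero length and the `[]` pattern would fail to compile.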
2883                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2884                // SAFETY:
2885                //  - the mutable reference guarantees unique ownership.
2886                //  - the alignment of `$int_type` and `Self` is the
2887                //    same, as promised by $cfg_align and verified above.
2888                unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2889            }
2890
2891            /// Consumes the atomic and returns the contained value.
2892            ///
2893            /// This is safe because passing `self` by value guarantees that no other threads are
2894            /// concurrently accessing the atomic data.
2895            ///
2896            /// # Examples
2897            ///
2898            /// ```
2899            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2900            ///
2901            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2902            /// assert_eq!(some_var.into_inner(), 5);
2903            /// ```
2904            #[inline]
2905            #[$stable_access]
2906            #[$const_stable_into_inner]
2907            pub const fn into_inner(self) -> $int_type {
2908                self.v.into_inner()
2909            }
2910
2911            /// Loads a value from the atomic integer.
2912            ///
2913            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2914            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2915            ///
2916            /// # Panics
2917            ///
2918            /// Panics if `order` is [`Release`] or [`AcqRel`].
2919            ///
2920            /// # Examples
2921            ///
2922            /// ```
2923            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2924            ///
2925            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2926            ///
2927            /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2928            /// ```
2929            #[inline]
2930            #[$stable]
2931            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2932            pub fn load(&self, order: Ordering) -> $int_type {
2933                // SAFETY: data races are prevented by atomic intrinsics.
2934                unsafe { atomic_load(self.v.get(), order) }
2935            }
2936
2937            /// Stores a value into the atomic integer.
2938            ///
2939            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2940            /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2941            ///
2942            /// # Panics
2943            ///
2944            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2945            ///
2946            /// # Examples
2947            ///
2948            /// ```
2949            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2950            ///
2951            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2952            ///
2953            /// some_var.store(10, Ordering::Relaxed);
2954            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2955            /// ```
2956            #[inline]
2957            #[$stable]
2958            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2959            pub fn store(&self, val: $int_type, order: Ordering) {
2960                // SAFETY: data races are prevented by atomic intrinsics.
2961                unsafe { atomic_store(self.v.get(), val, order); }
2962            }
2963
2964            /// Stores a value into the atomic integer, returning the previous value.
2965            ///
2966            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
2967            /// of this operation. All ordering modes are possible. Note that using
2968            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2969            /// using [`Release`] makes the load part [`Relaxed`].
2970            ///
2971            /// **Note**: This method is only available on platforms that support atomic operations on
2972            #[doc = concat!("[`", $s_int_type, "`].")]
2973            ///
2974            /// # Examples
2975            ///
2976            /// ```
2977            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2978            ///
2979            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2980            ///
2981            /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2982            /// ```
2983            #[inline]
2984            #[$stable]
2985            #[$cfg_cas]
2986            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2987            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2988                // SAFETY: data races are prevented by atomic intrinsics.
2989                unsafe { atomic_swap(self.v.get(), val, order) }
2990            }
2991
2992            /// Stores a value into the atomic integer if the current value is the same as
2993            /// the `current` value.
2994            ///
2995            /// The return value is always the previous value. If it is equal to `current`, then the
2996            /// value was updated.
2997            ///
2998            /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
2999            /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
3000            /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
3001            /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
3002            /// happens, and using [`Release`] makes the load part [`Relaxed`].
3003            ///
3004            /// **Note**: This method is only available on platforms that support atomic operations on
3005            #[doc = concat!("[`", $s_int_type, "`].")]
3006            ///
3007            /// # Migrating to `compare_exchange` and `compare_exchange_weak`
3008            ///
3009            /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
3010            /// memory orderings:
3011            ///
3012            /// Original | Success | Failure
3013            /// -------- | ------- | -------
3014            /// Relaxed  | Relaxed | Relaxed
3015            /// Acquire  | Acquire | Acquire
3016            /// Release  | Release | Relaxed
3017            /// AcqRel   | AcqRel  | Acquire
3018            /// SeqCst   | SeqCst  | SeqCst
3019            ///
3020            /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
3021            /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
3022            /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
3023            /// rather than to infer success vs failure based on the value that was read.
3024            ///
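            /// For example, migrating a `SeqCst` call might look like this:
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let a = ", stringify!($atomic_type), "::new(5);")]
            /// // Before: `let old = a.compare_and_swap(5, 10, Ordering::SeqCst);`
            /// let old = a.compare_exchange(5, 10, Ordering::SeqCst, Ordering::SeqCst)
            ///     .unwrap_or_else(|x| x);
            /// assert_eq!(old, 5);
            /// ```
            ///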
3025            /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
3026            /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
3027            /// which allows the compiler to generate better assembly code when the compare and swap
3028            /// is used in a loop.
3029            ///
3030            /// # Examples
3031            ///
3032            /// ```
3033            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3034            ///
3035            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3036            ///
3037            /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
3038            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3039            ///
3040            /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
3041            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3042            /// ```
3043            #[cfg(not(feature = "ferrocene_certified"))]
3044            #[inline]
3045            #[$stable]
3046            #[deprecated(
3047                since = "1.50.0",
3048                note = "Use `compare_exchange` or `compare_exchange_weak` instead")
3049            ]
3050            #[$cfg_cas]
3051            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3052            pub fn compare_and_swap(&self,
3053                                    current: $int_type,
3054                                    new: $int_type,
3055                                    order: Ordering) -> $int_type {
3056                match self.compare_exchange(current,
3057                                            new,
3058                                            order,
3059                                            strongest_failure_ordering(order)) {
3060                    Ok(x) => x,
3061                    Err(x) => x,
3062                }
3063            }
3064
3065            /// Stores a value into the atomic integer if the current value is the same as
3066            /// the `current` value.
3067            ///
3068            /// The return value is a result indicating whether the new value was written and
3069            /// containing the previous value. On success this value is guaranteed to be equal to
3070            /// `current`.
3071            ///
3072            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
3073            /// ordering of this operation. `success` describes the required ordering for the
3074            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3075            /// `failure` describes the required ordering for the load operation that takes place when
3076            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3077            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3078            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3079            ///
3080            /// **Note**: This method is only available on platforms that support atomic operations on
3081            #[doc = concat!("[`", $s_int_type, "`].")]
3082            ///
3083            /// # Examples
3084            ///
3085            /// ```
3086            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3087            ///
3088            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3089            ///
3090            /// assert_eq!(some_var.compare_exchange(5, 10,
3091            ///                                      Ordering::Acquire,
3092            ///                                      Ordering::Relaxed),
3093            ///            Ok(5));
3094            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3095            ///
3096            /// assert_eq!(some_var.compare_exchange(6, 12,
3097            ///                                      Ordering::SeqCst,
3098            ///                                      Ordering::Acquire),
3099            ///            Err(10));
3100            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3101            /// ```
3102            ///
3103            /// # Considerations
3104            ///
3105            /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
3106            /// of CAS operations. In particular, a load of the value followed by a successful
3107            /// `compare_exchange` with the previous load *does not ensure* that other threads have not
3108            /// changed the value in the interim! This is usually important when the *equality* check in
3109            /// the `compare_exchange` is being used to check the *identity* of a value, but equality
3110            /// does not necessarily imply identity. This is a particularly common case for pointers, as
3111            /// a pointer holding the same address does not imply that the same object exists at that
3112            /// address! In this case, `compare_exchange` can lead to the [ABA problem].
3113            ///
3114            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3115            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3116            #[inline]
3117            #[$stable_cxchg]
3118            #[$cfg_cas]
3119            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3120            pub fn compare_exchange(&self,
3121                                    current: $int_type,
3122                                    new: $int_type,
3123                                    success: Ordering,
3124                                    failure: Ordering) -> Result<$int_type, $int_type> {
3125                // SAFETY: data races are prevented by atomic intrinsics.
3126                unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
3127            }
3128
3129            /// Stores a value into the atomic integer if the current value is the same as
3130            /// the `current` value.
3131            ///
3132            #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
3133            /// this function is allowed to spuriously fail even
3134            /// when the comparison succeeds, which can result in more efficient code on some
3135            /// platforms. The return value is a result indicating whether the new value was
3136            /// written and containing the previous value.
3137            ///
3138            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3139            /// ordering of this operation. `success` describes the required ordering for the
3140            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3141            /// `failure` describes the required ordering for the load operation that takes place when
3142            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3143            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3144            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3145            ///
3146            /// **Note**: This method is only available on platforms that support atomic operations on
3147            #[doc = concat!("[`", $s_int_type, "`].")]
3148            ///
3149            /// # Examples
3150            ///
3151            /// ```
3152            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3153            ///
3154            #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
3155            ///
3156            /// let mut old = val.load(Ordering::Relaxed);
3157            /// loop {
3158            ///     let new = old * 2;
3159            ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3160            ///         Ok(_) => break,
3161            ///         Err(x) => old = x,
3162            ///     }
3163            /// }
3164            /// ```
3165            ///
3166            /// # Considerations
3167            ///
3168            /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual
3169            /// downsides of CAS operations. In particular, a load of the value followed by a successful
3170            /// `compare_exchange_weak` with the previous load *does not ensure* that other threads have
3171            /// not changed the value in the interim. This is usually important when the *equality* check
3172            /// in the `compare_exchange_weak` is being used to check the *identity* of a value, but
3173            /// equality does not necessarily imply identity. This is a particularly common case for
3174            /// pointers, as a pointer holding the same address does not imply that the same object
3175            /// exists at that address! In this case, `compare_exchange_weak` can lead to the [ABA problem].
3176            ///
3177            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3178            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3179            #[inline]
3180            #[$stable_cxchg]
3181            #[$cfg_cas]
3182            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3183            pub fn compare_exchange_weak(&self,
3184                                         current: $int_type,
3185                                         new: $int_type,
3186                                         success: Ordering,
3187                                         failure: Ordering) -> Result<$int_type, $int_type> {
3188                // SAFETY: data races are prevented by atomic intrinsics.
3189                unsafe {
3190                    atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
3191                }
3192            }
3193
3194            /// Adds to the current value, returning the previous value.
3195            ///
3196            /// This operation wraps around on overflow.
3197            ///
3198            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3199            /// of this operation. All ordering modes are possible. Note that using
3200            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3201            /// using [`Release`] makes the load part [`Relaxed`].
3202            ///
3203            /// **Note**: This method is only available on platforms that support atomic operations on
3204            #[doc = concat!("[`", $s_int_type, "`].")]
3205            ///
3206            /// # Examples
3207            ///
3208            /// ```
3209            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3210            ///
3211            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
3212            /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3213            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3214            /// ```
3215            #[inline]
3216            #[$stable]
3217            #[$cfg_cas]
3218            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3219            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3220                // SAFETY: data races are prevented by atomic intrinsics.
3221                unsafe { atomic_add(self.v.get(), val, order) }
3222            }
3223
3224            /// Subtracts from the current value, returning the previous value.
3225            ///
3226            /// This operation wraps around on overflow.
3227            ///
3228            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3229            /// of this operation. All ordering modes are possible. Note that using
3230            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3231            /// using [`Release`] makes the load part [`Relaxed`].
3232            ///
3233            /// **Note**: This method is only available on platforms that support atomic operations on
3234            #[doc = concat!("[`", $s_int_type, "`].")]
3235            ///
3236            /// # Examples
3237            ///
3238            /// ```
3239            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3240            ///
3241            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
3242            /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3243            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3244            /// ```
3245            #[inline]
3246            #[$stable]
3247            #[$cfg_cas]
3248            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3249            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3250                // SAFETY: data races are prevented by atomic intrinsics.
3251                unsafe { atomic_sub(self.v.get(), val, order) }
3252            }
3253
3254            /// Bitwise "and" with the current value.
3255            ///
3256            /// Performs a bitwise "and" operation on the current value and the argument `val`, and
3257            /// sets the new value to the result.
3258            ///
3259            /// Returns the previous value.
3260            ///
3261            /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3262            /// of this operation. All ordering modes are possible. Note that using
3263            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3264            /// using [`Release`] makes the load part [`Relaxed`].
3265            ///
3266            /// **Note**: This method is only available on platforms that support atomic operations on
3267            #[doc = concat!("[`", $s_int_type, "`].")]
3268            ///
3269            /// # Examples
3270            ///
3271            /// ```
3272            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3273            ///
3274            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3275            /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3276            /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3277            /// ```
3278            #[inline]
3279            #[$stable]
3280            #[$cfg_cas]
3281            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3282            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3283                // SAFETY: data races are prevented by atomic intrinsics.
3284                unsafe { atomic_and(self.v.get(), val, order) }
3285            }
3286
3287            /// Bitwise "nand" with the current value.
3288            ///
3289            /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
3290            /// sets the new value to the result.
3291            ///
3292            /// Returns the previous value.
3293            ///
3294            /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3295            /// of this operation. All ordering modes are possible. Note that using
3296            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3297            /// using [`Release`] makes the load part [`Relaxed`].
3298            ///
3299            /// **Note**: This method is only available on platforms that support atomic operations on
3300            #[doc = concat!("[`", $s_int_type, "`].")]
3301            ///
3302            /// # Examples
3303            ///
3304            /// ```
3305            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3306            ///
3307            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
3308            /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3309            /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3310            /// ```
3311            #[inline]
3312            #[$stable_nand]
3313            #[$cfg_cas]
3314            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3315            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3316                // SAFETY: data races are prevented by atomic intrinsics.
3317                unsafe { atomic_nand(self.v.get(), val, order) }
3318            }
3319
3320            /// Bitwise "or" with the current value.
3321            ///
3322            /// Performs a bitwise "or" operation on the current value and the argument `val`, and
3323            /// sets the new value to the result.
3324            ///
3325            /// Returns the previous value.
3326            ///
3327            /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3328            /// of this operation. All ordering modes are possible. Note that using
3329            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3330            /// using [`Release`] makes the load part [`Relaxed`].
3331            ///
3332            /// **Note**: This method is only available on platforms that support atomic operations on
3333            #[doc = concat!("[`", $s_int_type, "`].")]
3334            ///
3335            /// # Examples
3336            ///
3337            /// ```
3338            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3339            ///
3340            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3341            /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3342            /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3343            /// ```
3344            #[inline]
3345            #[$stable]
3346            #[$cfg_cas]
3347            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3348            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3349                // SAFETY: data races are prevented by atomic intrinsics.
3350                unsafe { atomic_or(self.v.get(), val, order) }
3351            }
3352
3353            /// Bitwise "xor" with the current value.
3354            ///
3355            /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
3356            /// sets the new value to the result.
3357            ///
3358            /// Returns the previous value.
3359            ///
3360            /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3361            /// of this operation. All ordering modes are possible. Note that using
3362            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3363            /// using [`Release`] makes the load part [`Relaxed`].
3364            ///
3365            /// **Note**: This method is only available on platforms that support atomic operations on
3366            #[doc = concat!("[`", $s_int_type, "`].")]
3367            ///
3368            /// # Examples
3369            ///
3370            /// ```
3371            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3372            ///
3373            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3374            /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3375            /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3376            /// ```
3377            #[inline]
3378            #[$stable]
3379            #[$cfg_cas]
3380            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3381            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3382                // SAFETY: data races are prevented by atomic intrinsics.
3383                unsafe { atomic_xor(self.v.get(), val, order) }
3384            }
3385
3386            /// Fetches the value, and applies a function to it that returns an optional
3387            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3388            /// `Err(previous_value)`.
3389            ///
3390            /// Note: This may call the function multiple times if the value has been changed from other threads in
3391            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3392            /// only once to the stored value.
3393            ///
3394            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3395            /// The first describes the required ordering for when the operation finally succeeds while the second
3396            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3397            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3398            /// respectively.
3399            ///
3400            /// Using [`Acquire`] as success ordering makes the store part
3401            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3402            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3403            ///
3404            /// **Note**: This method is only available on platforms that support atomic operations on
3405            #[doc = concat!("[`", $s_int_type, "`].")]
3406            ///
3407            /// # Considerations
3408            ///
3409            /// This method is not magic; it is not provided by the hardware, and does not act like a
3410            /// critical section or mutex.
3411            ///
3412            /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3413            /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3414            /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3415            /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3416            ///
3417            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3418            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3419            ///
3420            /// # Examples
3421            ///
3422            /// ```rust
3423            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3424            ///
3425            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3426            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3427            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3428            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3429            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3430            /// ```
3431            #[inline]
3432            #[stable(feature = "no_more_cas", since = "1.45.0")]
3433            #[$cfg_cas]
3434            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3435            pub fn fetch_update<F>(&self,
3436                                   set_order: Ordering,
3437                                   fetch_order: Ordering,
3438                                   mut f: F) -> Result<$int_type, $int_type>
3439            where F: FnMut($int_type) -> Option<$int_type> {
3440                let mut prev = self.load(fetch_order);
3441                while let Some(next) = f(prev) {
3442                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3443                        x @ Ok(_) => return x,
3444                        Err(next_prev) => prev = next_prev
3445                    }
3446                }
3447                Err(prev)
3448            }
3449
3450            /// Fetches the value, and applies a function to it that returns an optional
3451            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3452            /// `Err(previous_value)`.
3453            ///
3454            #[doc = concat!("See also: [`update`](`", stringify!($atomic_type), "::update`).")]
3455            ///
3456            /// Note: This may call the function multiple times if the value has been changed from other threads in
3457            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3458            /// only once to the stored value.
3459            ///
3460            /// `try_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3461            /// The first describes the required ordering for when the operation finally succeeds while the second
3462            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3463            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3464            /// respectively.
3465            ///
3466            /// Using [`Acquire`] as success ordering makes the store part
3467            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3468            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3469            ///
3470            /// **Note**: This method is only available on platforms that support atomic operations on
3471            #[doc = concat!("[`", $s_int_type, "`].")]
3472            ///
3473            /// # Considerations
3474            ///
3475            /// This method is not magic; it is not provided by the hardware, and does not act like a
3476            /// critical section or mutex.
3477            ///
3478            /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3479            /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3480            /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3481            /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3482            ///
3483            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3484            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3485            ///
3486            /// # Examples
3487            ///
3488            /// ```rust
3489            /// #![feature(atomic_try_update)]
3490            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3491            ///
3492            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3493            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3494            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3495            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3496            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3497            /// ```
3498            #[inline]
3499            #[unstable(feature = "atomic_try_update", issue = "135894")]
3500            #[$cfg_cas]
3501            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3502            pub fn try_update(
3503                &self,
3504                set_order: Ordering,
3505                fetch_order: Ordering,
3506                f: impl FnMut($int_type) -> Option<$int_type>,
3507            ) -> Result<$int_type, $int_type> {
3508                // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
3509                //      when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
3510                self.fetch_update(set_order, fetch_order, f)
3511            }
3512
3513            /// Fetches the value, applies a function to it that returns a new value.
3514            /// The new value is stored and the old value is returned.
3515            ///
3516            #[doc = concat!("See also: [`try_update`](`", stringify!($atomic_type), "::try_update`).")]
3517            ///
3518            /// Note: This may call the function multiple times if the value has been changed from other threads in
3519            /// the meantime, but the function will have been applied only once to the stored value.
3520            ///
3521            /// `update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3522            /// The first describes the required ordering for when the operation finally succeeds while the second
3523            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3524            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3525            /// respectively.
3526            ///
3527            /// Using [`Acquire`] as success ordering makes the store part
3528            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3529            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3530            ///
3531            /// **Note**: This method is only available on platforms that support atomic operations on
3532            #[doc = concat!("[`", $s_int_type, "`].")]
3533            ///
3534            /// # Considerations
3535            ///
3537            /// This method is not magic; it is not provided by the hardware, and does not act like a
3538            /// critical section or mutex.
3539            ///
3540            /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3541            /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3542            /// if this atomic integer is an index, or more generally if knowledge of only the *bitwise value*
3543            /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3544            ///
3545            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3546            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3547            ///
3548            /// # Examples
3549            ///
3550            /// ```rust
3551            /// #![feature(atomic_try_update)]
3552            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3553            ///
3554            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3555            /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 7);
3556            /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 8);
3557            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3558            /// ```
3559            #[inline]
3560            #[unstable(feature = "atomic_try_update", issue = "135894")]
3561            #[$cfg_cas]
3562            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3563            pub fn update(
3564                &self,
3565                set_order: Ordering,
3566                fetch_order: Ordering,
3567                mut f: impl FnMut($int_type) -> $int_type,
3568            ) -> $int_type {
3569                let mut prev = self.load(fetch_order);
3570                loop {
3571                    match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
3572                        Ok(x) => break x,
3573                        Err(next_prev) => prev = next_prev,
3574                    }
3575                }
3576            }
3577
3578            /// Maximum with the current value.
3579            ///
3580            /// Finds the maximum of the current value and the argument `val`, and
3581            /// sets the new value to the result.
3582            ///
3583            /// Returns the previous value.
3584            ///
3585            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3586            /// of this operation. All ordering modes are possible. Note that using
3587            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3588            /// using [`Release`] makes the load part [`Relaxed`].
3589            ///
3590            /// **Note**: This method is only available on platforms that support atomic operations on
3591            #[doc = concat!("[`", $s_int_type, "`].")]
3592            ///
3593            /// # Examples
3594            ///
3595            /// ```
3596            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3597            ///
3598            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3599            /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3600            /// assert_eq!(foo.load(Ordering::SeqCst), 42);
3601            /// ```
3602            ///
3603            /// If you want to obtain the maximum value in one step, you can use the following:
3604            ///
3605            /// ```
3606            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3607            ///
3608            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3609            /// let bar = 42;
3610            /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3611            /// assert_eq!(max_foo, 42);
3612            /// ```
3613            #[inline]
3614            #[stable(feature = "atomic_min_max", since = "1.45.0")]
3615            #[$cfg_cas]
3616            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3617            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3618                // SAFETY: data races are prevented by atomic intrinsics.
3619                unsafe { $max_fn(self.v.get(), val, order) }
3620            }
3621
3622            /// Minimum with the current value.
3623            ///
3624            /// Finds the minimum of the current value and the argument `val`, and
3625            /// sets the new value to the result.
3626            ///
3627            /// Returns the previous value.
3628            ///
3629            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3630            /// of this operation. All ordering modes are possible. Note that using
3631            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3632            /// using [`Release`] makes the load part [`Relaxed`].
3633            ///
3634            /// **Note**: This method is only available on platforms that support atomic operations on
3635            #[doc = concat!("[`", $s_int_type, "`].")]
3636            ///
3637            /// # Examples
3638            ///
3639            /// ```
3640            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3641            ///
3642            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3643            /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3644            /// assert_eq!(foo.load(Ordering::Relaxed), 23);
3645            /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3646            /// assert_eq!(foo.load(Ordering::Relaxed), 22);
3647            /// ```
3648            ///
3649            /// If you want to obtain the minimum value in one step, you can use the following:
3650            ///
3651            /// ```
3652            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3653            ///
3654            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3655            /// let bar = 12;
3656            /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3657            /// assert_eq!(min_foo, 12);
3658            /// ```
3659            #[inline]
3660            #[stable(feature = "atomic_min_max", since = "1.45.0")]
3661            #[$cfg_cas]
3662            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3663            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3664                // SAFETY: data races are prevented by atomic intrinsics.
3665                unsafe { $min_fn(self.v.get(), val, order) }
3666            }
3667
3668            /// Returns a mutable pointer to the underlying integer.
3669            ///
3670            /// Doing non-atomic reads and writes on the resulting integer can be a data race.
3671            /// This method is mostly useful for FFI, where the function signature may use
3672            #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
3673            ///
3674            /// Returning a `*mut` pointer from a shared reference to this atomic is safe because the
3675            /// atomic types work with interior mutability. All modifications of an atomic change the value
3676            /// through a shared reference, and can do so safely as long as they use atomic operations. Any
3677            /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
3678            /// requirements of the [memory model].
3679            ///
3680            /// # Examples
3681            ///
3682            /// ```ignore (extern-declaration)
3683            /// # fn main() {
3684            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
3685            ///
3686            /// extern "C" {
3687            #[doc = concat!("    fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
3688            /// }
3689            ///
3690            #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
3691            ///
3692            /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
3693            /// unsafe {
3694            ///     my_atomic_op(atomic.as_ptr());
3695            /// }
3696            /// # }
3697            /// ```
3698            ///
3699            /// [memory model]: self#memory-model-for-atomic-accesses
3700            #[inline]
3701            #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
3702            #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
3703            #[rustc_never_returns_null_ptr]
3704            pub const fn as_ptr(&self) -> *mut $int_type {
3705                self.v.get()
3706            }
3707        }
3708    }
3709}
3710
3711#[cfg(target_has_atomic_load_store = "8")]
3712#[cfg(not(feature = "ferrocene_certified"))]
3713atomic_int! {
3714    cfg(target_has_atomic = "8"),
3715    cfg(target_has_atomic_equal_alignment = "8"),
3716    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3717    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3718    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3719    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3720    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3721    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3722    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3723    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3724    rustc_diagnostic_item = "AtomicI8",
3725    "i8",
3726    "",
3727    atomic_min, atomic_max,
3728    1,
3729    i8 AtomicI8
3730}
3731#[cfg(target_has_atomic_load_store = "8")]
3732#[cfg(not(feature = "ferrocene_certified"))]
3733atomic_int! {
3734    cfg(target_has_atomic = "8"),
3735    cfg(target_has_atomic_equal_alignment = "8"),
3736    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3737    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3738    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3739    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3740    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3741    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3742    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3743    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3744    rustc_diagnostic_item = "AtomicU8",
3745    "u8",
3746    "",
3747    atomic_umin, atomic_umax,
3748    1,
3749    u8 AtomicU8
3750}
3751#[cfg(target_has_atomic_load_store = "16")]
3752#[cfg(not(feature = "ferrocene_certified"))]
3753atomic_int! {
3754    cfg(target_has_atomic = "16"),
3755    cfg(target_has_atomic_equal_alignment = "16"),
3756    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3757    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3758    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3759    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3760    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3761    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3762    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3763    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3764    rustc_diagnostic_item = "AtomicI16",
3765    "i16",
3766    "",
3767    atomic_min, atomic_max,
3768    2,
3769    i16 AtomicI16
3770}
3771#[cfg(target_has_atomic_load_store = "16")]
3772#[cfg(not(feature = "ferrocene_certified"))]
3773atomic_int! {
3774    cfg(target_has_atomic = "16"),
3775    cfg(target_has_atomic_equal_alignment = "16"),
3776    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3777    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3778    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3779    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3780    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3781    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3782    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3783    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3784    rustc_diagnostic_item = "AtomicU16",
3785    "u16",
3786    "",
3787    atomic_umin, atomic_umax,
3788    2,
3789    u16 AtomicU16
3790}
3791#[cfg(target_has_atomic_load_store = "32")]
3792#[cfg(not(feature = "ferrocene_certified"))]
3793atomic_int! {
3794    cfg(target_has_atomic = "32"),
3795    cfg(target_has_atomic_equal_alignment = "32"),
3796    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3797    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3798    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3799    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3800    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3801    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3802    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3803    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3804    rustc_diagnostic_item = "AtomicI32",
3805    "i32",
3806    "",
3807    atomic_min, atomic_max,
3808    4,
3809    i32 AtomicI32
3810}
3811#[cfg(target_has_atomic_load_store = "32")]
3812atomic_int! {
3813    cfg(target_has_atomic = "32"),
3814    cfg(target_has_atomic_equal_alignment = "32"),
3815    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3816    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3817    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3818    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3819    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3820    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3821    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3822    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3823    rustc_diagnostic_item = "AtomicU32",
3824    "u32",
3825    "",
3826    atomic_umin, atomic_umax,
3827    4,
3828    u32 AtomicU32
3829}
3830#[cfg(target_has_atomic_load_store = "64")]
3831#[cfg(not(feature = "ferrocene_certified"))]
3832atomic_int! {
3833    cfg(target_has_atomic = "64"),
3834    cfg(target_has_atomic_equal_alignment = "64"),
3835    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3836    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3837    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3838    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3839    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3840    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3841    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3842    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3843    rustc_diagnostic_item = "AtomicI64",
3844    "i64",
3845    "",
3846    atomic_min, atomic_max,
3847    8,
3848    i64 AtomicI64
3849}
3850#[cfg(target_has_atomic_load_store = "64")]
3851#[cfg(not(feature = "ferrocene_certified"))]
3852atomic_int! {
3853    cfg(target_has_atomic = "64"),
3854    cfg(target_has_atomic_equal_alignment = "64"),
3855    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3856    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3857    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3858    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3859    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3860    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3861    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3862    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3863    rustc_diagnostic_item = "AtomicU64",
3864    "u64",
3865    "",
3866    atomic_umin, atomic_umax,
3867    8,
3868    u64 AtomicU64
3869}
3870#[cfg(target_has_atomic_load_store = "128")]
3871#[cfg(not(feature = "ferrocene_certified"))]
3872atomic_int! {
3873    cfg(target_has_atomic = "128"),
3874    cfg(target_has_atomic_equal_alignment = "128"),
3875    unstable(feature = "integer_atomics", issue = "99069"),
3876    unstable(feature = "integer_atomics", issue = "99069"),
3877    unstable(feature = "integer_atomics", issue = "99069"),
3878    unstable(feature = "integer_atomics", issue = "99069"),
3879    unstable(feature = "integer_atomics", issue = "99069"),
3880    unstable(feature = "integer_atomics", issue = "99069"),
3881    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3882    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3883    rustc_diagnostic_item = "AtomicI128",
3884    "i128",
3885    "#![feature(integer_atomics)]\n\n",
3886    atomic_min, atomic_max,
3887    16,
3888    i128 AtomicI128
3889}
3890#[cfg(target_has_atomic_load_store = "128")]
3891#[cfg(not(feature = "ferrocene_certified"))]
3892atomic_int! {
3893    cfg(target_has_atomic = "128"),
3894    cfg(target_has_atomic_equal_alignment = "128"),
3895    unstable(feature = "integer_atomics", issue = "99069"),
3896    unstable(feature = "integer_atomics", issue = "99069"),
3897    unstable(feature = "integer_atomics", issue = "99069"),
3898    unstable(feature = "integer_atomics", issue = "99069"),
3899    unstable(feature = "integer_atomics", issue = "99069"),
3900    unstable(feature = "integer_atomics", issue = "99069"),
3901    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3902    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3903    rustc_diagnostic_item = "AtomicU128",
3904    "u128",
3905    "#![feature(integer_atomics)]\n\n",
3906    atomic_umin, atomic_umax,
3907    16,
3908    u128 AtomicU128
3909}
3910
3911#[cfg(target_has_atomic_load_store = "ptr")]
3912#[cfg(not(feature = "ferrocene_certified"))]
3913macro_rules! atomic_int_ptr_sized {
3914    ( $($target_pointer_width:literal $align:literal)* ) => { $(
3915        #[cfg(target_pointer_width = $target_pointer_width)]
3916        atomic_int! {
3917            cfg(target_has_atomic = "ptr"),
3918            cfg(target_has_atomic_equal_alignment = "ptr"),
3919            stable(feature = "rust1", since = "1.0.0"),
3920            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3921            stable(feature = "atomic_debug", since = "1.3.0"),
3922            stable(feature = "atomic_access", since = "1.15.0"),
3923            stable(feature = "atomic_from", since = "1.23.0"),
3924            stable(feature = "atomic_nand", since = "1.27.0"),
3925            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3926            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3927            rustc_diagnostic_item = "AtomicIsize",
3928            "isize",
3929            "",
3930            atomic_min, atomic_max,
3931            $align,
3932            isize AtomicIsize
3933        }
3934        #[cfg(target_pointer_width = $target_pointer_width)]
3935        atomic_int! {
3936            cfg(target_has_atomic = "ptr"),
3937            cfg(target_has_atomic_equal_alignment = "ptr"),
3938            stable(feature = "rust1", since = "1.0.0"),
3939            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3940            stable(feature = "atomic_debug", since = "1.3.0"),
3941            stable(feature = "atomic_access", since = "1.15.0"),
3942            stable(feature = "atomic_from", since = "1.23.0"),
3943            stable(feature = "atomic_nand", since = "1.27.0"),
3944            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3945            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3946            rustc_diagnostic_item = "AtomicUsize",
3947            "usize",
3948            "",
3949            atomic_umin, atomic_umax,
3950            $align,
3951            usize AtomicUsize
3952        }
3953
3954        /// An [`AtomicIsize`] initialized to `0`.
3955        #[cfg(target_pointer_width = $target_pointer_width)]
3956        #[stable(feature = "rust1", since = "1.0.0")]
3957        #[deprecated(
3958            since = "1.34.0",
3959            note = "the `new` function is now preferred",
3960            suggestion = "AtomicIsize::new(0)",
3961        )]
3962        pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0);
3963
3964        /// An [`AtomicUsize`] initialized to `0`.
3965        #[cfg(target_pointer_width = $target_pointer_width)]
3966        #[stable(feature = "rust1", since = "1.0.0")]
3967        #[deprecated(
3968            since = "1.34.0",
3969            note = "the `new` function is now preferred",
3970            suggestion = "AtomicUsize::new(0)",
3971        )]
3972        pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0);
3973    )* };
3974}
3975
3976#[cfg(target_has_atomic_load_store = "ptr")]
3977#[cfg(not(feature = "ferrocene_certified"))]
3978atomic_int_ptr_sized! {
3979    "16" 2
3980    "32" 4
3981    "64" 8
3982}
3983
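/// Returns the strongest failure ordering that a compare-exchange with the given
/// success ordering may use: a failure ordering can never be `Release` or `AcqRel`,
/// so this picks the strongest permitted ordering no stronger than `order`.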
3984#[cfg(not(feature = "ferrocene_certified"))]
3985#[inline]
3986#[cfg(target_has_atomic)]
3987fn strongest_failure_ordering(order: Ordering) -> Ordering {
3988    match order {
3989        Release => Relaxed,
3990        Relaxed => Relaxed,
3991        SeqCst => SeqCst,
3992        Acquire => Acquire,
3993        AcqRel => Acquire,
3994    }
3995}
3996
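/// Atomically stores `val` at `dst` with the given memory ordering.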
3997#[inline]
3998#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3999unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
4000    // SAFETY: the caller must uphold the safety contract for `atomic_store`.
4001    unsafe {
4002        match order {
4003            Relaxed => intrinsics::atomic_store::<T, { AO::Relaxed }>(dst, val),
4004            Release => intrinsics::atomic_store::<T, { AO::Release }>(dst, val),
4005            SeqCst => intrinsics::atomic_store::<T, { AO::SeqCst }>(dst, val),
4006            Acquire => panic!("there is no such thing as an acquire store"),
4007            AcqRel => panic!("there is no such thing as an acquire-release store"),
4008        }
4009    }
4010}
4011
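/// Atomically loads the value at `dst` with the given memory ordering.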
4012#[inline]
4013#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4014unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
4015    // SAFETY: the caller must uphold the safety contract for `atomic_load`.
4016    unsafe {
4017        match order {
4018            Relaxed => intrinsics::atomic_load::<T, { AO::Relaxed }>(dst),
4019            Acquire => intrinsics::atomic_load::<T, { AO::Acquire }>(dst),
4020            SeqCst => intrinsics::atomic_load::<T, { AO::SeqCst }>(dst),
4021            Release => panic!("there is no such thing as a release load"),
4022            AcqRel => panic!("there is no such thing as an acquire-release load"),
4023        }
4024    }
4025}
4026
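/// Stores `val` at `dst` and returns the previous value (like `atomic_exchange` in C++).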
4027#[inline]
4028#[cfg(target_has_atomic)]
4029#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4030unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4031    // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
4032    unsafe {
4033        match order {
4034            Relaxed => intrinsics::atomic_xchg::<T, { AO::Relaxed }>(dst, val),
4035            Acquire => intrinsics::atomic_xchg::<T, { AO::Acquire }>(dst, val),
4036            Release => intrinsics::atomic_xchg::<T, { AO::Release }>(dst, val),
4037            AcqRel => intrinsics::atomic_xchg::<T, { AO::AcqRel }>(dst, val),
4038            SeqCst => intrinsics::atomic_xchg::<T, { AO::SeqCst }>(dst, val),
4039        }
4040    }
4041}
4042
4043/// Returns the previous value (like __sync_fetch_and_add).
4044#[inline]
4045#[cfg(target_has_atomic)]
4046#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4047unsafe fn atomic_add<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4048    // SAFETY: the caller must uphold the safety contract for `atomic_add`.
4049    unsafe {
4050        match order {
4051            Relaxed => intrinsics::atomic_xadd::<T, U, { AO::Relaxed }>(dst, val),
4052            Acquire => intrinsics::atomic_xadd::<T, U, { AO::Acquire }>(dst, val),
4053            Release => intrinsics::atomic_xadd::<T, U, { AO::Release }>(dst, val),
4054            AcqRel => intrinsics::atomic_xadd::<T, U, { AO::AcqRel }>(dst, val),
4055            SeqCst => intrinsics::atomic_xadd::<T, U, { AO::SeqCst }>(dst, val),
4056        }
4057    }
4058}
4059
4060/// Returns the previous value (like __sync_fetch_and_sub).
4061#[inline]
4062#[cfg(target_has_atomic)]
4063#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4064unsafe fn atomic_sub<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4065    // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
4066    unsafe {
4067        match order {
4068            Relaxed => intrinsics::atomic_xsub::<T, U, { AO::Relaxed }>(dst, val),
4069            Acquire => intrinsics::atomic_xsub::<T, U, { AO::Acquire }>(dst, val),
4070            Release => intrinsics::atomic_xsub::<T, U, { AO::Release }>(dst, val),
4071            AcqRel => intrinsics::atomic_xsub::<T, U, { AO::AcqRel }>(dst, val),
4072            SeqCst => intrinsics::atomic_xsub::<T, U, { AO::SeqCst }>(dst, val),
4073        }
4074    }
4075}
4076
4077/// Publicly exposed for stdarch; nobody else should use this.
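///
/// Compares the value at `dst` with `old` and, if they are equal, replaces it with `new`;
/// returns the previous value, as `Ok` if the exchange happened and `Err` otherwise.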
4078#[inline]
4079#[cfg(target_has_atomic)]
4080#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4081#[unstable(feature = "core_intrinsics", issue = "none")]
4082#[doc(hidden)]
4083pub unsafe fn atomic_compare_exchange<T: Copy>(
4084    dst: *mut T,
4085    old: T,
4086    new: T,
4087    success: Ordering,
4088    failure: Ordering,
4089) -> Result<T, T> {
4090    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
4091    let (val, ok) = unsafe {
4092        match (success, failure) {
4093            (Relaxed, Relaxed) => {
4094                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
4095            }
4096            (Relaxed, Acquire) => {
4097                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
4098            }
4099            (Relaxed, SeqCst) => {
4100                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
4101            }
4102            (Acquire, Relaxed) => {
4103                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
4104            }
4105            (Acquire, Acquire) => {
4106                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
4107            }
4108            (Acquire, SeqCst) => {
4109                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
4110            }
4111            (Release, Relaxed) => {
4112                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
4113            }
4114            (Release, Acquire) => {
4115                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
4116            }
4117            (Release, SeqCst) => {
4118                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
4119            }
4120            (AcqRel, Relaxed) => {
4121                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
4122            }
4123            (AcqRel, Acquire) => {
4124                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
4125            }
4126            (AcqRel, SeqCst) => {
4127                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
4128            }
4129            (SeqCst, Relaxed) => {
4130                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
4131            }
4132            (SeqCst, Acquire) => {
4133                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
4134            }
4135            (SeqCst, SeqCst) => {
4136                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
4137            }
4138            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
4139            (_, Release) => panic!("there is no such thing as a release failure ordering"),
4140        }
4141    };
4142    if ok { Ok(val) } else { Err(val) }
4143}
4144
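/// Like `atomic_compare_exchange`, but is allowed to fail spuriously even when the
/// comparison succeeds, so callers must be prepared to retry in a loop.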
4145#[inline]
4146#[cfg(target_has_atomic)]
4147#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4148unsafe fn atomic_compare_exchange_weak<T: Copy>(
4149    dst: *mut T,
4150    old: T,
4151    new: T,
4152    success: Ordering,
4153    failure: Ordering,
4154) -> Result<T, T> {
4155    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
4156    let (val, ok) = unsafe {
4157        match (success, failure) {
4158            (Relaxed, Relaxed) => {
4159                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
4160            }
4161            (Relaxed, Acquire) => {
4162                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
4163            }
4164            (Relaxed, SeqCst) => {
4165                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
4166            }
4167            (Acquire, Relaxed) => {
4168                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
4169            }
4170            (Acquire, Acquire) => {
4171                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
4172            }
4173            (Acquire, SeqCst) => {
4174                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
4175            }
4176            (Release, Relaxed) => {
4177                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
4178            }
4179            (Release, Acquire) => {
4180                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
4181            }
4182            (Release, SeqCst) => {
4183                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
4184            }
4185            (AcqRel, Relaxed) => {
4186                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
4187            }
4188            (AcqRel, Acquire) => {
4189                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
4190            }
4191            (AcqRel, SeqCst) => {
4192                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
4193            }
4194            (SeqCst, Relaxed) => {
4195                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
4196            }
4197            (SeqCst, Acquire) => {
4198                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
4199            }
4200            (SeqCst, SeqCst) => {
4201                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
4202            }
4203            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
4204            (_, Release) => panic!("there is no such thing as a release failure ordering"),
4205        }
4206    };
4207    if ok { Ok(val) } else { Err(val) }
4208}
4209
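/// Returns the previous value after a bitwise "and" with `val` (like __sync_fetch_and_and).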
4210#[inline]
4211#[cfg(target_has_atomic)]
4212#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4213unsafe fn atomic_and<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4214    // SAFETY: the caller must uphold the safety contract for `atomic_and`
4215    unsafe {
4216        match order {
4217            Relaxed => intrinsics::atomic_and::<T, U, { AO::Relaxed }>(dst, val),
4218            Acquire => intrinsics::atomic_and::<T, U, { AO::Acquire }>(dst, val),
4219            Release => intrinsics::atomic_and::<T, U, { AO::Release }>(dst, val),
4220            AcqRel => intrinsics::atomic_and::<T, U, { AO::AcqRel }>(dst, val),
4221            SeqCst => intrinsics::atomic_and::<T, U, { AO::SeqCst }>(dst, val),
4222        }
4223    }
4224}
4225
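/// Returns the previous value after a bitwise "nand" with `val` (like __sync_fetch_and_nand).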
4226#[inline]
4227#[cfg(target_has_atomic)]
4228#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4229unsafe fn atomic_nand<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4230    // SAFETY: the caller must uphold the safety contract for `atomic_nand`
4231    unsafe {
4232        match order {
4233            Relaxed => intrinsics::atomic_nand::<T, U, { AO::Relaxed }>(dst, val),
4234            Acquire => intrinsics::atomic_nand::<T, U, { AO::Acquire }>(dst, val),
4235            Release => intrinsics::atomic_nand::<T, U, { AO::Release }>(dst, val),
4236            AcqRel => intrinsics::atomic_nand::<T, U, { AO::AcqRel }>(dst, val),
4237            SeqCst => intrinsics::atomic_nand::<T, U, { AO::SeqCst }>(dst, val),
4238        }
4239    }
4240}
4241
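/// Returns the previous value after a bitwise "or" with `val` (like __sync_fetch_and_or).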
4242#[inline]
4243#[cfg(target_has_atomic)]
4244#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4245unsafe fn atomic_or<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4246    // SAFETY: the caller must uphold the safety contract for `atomic_or`
4247    unsafe {
4248        match order {
4249            SeqCst => intrinsics::atomic_or::<T, U, { AO::SeqCst }>(dst, val),
4250            Acquire => intrinsics::atomic_or::<T, U, { AO::Acquire }>(dst, val),
4251            Release => intrinsics::atomic_or::<T, U, { AO::Release }>(dst, val),
4252            AcqRel => intrinsics::atomic_or::<T, U, { AO::AcqRel }>(dst, val),
4253            Relaxed => intrinsics::atomic_or::<T, U, { AO::Relaxed }>(dst, val),
4254        }
4255    }
4256}
4257
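/// Returns the previous value after a bitwise "xor" with `val` (like __sync_fetch_and_xor).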
4258#[inline]
4259#[cfg(target_has_atomic)]
4260#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4261unsafe fn atomic_xor<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4262    // SAFETY: the caller must uphold the safety contract for `atomic_xor`
4263    unsafe {
4264        match order {
4265            SeqCst => intrinsics::atomic_xor::<T, U, { AO::SeqCst }>(dst, val),
4266            Acquire => intrinsics::atomic_xor::<T, U, { AO::Acquire }>(dst, val),
4267            Release => intrinsics::atomic_xor::<T, U, { AO::Release }>(dst, val),
4268            AcqRel => intrinsics::atomic_xor::<T, U, { AO::AcqRel }>(dst, val),
4269            Relaxed => intrinsics::atomic_xor::<T, U, { AO::Relaxed }>(dst, val),
4270        }
4271    }
4272}
4273
4274/// Updates `*dst` to the max value of `val` and the old value (signed comparison)
4275#[inline]
4276#[cfg(target_has_atomic)]
4277#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4278#[cfg(not(feature = "ferrocene_certified"))]
4279unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4280    // SAFETY: the caller must uphold the safety contract for `atomic_max`
4281    unsafe {
4282        match order {
4283            Relaxed => intrinsics::atomic_max::<T, { AO::Relaxed }>(dst, val),
4284            Acquire => intrinsics::atomic_max::<T, { AO::Acquire }>(dst, val),
4285            Release => intrinsics::atomic_max::<T, { AO::Release }>(dst, val),
4286            AcqRel => intrinsics::atomic_max::<T, { AO::AcqRel }>(dst, val),
4287            SeqCst => intrinsics::atomic_max::<T, { AO::SeqCst }>(dst, val),
4288        }
4289    }
4290}
4291
4292/// Updates `*dst` to the min value of `val` and the old value (signed comparison)
4293#[inline]
4294#[cfg(target_has_atomic)]
4295#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4296#[cfg(not(feature = "ferrocene_certified"))]
4297unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4298    // SAFETY: the caller must uphold the safety contract for `atomic_min`
4299    unsafe {
4300        match order {
4301            Relaxed => intrinsics::atomic_min::<T, { AO::Relaxed }>(dst, val),
4302            Acquire => intrinsics::atomic_min::<T, { AO::Acquire }>(dst, val),
4303            Release => intrinsics::atomic_min::<T, { AO::Release }>(dst, val),
4304            AcqRel => intrinsics::atomic_min::<T, { AO::AcqRel }>(dst, val),
4305            SeqCst => intrinsics::atomic_min::<T, { AO::SeqCst }>(dst, val),
4306        }
4307    }
4308}
4309
4310/// Updates `*dst` to the max value of `val` and the old value (unsigned comparison)
4311#[inline]
4312#[cfg(target_has_atomic)]
4313#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4314unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4315    // SAFETY: the caller must uphold the safety contract for `atomic_umax`
4316    unsafe {
4317        match order {
4318            Relaxed => intrinsics::atomic_umax::<T, { AO::Relaxed }>(dst, val),
4319            Acquire => intrinsics::atomic_umax::<T, { AO::Acquire }>(dst, val),
4320            Release => intrinsics::atomic_umax::<T, { AO::Release }>(dst, val),
4321            AcqRel => intrinsics::atomic_umax::<T, { AO::AcqRel }>(dst, val),
4322            SeqCst => intrinsics::atomic_umax::<T, { AO::SeqCst }>(dst, val),
4323        }
4324    }
4325}
4326
4327/// Updates `*dst` to the min value of `val` and the old value (unsigned comparison)
4328#[inline]
4329#[cfg(target_has_atomic)]
4330#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4331unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4332    // SAFETY: the caller must uphold the safety contract for `atomic_umin`
4333    unsafe {
4334        match order {
4335            Relaxed => intrinsics::atomic_umin::<T, { AO::Relaxed }>(dst, val),
4336            Acquire => intrinsics::atomic_umin::<T, { AO::Acquire }>(dst, val),
4337            Release => intrinsics::atomic_umin::<T, { AO::Release }>(dst, val),
4338            AcqRel => intrinsics::atomic_umin::<T, { AO::AcqRel }>(dst, val),
4339            SeqCst => intrinsics::atomic_umin::<T, { AO::SeqCst }>(dst, val),
4340        }
4341    }
4342}
4343
4344/// An atomic fence.
4345///
4346/// Fences create synchronization between themselves and atomic operations or fences in other
4347/// threads. To achieve this, a fence prevents the compiler and CPU from reordering certain types of
4348/// memory operations around it.
4349///
4350/// A fence 'A' which has (at least) [`Release`] ordering semantics synchronizes
4351/// with a fence 'B' with (at least) [`Acquire`] semantics if and only if there
4352/// exist operations X and Y, both operating on some atomic object 'm', such
4353/// that A is sequenced before X, Y is sequenced before B, and Y observes
4354/// the change to m. This provides a happens-before relationship between A and B.
4355///
4356/// ```text
4357///     Thread 1                                          Thread 2
4358///
4359/// fence(Release);      A --------------
4360/// m.store(3, Relaxed); X ---------    |
4361///                                |    |
4362///                                |    |
4363///                                -------------> Y  if m.load(Relaxed) == 3 {
4364///                                     |-------> B      fence(Acquire);
4365///                                                      ...
4366///                                                  }
4367/// ```
4368///
4369/// Note that in the example above, it is crucial that the accesses to `m` are atomic. Fences cannot
4370/// be used to establish synchronization among non-atomic accesses in different threads. However,
4371/// thanks to the happens-before relationship between A and B, any non-atomic accesses that
4372/// happen-before A are now also properly synchronized with any non-atomic accesses that
4373/// happen-after B.
4374///
4375/// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize
4376/// with a fence.
4377///
4378/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`]
4379/// and [`Release`] semantics, participates in the global program order of the
4380/// other [`SeqCst`] operations and/or fences.
4381///
4382/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
4383///
4384/// # Panics
4385///
4386/// Panics if `order` is [`Relaxed`].
4387///
4388/// # Examples
4389///
4390/// ```
4391/// use std::sync::atomic::AtomicBool;
4392/// use std::sync::atomic::fence;
4393/// use std::sync::atomic::Ordering;
4394///
4395/// // A mutual exclusion primitive based on a spinlock.
4396/// pub struct Mutex {
4397///     flag: AtomicBool,
4398/// }
4399///
4400/// impl Mutex {
4401///     pub fn new() -> Mutex {
4402///         Mutex {
4403///             flag: AtomicBool::new(false),
4404///         }
4405///     }
4406///
4407///     pub fn lock(&self) {
4408///         // Wait until the old value is `false`.
4409///         while self
4410///             .flag
4411///             .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
4412///             .is_err()
4413///         {}
4414///         // This fence synchronizes-with the store in `unlock`.
4415///         fence(Ordering::Acquire);
4416///     }
4417///
4418///     pub fn unlock(&self) {
4419///         self.flag.store(false, Ordering::Release);
4420///     }
4421/// }
4422/// ```
4423#[inline]
4424#[stable(feature = "rust1", since = "1.0.0")]
4425#[rustc_diagnostic_item = "fence"]
4426#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4427#[cfg(not(feature = "ferrocene_certified"))]
4428pub fn fence(order: Ordering) {
4429    // SAFETY: using an atomic fence is safe.
4430    unsafe {
4431        match order {
4432            Acquire => intrinsics::atomic_fence::<{ AO::Acquire }>(),
4433            Release => intrinsics::atomic_fence::<{ AO::Release }>(),
4434            AcqRel => intrinsics::atomic_fence::<{ AO::AcqRel }>(),
4435            SeqCst => intrinsics::atomic_fence::<{ AO::SeqCst }>(),
4436            Relaxed => panic!("there is no such thing as a relaxed fence"),
4437        }
4438    }
4439}
4440
4441/// A "compiler-only" atomic fence.
4442///
4443/// Like [`fence`], this function establishes synchronization with other atomic operations and
4444/// fences. However, unlike [`fence`], `compiler_fence` only establishes synchronization with
4445/// operations *in the same thread*. This may at first sound rather useless, since code within a
4446/// thread is typically already totally ordered and does not need any further synchronization.
4447/// However, there are cases where code can run on the same thread without being ordered:
4448/// - The most common case is that of a *signal handler*: a signal handler runs in the same thread
4449///   as the code it interrupted, but it is not ordered with respect to that code. `compiler_fence`
4450///   can be used to establish synchronization between a thread and its signal handler, the same way
4451///   that `fence` can be used to establish synchronization across threads.
4452/// - Similar situations can arise in embedded programming with interrupt handlers, or in custom
4453///   implementations of preemptive green threads. In general, `compiler_fence` can establish
4454///   synchronization with code that is guaranteed to run on the same hardware CPU.
4455///
4456/// See [`fence`] for how a fence can be used to achieve synchronization. Note that just like
4457/// [`fence`], synchronization still requires atomic operations to be used in both threads -- it is
4458/// not possible to perform synchronization entirely with fences and non-atomic operations.
4459///
4460/// `compiler_fence` does not emit any machine code, but restricts the kinds of memory reordering
4461/// the compiler is allowed to do. `compiler_fence` corresponds to [`atomic_signal_fence`] in C and
4462/// C++.
4463///
4464/// [`atomic_signal_fence`]: https://en.cppreference.com/w/cpp/atomic/atomic_signal_fence
4465///
4466/// # Panics
4467///
4468/// Panics if `order` is [`Relaxed`].
4469///
4470/// # Examples
4471///
4472/// Without the two `compiler_fence` calls, the read of `IMPORTANT_VARIABLE` in `signal_handler`
4473/// is *undefined behavior* due to a data race, despite everything happening in a single thread.
4474/// This is because the signal handler is considered to run concurrently with its associated
4475/// thread, and explicit synchronization is required to pass data between a thread and its
4476/// signal handler. The code below uses two `compiler_fence` calls to establish the usual
4477/// release-acquire synchronization pattern (see [`fence`] for an image).
4478///
4479/// ```
4480/// use std::sync::atomic::AtomicBool;
4481/// use std::sync::atomic::Ordering;
4482/// use std::sync::atomic::compiler_fence;
4483///
4484/// static mut IMPORTANT_VARIABLE: usize = 0;
4485/// static IS_READY: AtomicBool = AtomicBool::new(false);
4486///
4487/// fn main() {
4488///     unsafe { IMPORTANT_VARIABLE = 42 };
4489///     // Marks earlier writes as being released with future relaxed stores.
4490///     compiler_fence(Ordering::Release);
4491///     IS_READY.store(true, Ordering::Relaxed);
4492/// }
4493///
4494/// fn signal_handler() {
4495///     if IS_READY.load(Ordering::Relaxed) {
4496///         // Acquires writes that were released with relaxed stores that we read from.
4497///         compiler_fence(Ordering::Acquire);
4498///         assert_eq!(unsafe { IMPORTANT_VARIABLE }, 42);
4499///     }
4500/// }
4501/// ```
4502#[inline]
4503#[stable(feature = "compiler_fences", since = "1.21.0")]
4504#[rustc_diagnostic_item = "compiler_fence"]
4505#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4506#[cfg(not(feature = "ferrocene_certified"))]
4507pub fn compiler_fence(order: Ordering) {
4508    // SAFETY: using an atomic fence is safe.
4509    unsafe {
4510        match order {
4511            Acquire => intrinsics::atomic_singlethreadfence::<{ AO::Acquire }>(),
4512            Release => intrinsics::atomic_singlethreadfence::<{ AO::Release }>(),
4513            AcqRel => intrinsics::atomic_singlethreadfence::<{ AO::AcqRel }>(),
4514            SeqCst => intrinsics::atomic_singlethreadfence::<{ AO::SeqCst }>(),
4515            Relaxed => panic!("there is no such thing as a relaxed fence"),
4516        }
4517    }
4518}
4519
4520#[cfg(target_has_atomic_load_store = "8")]
4521#[stable(feature = "atomic_debug", since = "1.3.0")]
4522#[cfg(not(feature = "ferrocene_certified"))]
4523impl fmt::Debug for AtomicBool {
4524    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4525        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4526    }
4527}
4528
4529#[cfg(target_has_atomic_load_store = "ptr")]
4530#[stable(feature = "atomic_debug", since = "1.3.0")]
4531#[cfg(not(feature = "ferrocene_certified"))]
4532impl<T> fmt::Debug for AtomicPtr<T> {
4533    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4534        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4535    }
4536}
4537
4538#[cfg(target_has_atomic_load_store = "ptr")]
4539#[stable(feature = "atomic_pointer", since = "1.24.0")]
4540#[cfg(not(feature = "ferrocene_certified"))]
4541impl<T> fmt::Pointer for AtomicPtr<T> {
4542    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4543        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
4544    }
4545}
4546
4547/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
4548///
4549/// This function is deprecated in favor of [`hint::spin_loop`].
4550///
4551/// [`hint::spin_loop`]: crate::hint::spin_loop
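///
/// # Examples
///
/// A minimal sketch of the suggested replacement; `ready` here is a stand-in
/// for whatever flag the surrounding code actually polls:
///
/// ```
/// use std::sync::atomic::{AtomicBool, Ordering};
///
/// let ready = AtomicBool::new(true);
/// while !ready.load(Ordering::Acquire) {
///     std::hint::spin_loop();
/// }
/// ```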
4552#[inline]
4553#[stable(feature = "spin_loop_hint", since = "1.24.0")]
4554#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
4555#[cfg(not(feature = "ferrocene_certified"))]
4556pub fn spin_loop_hint() {
4557    spin_loop()
4558}