Skip to main content

core/sync/
atomic.rs

1//! Atomic types
2//!
3//! Atomic types provide primitive shared-memory communication between
4//! threads, and are the building blocks of other concurrent
5//! types.
6//!
7//! This module defines atomic versions of a select number of primitive
8//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
9//! [`AtomicI8`], [`AtomicU16`], etc.
10//! Atomic types present operations that, when used correctly, synchronize
11//! updates between threads.
12//!
13//! Atomic variables are safe to share between threads (they implement [`Sync`])
14//! but they do not themselves provide the mechanism for sharing and follow the
15//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
16//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
17//! atomically-reference-counted shared pointer).
18//!
19//! [arc]: ../../../std/sync/struct.Arc.html
20//!
21//! Atomic types may be stored in static variables, initialized using
22//! the constant initializers like [`AtomicBool::new`]. Atomic statics
23//! are often used for lazy global initialization.
24//!
25//! ## Memory model for atomic accesses
26//!
27//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically the rules
28//! from the [`intro.races`][cpp-intro.races] section, without the "consume" memory ordering. Since
29//! C++ uses an object-based memory model whereas Rust is access-based, a bit of translation work
30//! has to be done to apply the C++ rules to Rust: whenever C++ talks about "the value of an
31//! object", we understand that to mean the resulting bytes obtained when doing a read. When the C++
32//! standard talks about "the value of an atomic object", this refers to the result of doing an
33//! atomic load (via the operations provided in this module). A "modification of an atomic object"
34//! refers to an atomic store.
35//!
36//! The end result is *almost* equivalent to saying that creating a *shared reference* to one of the
37//! Rust atomic types corresponds to creating an `atomic_ref` in C++, with the `atomic_ref` being
38//! destroyed when the lifetime of the shared reference ends. The main difference is that Rust
39//! permits concurrent atomic and non-atomic reads to the same memory as those cause no issue in the
40//! C++ memory model, they are just forbidden in C++ because memory is partitioned into "atomic
41//! objects" and "non-atomic objects" (with `atomic_ref` temporarily converting a non-atomic object
42//! into an atomic object).
43//!
44//! The most important aspect of this model is that *data races* are undefined behavior. A data race
45//! is defined as conflicting non-synchronized accesses where at least one of the accesses is
46//! non-atomic. Here, accesses are *conflicting* if they affect overlapping regions of memory and at
47//! least one of them is a write. (A `compare_exchange` or `compare_exchange_weak` that does not
48//! succeed is not considered a write.) They are *non-synchronized* if neither of them
49//! *happens-before* the other, according to the happens-before order of the memory model.
50//!
51//! The other possible cause of undefined behavior in the memory model are mixed-size accesses: Rust
52//! inherits the C++ limitation that non-synchronized conflicting atomic accesses may not partially
53//! overlap. In other words, every pair of non-synchronized atomic accesses must be either disjoint,
54//! access the exact same memory (including using the same access size), or both be reads.
55//!
56//! Each atomic access takes an [`Ordering`] which defines how the operation interacts with the
57//! happens-before order. These orderings behave the same as the corresponding [C++20 atomic
58//! orderings][cpp_memory_order]. For more information, see the [nomicon].
59//!
60//! [cpp]: https://en.cppreference.com/w/cpp/atomic
61//! [cpp-intro.races]: https://timsong-cpp.github.io/cppwp/n4868/intro.multithread#intro.races
62//! [cpp_memory_order]: https://en.cppreference.com/w/cpp/atomic/memory_order
63//! [nomicon]: ../../../nomicon/atomics.html
64//!
65//! ```rust,no_run undefined_behavior
66//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
67//! use std::mem::transmute;
68//! use std::thread;
69//!
70//! let atomic = AtomicU16::new(0);
71//!
72//! thread::scope(|s| {
73//!     // This is UB: conflicting non-synchronized accesses, at least one of which is non-atomic.
74//!     s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
75//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
76//! });
77//!
78//! thread::scope(|s| {
79//!     // This is fine: the accesses do not conflict (as none of them performs any modification).
80//!     // In C++ this would be disallowed since creating an `atomic_ref` precludes
81//!     // further non-atomic accesses, but Rust does not have that limitation.
82//!     s.spawn(|| atomic.load(Ordering::Relaxed)); // atomic load
83//!     s.spawn(|| unsafe { atomic.as_ptr().read() }); // non-atomic read
84//! });
85//!
86//! thread::scope(|s| {
87//!     // This is fine: `join` synchronizes the code in a way such that the atomic
88//!     // store happens-before the non-atomic write.
89//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
90//!     handle.join().expect("thread won't panic"); // synchronize
91//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
92//! });
93//!
94//! thread::scope(|s| {
95//!     // This is UB: non-synchronized conflicting differently-sized atomic accesses.
96//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
97//!     s.spawn(|| unsafe {
98//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
99//!         differently_sized.store(2, Ordering::Relaxed);
100//!     });
101//! });
102//!
103//! thread::scope(|s| {
104//!     // This is fine: `join` synchronizes the code in a way such that
105//!     // the 1-byte store happens-before the 2-byte store.
106//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
107//!     handle.join().expect("thread won't panic");
108//!     s.spawn(|| unsafe {
109//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
110//!         differently_sized.store(2, Ordering::Relaxed);
111//!     });
112//! });
113//! ```
114//!
115//! # Portability
116//!
117//! All atomic types in this module are guaranteed to be [lock-free] if they're
118//! available. This means they don't internally acquire a global mutex. Atomic
119//! types and operations are not guaranteed to be wait-free. This means that
120//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
121//!
122//! Atomic operations may be implemented at the instruction layer with
123//! larger-size atomics. For example some platforms use 4-byte atomic
124//! instructions to implement `AtomicI8`. Note that this emulation should not
125//! have an impact on correctness of code, it's just something to be aware of.
126//!
127//! The atomic types in this module might not be available on all platforms. The
128//! atomic types here are all widely available, however, and can generally be
129//! relied upon existing. Some notable exceptions are:
130//!
131//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
132//!   `AtomicI64` types.
133//! * Legacy ARM platforms like ARMv4T and ARMv5TE have very limited hardware
134//!   support for atomics. The bare-metal targets disable this module
135//!   entirely, but the Linux targets [use the kernel] to assist (which comes
136//!   with a performance penalty). It's not until ARMv6K onwards that ARM CPUs
137//!   have support for load/store and Compare and Swap (CAS) atomics in hardware.
138//! * ARMv6-M and ARMv8-M baseline targets (`thumbv6m-*` and
139//!   `thumbv8m.base-*`) only provide `load` and `store` operations, and do
140//!   not support Compare and Swap (CAS) operations, such as `swap`,
141//!   `fetch_add`, etc. Full CAS support is available on ARMv7-M and ARMv8-M
142//!   Mainline (`thumbv7m-*`, `thumbv7em*` and `thumbv8m.main-*`).
143//!
144//! [use the kernel]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
145//!
146//! Note that future platforms may be added that also do not have support for
147//! some atomic operations. Maximally portable code will want to be careful
148//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
149//! generally the most portable, but even then they're not available everywhere.
150//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
151//! `core` does not.
152//!
153//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
154//! compile based on the target's supported bit widths. It is a key-value
155//! option set for each supported size, with values "8", "16", "32", "64",
156//! "128", and "ptr" for pointer-sized atomics.
157//!
158//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
159//!
160//! # Atomic accesses to read-only memory
161//!
162//! In general, *all* atomic accesses on read-only memory are undefined behavior. For instance, attempting
163//! to do a `compare_exchange` that will definitely fail (making it conceptually a read-only
164//! operation) can still cause a segmentation fault if the underlying memory page is mapped read-only. Since
165//! atomic `load`s might be implemented using compare-exchange operations, even a `load` can fault
166//! on read-only memory.
167//!
168//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
169//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
170//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
171//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
172//! is read-write; the only exceptions are memory created by `const` items or `static` items without
173//! interior mutability, and memory that was specifically marked as read-only by the operating
174//! system via platform-specific APIs.
175//!
176//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
177//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
178//! undefined behavior. The exact size limit for what makes a load "sufficiently small" varies
179//! depending on the target:
180//!
181//! | `target_arch` | Size limit |
182//! |---------------|---------|
183//! | `x86`, `arm`, `loongarch32`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
184//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
185//!
186//! Atomics loads that are larger than this limit as well as atomic loads with ordering other
187//! than `Relaxed`, as well as *all* atomic loads on targets not listed in the table, might still be
188//! read-only under certain conditions, but that is not a stable guarantee and should not be relied
189//! upon.
190//!
191//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
192//! acquire fence instead.
193//!
194//! # Examples
195//!
196//! A simple spinlock:
197//!
198//! ```ignore-wasm
199//! use std::sync::Arc;
200//! use std::sync::atomic::{AtomicUsize, Ordering};
201//! use std::{hint, thread};
202//!
203//! fn main() {
204//!     let spinlock = Arc::new(AtomicUsize::new(1));
205//!
206//!     let spinlock_clone = Arc::clone(&spinlock);
207//!
208//!     let thread = thread::spawn(move || {
209//!         spinlock_clone.store(0, Ordering::Release);
210//!     });
211//!
212//!     // Wait for the other thread to release the lock
213//!     while spinlock.load(Ordering::Acquire) != 0 {
214//!         hint::spin_loop();
215//!     }
216//!
217//!     if let Err(panic) = thread.join() {
218//!         println!("Thread had an error: {panic:?}");
219//!     }
220//! }
221//! ```
222//!
223//! Keep a global count of live threads:
224//!
225//! ```
226//! use std::sync::atomic::{AtomicUsize, Ordering};
227//!
228//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
229//!
230//! // Note that Relaxed ordering doesn't synchronize anything
231//! // except the global thread counter itself.
232//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
233//! // Note that this number may not be true at the moment of printing
234//! // because some other thread may have changed static value already.
235//! println!("live threads: {}", old_thread_count + 1);
236//! ```
237
238#![stable(feature = "rust1", since = "1.0.0")]
239#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
240#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
241// Clippy complains about the pattern of "safe function calling unsafe function taking pointers".
242// This happens with AtomicPtr intrinsics but is fine, as the pointers clippy is concerned about
243// are just normal values that get loaded/stored, but not dereferenced.
244#![allow(clippy::not_unsafe_ptr_arg_deref)]
245
246use self::Ordering::*;
247use crate::cell::UnsafeCell;
248#[cfg(not(feature = "ferrocene_subset"))]
249use crate::hint::spin_loop;
250use crate::intrinsics::AtomicOrdering as AO;
251use crate::mem::transmute;
252use crate::{fmt, intrinsics};
253
254#[unstable(
255    feature = "atomic_internals",
256    reason = "implementation detail which may disappear or be replaced at any time",
257    issue = "none"
258)]
259#[expect(missing_debug_implementations)]
260mod private {
261    pub(super) trait Sealed {}
262
263    #[cfg(target_has_atomic_load_store = "8")]
264    #[repr(C, align(1))]
265    pub struct Align1<T>(T);
266    #[cfg(target_has_atomic_load_store = "16")]
267    #[repr(C, align(2))]
268    pub struct Align2<T>(T);
269    #[cfg(target_has_atomic_load_store = "32")]
270    #[repr(C, align(4))]
271    pub struct Align4<T>(T);
272    #[cfg(target_has_atomic_load_store = "64")]
273    #[repr(C, align(8))]
274    pub struct Align8<T>(T);
275    #[cfg(target_has_atomic_load_store = "128")]
276    #[repr(C, align(16))]
277    pub struct Align16<T>(T);
278}
279
280/// A marker trait for primitive types which can be modified atomically.
281///
282/// This is an implementation detail for <code>[Atomic]\<T></code> which may disappear or be replaced at any time.
283//
284// # Safety
285//
286// Types implementing this trait must be primitives that can be modified atomically.
287//
288// The associated `Self::Storage` type must have the same size, but may have fewer validity
289// invariants or a higher alignment requirement than `Self`.
290#[unstable(
291    feature = "atomic_internals",
292    reason = "implementation detail which may disappear or be replaced at any time",
293    issue = "none"
294)]
295#[expect(private_bounds)]
296pub unsafe trait AtomicPrimitive: Sized + Copy + private::Sealed {
297    /// Temporary implementation detail.
298    type Storage: Sized;
299}
300
301macro impl_atomic_primitive(
302    [$($T:ident)?] $Primitive:ty as $Storage:ident<$Operand:ty>, size($size:literal)
303) {
304    impl $(<$T>)? private::Sealed for $Primitive {}
305
306    #[unstable(
307        feature = "atomic_internals",
308        reason = "implementation detail which may disappear or be replaced at any time",
309        issue = "none"
310    )]
311    #[cfg(target_has_atomic_load_store = $size)]
312    unsafe impl $(<$T>)? AtomicPrimitive for $Primitive {
313        type Storage = private::$Storage<$Operand>;
314    }
315}
316
317impl_atomic_primitive!([] bool as Align1<u8>, size("8"));
318impl_atomic_primitive!([] i8 as Align1<i8>, size("8"));
319impl_atomic_primitive!([] u8 as Align1<u8>, size("8"));
320impl_atomic_primitive!([] i16 as Align2<i16>, size("16"));
321impl_atomic_primitive!([] u16 as Align2<u16>, size("16"));
322impl_atomic_primitive!([] i32 as Align4<i32>, size("32"));
323impl_atomic_primitive!([] u32 as Align4<u32>, size("32"));
324impl_atomic_primitive!([] i64 as Align8<i64>, size("64"));
325impl_atomic_primitive!([] u64 as Align8<u64>, size("64"));
326impl_atomic_primitive!([] i128 as Align16<i128>, size("128"));
327impl_atomic_primitive!([] u128 as Align16<u128>, size("128"));
328
329#[cfg(target_pointer_width = "16")]
330impl_atomic_primitive!([] isize as Align2<isize>, size("ptr"));
331#[cfg(target_pointer_width = "32")]
332impl_atomic_primitive!([] isize as Align4<isize>, size("ptr"));
333#[cfg(target_pointer_width = "64")]
334impl_atomic_primitive!([] isize as Align8<isize>, size("ptr"));
335
336#[cfg(target_pointer_width = "16")]
337impl_atomic_primitive!([] usize as Align2<usize>, size("ptr"));
338#[cfg(target_pointer_width = "32")]
339impl_atomic_primitive!([] usize as Align4<usize>, size("ptr"));
340#[cfg(target_pointer_width = "64")]
341impl_atomic_primitive!([] usize as Align8<usize>, size("ptr"));
342
343#[cfg(target_pointer_width = "16")]
344impl_atomic_primitive!([T] *mut T as Align2<*mut T>, size("ptr"));
345#[cfg(target_pointer_width = "32")]
346impl_atomic_primitive!([T] *mut T as Align4<*mut T>, size("ptr"));
347#[cfg(target_pointer_width = "64")]
348impl_atomic_primitive!([T] *mut T as Align8<*mut T>, size("ptr"));
349
350/// A memory location which can be safely modified from multiple threads.
351///
352/// This has the same size and bit validity as the underlying type `T`. However,
353/// the alignment of this type is always equal to its size, even on targets where
354/// `T` has alignment less than its size.
355///
356/// For more about the differences between atomic types and non-atomic types as
357/// well as information about the portability of this type, please see the
358/// [module-level documentation].
359///
360/// **Note:** This type is only available on platforms that support atomic loads
361/// and stores of `T`.
362///
363/// [module-level documentation]: crate::sync::atomic
364#[unstable(feature = "generic_atomic", issue = "130539")]
365#[repr(C)]
366#[rustc_diagnostic_item = "Atomic"]
367pub struct Atomic<T: AtomicPrimitive> {
368    v: UnsafeCell<T::Storage>,
369}
370
371#[stable(feature = "rust1", since = "1.0.0")]
372unsafe impl<T: AtomicPrimitive> Send for Atomic<T> {}
373#[stable(feature = "rust1", since = "1.0.0")]
374unsafe impl<T: AtomicPrimitive> Sync for Atomic<T> {}
375
376// Some architectures don't have byte-sized atomics, which results in LLVM
377// emulating them using a LL/SC loop. However for AtomicBool we can take
378// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
379// instead, which LLVM can emulate using a larger atomic OR/AND operation.
380//
381// This list should only contain architectures which have word-sized atomic-or/
382// atomic-and instructions but don't natively support byte-sized atomics.
383#[cfg(target_has_atomic = "8")]
384const EMULATE_ATOMIC_BOOL: bool = cfg!(any(
385    target_arch = "riscv32",
386    target_arch = "riscv64",
387    target_arch = "loongarch32",
388    target_arch = "loongarch64"
389));
390
391/// A boolean type which can be safely shared between threads.
392///
393/// This type has the same size, alignment, and bit validity as a [`bool`].
394///
395/// **Note**: This type is only available on platforms that support atomic
396/// loads and stores of `u8`.
397#[cfg(target_has_atomic_load_store = "8")]
398#[stable(feature = "rust1", since = "1.0.0")]
399pub type AtomicBool = Atomic<bool>;
400
401#[cfg(target_has_atomic_load_store = "8")]
402#[stable(feature = "rust1", since = "1.0.0")]
403#[cfg(not(feature = "ferrocene_subset"))]
404impl Default for AtomicBool {
405    /// Creates an `AtomicBool` initialized to `false`.
406    #[inline]
407    fn default() -> Self {
408        Self::new(false)
409    }
410}
411
412/// A raw pointer type which can be safely shared between threads.
413///
414/// This type has the same size and bit validity as a `*mut T`.
415///
416/// **Note**: This type is only available on platforms that support atomic
417/// loads and stores of pointers. Its size depends on the target pointer's size.
418#[cfg(target_has_atomic_load_store = "ptr")]
419#[stable(feature = "rust1", since = "1.0.0")]
420#[cfg(not(feature = "ferrocene_subset"))]
421pub type AtomicPtr<T> = Atomic<*mut T>;
422
423#[cfg(target_has_atomic_load_store = "ptr")]
424#[stable(feature = "rust1", since = "1.0.0")]
425#[cfg(not(feature = "ferrocene_subset"))]
426impl<T> Default for AtomicPtr<T> {
427    /// Creates a null `AtomicPtr<T>`.
428    fn default() -> AtomicPtr<T> {
429        AtomicPtr::new(crate::ptr::null_mut())
430    }
431}
432
433/// Atomic memory orderings
434///
435/// Memory orderings specify the way atomic operations synchronize memory.
436/// In its weakest [`Ordering::Relaxed`], only the memory directly touched by the
437/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
438/// operations synchronize other memory while additionally preserving a total order of such
439/// operations across all threads.
440///
441/// Rust's memory orderings are [the same as those of
442/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
443///
444/// For more information see the [nomicon].
445///
446/// [nomicon]: ../../../nomicon/atomics.html
447#[stable(feature = "rust1", since = "1.0.0")]
448#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
449#[non_exhaustive]
450#[rustc_diagnostic_item = "Ordering"]
451pub enum Ordering {
452    /// No ordering constraints, only atomic operations.
453    ///
454    /// Corresponds to [`memory_order_relaxed`] in C++20.
455    ///
456    /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
457    #[stable(feature = "rust1", since = "1.0.0")]
458    Relaxed,
459    /// When coupled with a store, all previous operations become ordered
460    /// before any load of this value with [`Acquire`] (or stronger) ordering.
461    /// In particular, all previous writes become visible to all threads
462    /// that perform an [`Acquire`] (or stronger) load of this value.
463    ///
464    /// Notice that using this ordering for an operation that combines loads
465    /// and stores leads to a [`Relaxed`] load operation!
466    ///
467    /// This ordering is only applicable for operations that can perform a store.
468    ///
469    /// Corresponds to [`memory_order_release`] in C++20.
470    ///
471    /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
472    #[stable(feature = "rust1", since = "1.0.0")]
473    Release,
474    /// When coupled with a load, if the loaded value was written by a store operation with
475    /// [`Release`] (or stronger) ordering, then all subsequent operations
476    /// become ordered after that store. In particular, all subsequent loads will see data
477    /// written before the store.
478    ///
479    /// Notice that using this ordering for an operation that combines loads
480    /// and stores leads to a [`Relaxed`] store operation!
481    ///
482    /// This ordering is only applicable for operations that can perform a load.
483    ///
484    /// Corresponds to [`memory_order_acquire`] in C++20.
485    ///
486    /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
487    #[stable(feature = "rust1", since = "1.0.0")]
488    Acquire,
489    /// Has the effects of both [`Acquire`] and [`Release`] together:
490    /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
491    ///
492    /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
493    /// not performing any store and hence it has just [`Acquire`] ordering. However,
494    /// `AcqRel` will never perform [`Relaxed`] accesses.
495    ///
496    /// This ordering is only applicable for operations that combine both loads and stores.
497    ///
498    /// Corresponds to [`memory_order_acq_rel`] in C++20.
499    ///
500    /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
501    #[stable(feature = "rust1", since = "1.0.0")]
502    AcqRel,
503    /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
504    /// operations, respectively) with the additional guarantee that all threads see all
505    /// sequentially consistent operations in the same order.
506    ///
507    /// Corresponds to [`memory_order_seq_cst`] in C++20.
508    ///
509    /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
510    #[stable(feature = "rust1", since = "1.0.0")]
511    SeqCst,
512}
513
514/// An [`AtomicBool`] initialized to `false`.
515#[cfg(target_has_atomic_load_store = "8")]
516#[stable(feature = "rust1", since = "1.0.0")]
517#[deprecated(
518    since = "1.34.0",
519    note = "the `new` function is now preferred",
520    suggestion = "AtomicBool::new(false)"
521)]
522#[cfg(not(feature = "ferrocene_subset"))]
523pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);
524
525#[cfg(target_has_atomic_load_store = "8")]
526impl AtomicBool {
527    /// Creates a new `AtomicBool`.
528    ///
529    /// # Examples
530    ///
531    /// ```
532    /// use std::sync::atomic::AtomicBool;
533    ///
534    /// let atomic_true = AtomicBool::new(true);
535    /// let atomic_false = AtomicBool::new(false);
536    /// ```
537    #[inline]
538    #[stable(feature = "rust1", since = "1.0.0")]
539    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
540    #[must_use]
541    pub const fn new(v: bool) -> AtomicBool {
542        // SAFETY:
543        // `Atomic<T>` is essentially a transparent wrapper around `T`.
544        unsafe { transmute(v) }
545    }
546
547    /// Creates a new `AtomicBool` from a pointer.
548    ///
549    /// # Examples
550    ///
551    /// ```
552    /// use std::sync::atomic::{self, AtomicBool};
553    ///
554    /// // Get a pointer to an allocated value
555    /// let ptr: *mut bool = Box::into_raw(Box::new(false));
556    ///
557    /// assert!(ptr.cast::<AtomicBool>().is_aligned());
558    ///
559    /// {
560    ///     // Create an atomic view of the allocated value
561    ///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
562    ///
563    ///     // Use `atomic` for atomic operations, possibly share it with other threads
564    ///     atomic.store(true, atomic::Ordering::Relaxed);
565    /// }
566    ///
567    /// // It's ok to non-atomically access the value behind `ptr`,
568    /// // since the reference to the atomic ended its lifetime in the block above
569    /// assert_eq!(unsafe { *ptr }, true);
570    ///
571    /// // Deallocate the value
572    /// unsafe { drop(Box::from_raw(ptr)) }
573    /// ```
574    ///
575    /// # Safety
576    ///
577    /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that this is always true, since
578    ///   `align_of::<AtomicBool>() == 1`).
579    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
580    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
581    ///   allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
582    ///   sizes, without synchronization.
583    ///
584    /// [valid]: crate::ptr#safety
585    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
586    #[inline]
587    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
588    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
589    #[cfg(not(feature = "ferrocene_subset"))]
590    pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
591        // SAFETY: guaranteed by the caller
592        unsafe { &*ptr.cast() }
593    }
594
595    /// Returns a mutable reference to the underlying [`bool`].
596    ///
597    /// This is safe because the mutable reference guarantees that no other threads are
598    /// concurrently accessing the atomic data.
599    ///
600    /// # Examples
601    ///
602    /// ```
603    /// use std::sync::atomic::{AtomicBool, Ordering};
604    ///
605    /// let mut some_bool = AtomicBool::new(true);
606    /// assert_eq!(*some_bool.get_mut(), true);
607    /// *some_bool.get_mut() = false;
608    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
609    /// ```
610    #[inline]
611    #[stable(feature = "atomic_access", since = "1.15.0")]
612    #[cfg(not(feature = "ferrocene_subset"))]
613    pub fn get_mut(&mut self) -> &mut bool {
614        // SAFETY: the mutable reference guarantees unique ownership.
615        unsafe { &mut *self.as_ptr() }
616    }
617
618    /// Gets atomic access to a `&mut bool`.
619    ///
620    /// # Examples
621    ///
622    /// ```
623    /// #![feature(atomic_from_mut)]
624    /// use std::sync::atomic::{AtomicBool, Ordering};
625    ///
626    /// let mut some_bool = true;
627    /// let a = AtomicBool::from_mut(&mut some_bool);
628    /// a.store(false, Ordering::Relaxed);
629    /// assert_eq!(some_bool, false);
630    /// ```
631    #[inline]
632    #[cfg(target_has_atomic_equal_alignment = "8")]
633    #[unstable(feature = "atomic_from_mut", issue = "76314")]
634    #[cfg(not(feature = "ferrocene_subset"))]
635    pub fn from_mut(v: &mut bool) -> &mut Self {
636        // SAFETY: the mutable reference guarantees unique ownership, and
637        // alignment of both `bool` and `Self` is 1.
638        unsafe { &mut *(v as *mut bool as *mut Self) }
639    }
640
641    /// Gets non-atomic access to a `&mut [AtomicBool]` slice.
642    ///
643    /// This is safe because the mutable reference guarantees that no other threads are
644    /// concurrently accessing the atomic data.
645    ///
646    /// # Examples
647    ///
648    /// ```ignore-wasm
649    /// #![feature(atomic_from_mut)]
650    /// use std::sync::atomic::{AtomicBool, Ordering};
651    ///
652    /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
653    ///
654    /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
655    /// assert_eq!(view, [false; 10]);
656    /// view[..5].copy_from_slice(&[true; 5]);
657    ///
658    /// std::thread::scope(|s| {
659    ///     for t in &some_bools[..5] {
660    ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
661    ///     }
662    ///
663    ///     for f in &some_bools[5..] {
664    ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
665    ///     }
666    /// });
667    /// ```
668    #[inline]
669    #[unstable(feature = "atomic_from_mut", issue = "76314")]
670    #[cfg(not(feature = "ferrocene_subset"))]
671    pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
672        // SAFETY: the mutable reference guarantees unique ownership.
673        unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
674    }
675
676    /// Gets atomic access to a `&mut [bool]` slice.
677    ///
678    /// # Examples
679    ///
680    /// ```rust,ignore-wasm
681    /// #![feature(atomic_from_mut)]
682    /// use std::sync::atomic::{AtomicBool, Ordering};
683    ///
684    /// let mut some_bools = [false; 10];
685    /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
686    /// std::thread::scope(|s| {
687    ///     for i in 0..a.len() {
688    ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
689    ///     }
690    /// });
691    /// assert_eq!(some_bools, [true; 10]);
692    /// ```
693    #[inline]
694    #[cfg(target_has_atomic_equal_alignment = "8")]
695    #[unstable(feature = "atomic_from_mut", issue = "76314")]
696    #[cfg(not(feature = "ferrocene_subset"))]
697    pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
698        // SAFETY: the mutable reference guarantees unique ownership, and
699        // alignment of both `bool` and `Self` is 1.
700        unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
701    }
702
703    /// Consumes the atomic and returns the contained value.
704    ///
705    /// This is safe because passing `self` by value guarantees that no other threads are
706    /// concurrently accessing the atomic data.
707    ///
708    /// # Examples
709    ///
710    /// ```
711    /// use std::sync::atomic::AtomicBool;
712    ///
713    /// let some_bool = AtomicBool::new(true);
714    /// assert_eq!(some_bool.into_inner(), true);
715    /// ```
716    #[inline]
717    #[stable(feature = "atomic_access", since = "1.15.0")]
718    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
719    #[cfg(not(feature = "ferrocene_subset"))]
720    pub const fn into_inner(self) -> bool {
721        // SAFETY:
722        // * `Atomic<T>` is essentially a transparent wrapper around `T`.
723        // * all operations on `Atomic<bool>` ensure that `T::Storage` remains
724        //   a valid `bool`.
725        unsafe { transmute(self) }
726    }
727
728    /// Loads a value from the bool.
729    ///
730    /// `load` takes an [`Ordering`] argument which describes the memory ordering
731    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
732    ///
733    /// # Panics
734    ///
735    /// Panics if `order` is [`Release`] or [`AcqRel`].
736    ///
737    /// # Examples
738    ///
739    /// ```
740    /// use std::sync::atomic::{AtomicBool, Ordering};
741    ///
742    /// let some_bool = AtomicBool::new(true);
743    ///
744    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
745    /// ```
746    #[inline]
747    #[stable(feature = "rust1", since = "1.0.0")]
748    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
749    pub fn load(&self, order: Ordering) -> bool {
750        // SAFETY: any data races are prevented by atomic intrinsics and the raw
751        // pointer passed in is valid because we got it from a reference.
752        unsafe { atomic_load(self.v.get().cast::<u8>(), order) != 0 }
753    }
754
755    /// Stores a value into the bool.
756    ///
757    /// `store` takes an [`Ordering`] argument which describes the memory ordering
758    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
759    ///
760    /// # Panics
761    ///
762    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
763    ///
764    /// # Examples
765    ///
766    /// ```
767    /// use std::sync::atomic::{AtomicBool, Ordering};
768    ///
769    /// let some_bool = AtomicBool::new(true);
770    ///
771    /// some_bool.store(false, Ordering::Relaxed);
772    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
773    /// ```
774    #[inline]
775    #[stable(feature = "rust1", since = "1.0.0")]
776    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
777    #[rustc_should_not_be_called_on_const_items]
778    pub fn store(&self, val: bool, order: Ordering) {
779        // SAFETY: any data races are prevented by atomic intrinsics and the raw
780        // pointer passed in is valid because we got it from a reference.
781        unsafe {
782            atomic_store(self.v.get().cast::<u8>(), val as u8, order);
783        }
784    }
785
786    /// Stores a value into the bool, returning the previous value.
787    ///
788    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
789    /// of this operation. All ordering modes are possible. Note that using
790    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
791    /// using [`Release`] makes the load part [`Relaxed`].
792    ///
793    /// **Note:** This method is only available on platforms that support atomic
794    /// operations on `u8`.
795    ///
796    /// # Examples
797    ///
798    /// ```
799    /// use std::sync::atomic::{AtomicBool, Ordering};
800    ///
801    /// let some_bool = AtomicBool::new(true);
802    ///
803    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
804    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
805    /// ```
806    #[inline]
807    #[stable(feature = "rust1", since = "1.0.0")]
808    #[cfg(target_has_atomic = "8")]
809    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
810    #[rustc_should_not_be_called_on_const_items]
811    pub fn swap(&self, val: bool, order: Ordering) -> bool {
812        if EMULATE_ATOMIC_BOOL {
813            #[ferrocene::annotation(
814                "Cannot be covered as this code does not run in any of the platforms for which we track coverage"
815            )]
816            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
817        } else {
818            // SAFETY: data races are prevented by atomic intrinsics.
819            unsafe { atomic_swap(self.v.get().cast::<u8>(), val as u8, order) != 0 }
820        }
821    }
822
823    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
824    ///
825    /// The return value is always the previous value. If it is equal to `current`, then the value
826    /// was updated.
827    ///
828    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
829    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
830    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
831    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
832    /// happens, and using [`Release`] makes the load part [`Relaxed`].
833    ///
834    /// **Note:** This method is only available on platforms that support atomic
835    /// operations on `u8`.
836    ///
837    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
838    ///
839    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
840    /// memory orderings:
841    ///
842    /// Original | Success | Failure
843    /// -------- | ------- | -------
844    /// Relaxed  | Relaxed | Relaxed
845    /// Acquire  | Acquire | Acquire
846    /// Release  | Release | Relaxed
847    /// AcqRel   | AcqRel  | Acquire
848    /// SeqCst   | SeqCst  | SeqCst
849    ///
850    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
851    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
852    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
853    /// rather than to infer success vs failure based on the value that was read.
854    ///
855    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
856    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
857    /// which allows the compiler to generate better assembly code when the compare and swap
858    /// is used in a loop.
859    ///
860    /// # Examples
861    ///
862    /// ```
863    /// use std::sync::atomic::{AtomicBool, Ordering};
864    ///
865    /// let some_bool = AtomicBool::new(true);
866    ///
867    /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
868    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
869    ///
870    /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
871    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
872    /// ```
873    #[cfg(not(feature = "ferrocene_subset"))]
874    #[inline]
875    #[stable(feature = "rust1", since = "1.0.0")]
876    #[deprecated(
877        since = "1.50.0",
878        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
879    )]
880    #[cfg(target_has_atomic = "8")]
881    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
882    #[rustc_should_not_be_called_on_const_items]
883    pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
884        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
885            Ok(x) => x,
886            Err(x) => x,
887        }
888    }
889
890    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
891    ///
892    /// The return value is a result indicating whether the new value was written and containing
893    /// the previous value. On success this value is guaranteed to be equal to `current`.
894    ///
895    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
896    /// ordering of this operation. `success` describes the required ordering for the
897    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
898    /// `failure` describes the required ordering for the load operation that takes place when
899    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
900    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
901    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
902    ///
903    /// **Note:** This method is only available on platforms that support atomic
904    /// operations on `u8`.
905    ///
906    /// # Examples
907    ///
908    /// ```
909    /// use std::sync::atomic::{AtomicBool, Ordering};
910    ///
911    /// let some_bool = AtomicBool::new(true);
912    ///
913    /// assert_eq!(some_bool.compare_exchange(true,
914    ///                                       false,
915    ///                                       Ordering::Acquire,
916    ///                                       Ordering::Relaxed),
917    ///            Ok(true));
918    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
919    ///
920    /// assert_eq!(some_bool.compare_exchange(true, true,
921    ///                                       Ordering::SeqCst,
922    ///                                       Ordering::Acquire),
923    ///            Err(false));
924    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
925    /// ```
926    ///
927    /// # Considerations
928    ///
929    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
930    /// of CAS operations. In particular, a load of the value followed by a successful
931    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
932    /// changed the value in the interim. This is usually important when the *equality* check in
933    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
934    /// does not necessarily imply identity. In this case, `compare_exchange` can lead to the
935    /// [ABA problem].
936    ///
937    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
938    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
939    #[inline]
940    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
941    #[doc(alias = "compare_and_swap")]
942    #[cfg(target_has_atomic = "8")]
943    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
944    #[rustc_should_not_be_called_on_const_items]
945    pub fn compare_exchange(
946        &self,
947        current: bool,
948        new: bool,
949        success: Ordering,
950        failure: Ordering,
951    ) -> Result<bool, bool> {
952        if EMULATE_ATOMIC_BOOL {
953            #[ferrocene::annotation(
954                "Cannot be covered as this code does not run in any of the platforms for which we track coverage"
955            )]
956            {
957                // Pick the strongest ordering from success and failure.
958                let order = match (success, failure) {
959                    (SeqCst, _) => SeqCst,
960                    (_, SeqCst) => SeqCst,
961                    (AcqRel, _) => AcqRel,
962                    (_, AcqRel) => {
963                        panic!("there is no such thing as an acquire-release failure ordering")
964                    }
965                    (Release, Acquire) => AcqRel,
966                    (Acquire, _) => Acquire,
967                    (_, Acquire) => Acquire,
968                    (Release, Relaxed) => Release,
969                    (_, Release) => panic!("there is no such thing as a release failure ordering"),
970                    (Relaxed, Relaxed) => Relaxed,
971                };
972                let old = if current == new {
973                    // This is a no-op, but we still need to perform the operation
974                    // for memory ordering reasons.
975                    self.fetch_or(false, order)
976                } else {
977                    // This sets the value to the new one and returns the old one.
978                    self.swap(new, order)
979                };
980                if old == current { Ok(old) } else { Err(old) }
981            }
982        } else {
983            // SAFETY: data races are prevented by atomic intrinsics.
984            match unsafe {
985                atomic_compare_exchange(
986                    self.v.get().cast::<u8>(),
987                    current as u8,
988                    new as u8,
989                    success,
990                    failure,
991                )
992            } {
993                Ok(x) => Ok(x != 0),
994                Err(x) => Err(x != 0),
995            }
996        }
997    }
998
999    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
1000    ///
1001    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
1002    /// comparison succeeds, which can result in more efficient code on some platforms. The
1003    /// return value is a result indicating whether the new value was written and containing the
1004    /// previous value.
1005    ///
1006    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1007    /// ordering of this operation. `success` describes the required ordering for the
1008    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1009    /// `failure` describes the required ordering for the load operation that takes place when
1010    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1011    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1012    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1013    ///
1014    /// **Note:** This method is only available on platforms that support atomic
1015    /// operations on `u8`.
1016    ///
1017    /// # Examples
1018    ///
1019    /// ```
1020    /// use std::sync::atomic::{AtomicBool, Ordering};
1021    ///
1022    /// let val = AtomicBool::new(false);
1023    ///
1024    /// let new = true;
1025    /// let mut old = val.load(Ordering::Relaxed);
1026    /// loop {
1027    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1028    ///         Ok(_) => break,
1029    ///         Err(x) => old = x,
1030    ///     }
1031    /// }
1032    /// ```
1033    ///
1034    /// # Considerations
1035    ///
1036    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
1037    /// of CAS operations. In particular, a load of the value followed by a successful
1038    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
1039    /// changed the value in the interim. This is usually important when the *equality* check in
1040    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
1041    /// does not necessarily imply identity. In this case, `compare_exchange` can lead to the
1042    /// [ABA problem].
1043    ///
1044    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1045    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1046    #[inline]
1047    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1048    #[doc(alias = "compare_and_swap")]
1049    #[cfg(target_has_atomic = "8")]
1050    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1051    #[cfg(not(feature = "ferrocene_subset"))]
1052    #[rustc_should_not_be_called_on_const_items]
1053    pub fn compare_exchange_weak(
1054        &self,
1055        current: bool,
1056        new: bool,
1057        success: Ordering,
1058        failure: Ordering,
1059    ) -> Result<bool, bool> {
1060        if EMULATE_ATOMIC_BOOL {
1061            return self.compare_exchange(current, new, success, failure);
1062        }
1063
1064        // SAFETY: data races are prevented by atomic intrinsics.
1065        match unsafe {
1066            atomic_compare_exchange_weak(
1067                self.v.get().cast::<u8>(),
1068                current as u8,
1069                new as u8,
1070                success,
1071                failure,
1072            )
1073        } {
1074            Ok(x) => Ok(x != 0),
1075            Err(x) => Err(x != 0),
1076        }
1077    }
1078
1079    /// Logical "and" with a boolean value.
1080    ///
1081    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
1082    /// the new value to the result.
1083    ///
1084    /// Returns the previous value.
1085    ///
1086    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
1087    /// of this operation. All ordering modes are possible. Note that using
1088    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1089    /// using [`Release`] makes the load part [`Relaxed`].
1090    ///
1091    /// **Note:** This method is only available on platforms that support atomic
1092    /// operations on `u8`.
1093    ///
1094    /// # Examples
1095    ///
1096    /// ```
1097    /// use std::sync::atomic::{AtomicBool, Ordering};
1098    ///
1099    /// let foo = AtomicBool::new(true);
1100    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
1101    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1102    ///
1103    /// let foo = AtomicBool::new(true);
1104    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
1105    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1106    ///
1107    /// let foo = AtomicBool::new(false);
1108    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
1109    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1110    /// ```
1111    #[inline]
1112    #[stable(feature = "rust1", since = "1.0.0")]
1113    #[cfg(target_has_atomic = "8")]
1114    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1115    #[rustc_should_not_be_called_on_const_items]
1116    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
1117        // SAFETY: data races are prevented by atomic intrinsics.
1118        unsafe { atomic_and(self.v.get().cast::<u8>(), val as u8, order) != 0 }
1119    }
1120
1121    /// Logical "nand" with a boolean value.
1122    ///
1123    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
1124    /// the new value to the result.
1125    ///
1126    /// Returns the previous value.
1127    ///
1128    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
1129    /// of this operation. All ordering modes are possible. Note that using
1130    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1131    /// using [`Release`] makes the load part [`Relaxed`].
1132    ///
1133    /// **Note:** This method is only available on platforms that support atomic
1134    /// operations on `u8`.
1135    ///
1136    /// # Examples
1137    ///
1138    /// ```
1139    /// use std::sync::atomic::{AtomicBool, Ordering};
1140    ///
1141    /// let foo = AtomicBool::new(true);
1142    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
1143    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1144    ///
1145    /// let foo = AtomicBool::new(true);
1146    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
1147    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
1148    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1149    ///
1150    /// let foo = AtomicBool::new(false);
1151    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
1152    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1153    /// ```
1154    #[inline]
1155    #[stable(feature = "rust1", since = "1.0.0")]
1156    #[cfg(target_has_atomic = "8")]
1157    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1158    #[cfg(not(feature = "ferrocene_subset"))]
1159    #[rustc_should_not_be_called_on_const_items]
1160    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
1161        // We can't use atomic_nand here because it can result in a bool with
1162        // an invalid value. This happens because the atomic operation is done
1163        // with an 8-bit integer internally, which would set the upper 7 bits.
1164        // So we just use fetch_xor or swap instead.
1165        if val {
1166            // !(x & true) == !x
1167            // We must invert the bool.
1168            self.fetch_xor(true, order)
1169        } else {
1170            // !(x & false) == true
1171            // We must set the bool to true.
1172            self.swap(true, order)
1173        }
1174    }
1175
1176    /// Logical "or" with a boolean value.
1177    ///
1178    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1179    /// new value to the result.
1180    ///
1181    /// Returns the previous value.
1182    ///
1183    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
1184    /// of this operation. All ordering modes are possible. Note that using
1185    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1186    /// using [`Release`] makes the load part [`Relaxed`].
1187    ///
1188    /// **Note:** This method is only available on platforms that support atomic
1189    /// operations on `u8`.
1190    ///
1191    /// # Examples
1192    ///
1193    /// ```
1194    /// use std::sync::atomic::{AtomicBool, Ordering};
1195    ///
1196    /// let foo = AtomicBool::new(true);
1197    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
1198    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1199    ///
1200    /// let foo = AtomicBool::new(false);
1201    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), false);
1202    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1203    ///
1204    /// let foo = AtomicBool::new(false);
1205    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1206    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1207    /// ```
1208    #[inline]
1209    #[stable(feature = "rust1", since = "1.0.0")]
1210    #[cfg(target_has_atomic = "8")]
1211    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1212    #[rustc_should_not_be_called_on_const_items]
1213    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1214        // SAFETY: data races are prevented by atomic intrinsics.
1215        unsafe { atomic_or(self.v.get().cast::<u8>(), val as u8, order) != 0 }
1216    }
1217
1218    /// Logical "xor" with a boolean value.
1219    ///
1220    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1221    /// the new value to the result.
1222    ///
1223    /// Returns the previous value.
1224    ///
1225    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1226    /// of this operation. All ordering modes are possible. Note that using
1227    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1228    /// using [`Release`] makes the load part [`Relaxed`].
1229    ///
1230    /// **Note:** This method is only available on platforms that support atomic
1231    /// operations on `u8`.
1232    ///
1233    /// # Examples
1234    ///
1235    /// ```
1236    /// use std::sync::atomic::{AtomicBool, Ordering};
1237    ///
1238    /// let foo = AtomicBool::new(true);
1239    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1240    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1241    ///
1242    /// let foo = AtomicBool::new(true);
1243    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1244    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1245    ///
1246    /// let foo = AtomicBool::new(false);
1247    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1248    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1249    /// ```
1250    #[inline]
1251    #[stable(feature = "rust1", since = "1.0.0")]
1252    #[cfg(target_has_atomic = "8")]
1253    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1254    #[cfg(not(feature = "ferrocene_subset"))]
1255    #[rustc_should_not_be_called_on_const_items]
1256    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1257        // SAFETY: data races are prevented by atomic intrinsics.
1258        unsafe { atomic_xor(self.v.get().cast::<u8>(), val as u8, order) != 0 }
1259    }
1260
1261    /// Logical "not" with a boolean value.
1262    ///
1263    /// Performs a logical "not" operation on the current value, and sets
1264    /// the new value to the result.
1265    ///
1266    /// Returns the previous value.
1267    ///
1268    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1269    /// of this operation. All ordering modes are possible. Note that using
1270    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1271    /// using [`Release`] makes the load part [`Relaxed`].
1272    ///
1273    /// **Note:** This method is only available on platforms that support atomic
1274    /// operations on `u8`.
1275    ///
1276    /// # Examples
1277    ///
1278    /// ```
1279    /// use std::sync::atomic::{AtomicBool, Ordering};
1280    ///
1281    /// let foo = AtomicBool::new(true);
1282    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1283    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1284    ///
1285    /// let foo = AtomicBool::new(false);
1286    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1287    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1288    /// ```
1289    #[inline]
1290    #[stable(feature = "atomic_bool_fetch_not", since = "1.81.0")]
1291    #[cfg(target_has_atomic = "8")]
1292    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1293    #[cfg(not(feature = "ferrocene_subset"))]
1294    #[rustc_should_not_be_called_on_const_items]
1295    pub fn fetch_not(&self, order: Ordering) -> bool {
1296        self.fetch_xor(true, order)
1297    }
1298
1299    /// Returns a mutable pointer to the underlying [`bool`].
1300    ///
1301    /// Doing non-atomic reads and writes on the resulting boolean can be a data race.
1302    /// This method is mostly useful for FFI, where the function signature may use
1303    /// `*mut bool` instead of `&AtomicBool`.
1304    ///
1305    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
1306    /// atomic types work with interior mutability. All modifications of an atomic change the value
1307    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
1308    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
1309    /// requirements of the [memory model].
1310    ///
1311    /// # Examples
1312    ///
1313    /// ```ignore (extern-declaration)
1314    /// # fn main() {
1315    /// use std::sync::atomic::AtomicBool;
1316    ///
1317    /// extern "C" {
1318    ///     fn my_atomic_op(arg: *mut bool);
1319    /// }
1320    ///
1321    /// let mut atomic = AtomicBool::new(true);
1322    /// unsafe {
1323    ///     my_atomic_op(atomic.as_ptr());
1324    /// }
1325    /// # }
1326    /// ```
1327    ///
1328    /// [memory model]: self#memory-model-for-atomic-accesses
1329    #[inline]
1330    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
1331    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
1332    #[rustc_never_returns_null_ptr]
1333    #[cfg(not(feature = "ferrocene_subset"))]
1334    #[rustc_should_not_be_called_on_const_items]
1335    pub const fn as_ptr(&self) -> *mut bool {
1336        self.v.get().cast()
1337    }
1338
1339    /// An alias for [`AtomicBool::try_update`].
1340    #[inline]
1341    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1342    #[cfg(target_has_atomic = "8")]
1343    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1344    #[cfg(not(feature = "ferrocene_subset"))]
1345    #[rustc_should_not_be_called_on_const_items]
1346    #[deprecated(
1347        since = "1.99.0",
1348        note = "renamed to `try_update` for consistency",
1349        suggestion = "try_update"
1350    )]
1351    pub fn fetch_update<F>(
1352        &self,
1353        set_order: Ordering,
1354        fetch_order: Ordering,
1355        f: F,
1356    ) -> Result<bool, bool>
1357    where
1358        F: FnMut(bool) -> Option<bool>,
1359    {
1360        self.try_update(set_order, fetch_order, f)
1361    }
1362
1363    /// Fetches the value, and applies a function to it that returns an optional
1364    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1365    /// returned `Some(_)`, else `Err(previous_value)`.
1366    ///
1367    /// See also: [`update`](`AtomicBool::update`).
1368    ///
1369    /// Note: This may call the function multiple times if the value has been
1370    /// changed from other threads in the meantime, as long as the function
1371    /// returns `Some(_)`, but the function will have been applied only once to
1372    /// the stored value.
1373    ///
1374    /// `try_update` takes two [`Ordering`] arguments to describe the memory
1375    /// ordering of this operation. The first describes the required ordering for
1376    /// when the operation finally succeeds while the second describes the
1377    /// required ordering for loads. These correspond to the success and failure
1378    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1379    ///
1380    /// Using [`Acquire`] as success ordering makes the store part of this
1381    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1382    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1383    /// [`Acquire`] or [`Relaxed`].
1384    ///
1385    /// **Note:** This method is only available on platforms that support atomic
1386    /// operations on `u8`.
1387    ///
1388    /// # Considerations
1389    ///
1390    /// This method is not magic; it is not provided by the hardware, and does not act like a
1391    /// critical section or mutex.
1392    ///
1393    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1394    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1395    ///
1396    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1397    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1398    ///
1399    /// # Examples
1400    ///
1401    /// ```rust
1402    /// use std::sync::atomic::{AtomicBool, Ordering};
1403    ///
1404    /// let x = AtomicBool::new(false);
1405    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1406    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1407    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1408    /// assert_eq!(x.load(Ordering::SeqCst), false);
1409    /// ```
1410    #[inline]
1411    #[stable(feature = "atomic_try_update", since = "CURRENT_RUSTC_VERSION")]
1412    #[cfg(target_has_atomic = "8")]
1413    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1414    #[cfg(not(feature = "ferrocene_subset"))]
1415    #[rustc_should_not_be_called_on_const_items]
1416    pub fn try_update(
1417        &self,
1418        set_order: Ordering,
1419        fetch_order: Ordering,
1420        mut f: impl FnMut(bool) -> Option<bool>,
1421    ) -> Result<bool, bool> {
1422        let mut prev = self.load(fetch_order);
1423        while let Some(next) = f(prev) {
1424            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1425                x @ Ok(_) => return x,
1426                Err(next_prev) => prev = next_prev,
1427            }
1428        }
1429        Err(prev)
1430    }
1431
1432    /// Fetches the value, applies a function to it that it return a new value.
1433    /// The new value is stored and the old value is returned.
1434    ///
1435    /// See also: [`try_update`](`AtomicBool::try_update`).
1436    ///
1437    /// Note: This may call the function multiple times if the value has been changed from other threads in
1438    /// the meantime, but the function will have been applied only once to the stored value.
1439    ///
1440    /// `update` takes two [`Ordering`] arguments to describe the memory
1441    /// ordering of this operation. The first describes the required ordering for
1442    /// when the operation finally succeeds while the second describes the
1443    /// required ordering for loads. These correspond to the success and failure
1444    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1445    ///
1446    /// Using [`Acquire`] as success ordering makes the store part
1447    /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
1448    /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1449    ///
1450    /// **Note:** This method is only available on platforms that support atomic operations on `u8`.
1451    ///
1452    /// # Considerations
1453    ///
1454    /// This method is not magic; it is not provided by the hardware, and does not act like a
1455    /// critical section or mutex.
1456    ///
1457    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1458    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1459    ///
1460    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1461    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1462    ///
1463    /// # Examples
1464    ///
1465    /// ```rust
1466    ///
1467    /// use std::sync::atomic::{AtomicBool, Ordering};
1468    ///
1469    /// let x = AtomicBool::new(false);
1470    /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), false);
1471    /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), true);
1472    /// assert_eq!(x.load(Ordering::SeqCst), false);
1473    /// ```
1474    #[inline]
1475    #[stable(feature = "atomic_try_update", since = "CURRENT_RUSTC_VERSION")]
1476    #[cfg(target_has_atomic = "8")]
1477    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1478    #[cfg(not(feature = "ferrocene_subset"))]
1479    #[rustc_should_not_be_called_on_const_items]
1480    pub fn update(
1481        &self,
1482        set_order: Ordering,
1483        fetch_order: Ordering,
1484        mut f: impl FnMut(bool) -> bool,
1485    ) -> bool {
1486        let mut prev = self.load(fetch_order);
1487        loop {
1488            match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
1489                Ok(x) => break x,
1490                Err(next_prev) => prev = next_prev,
1491            }
1492        }
1493    }
1494}
1495
1496#[cfg(target_has_atomic_load_store = "ptr")]
1497#[cfg(not(feature = "ferrocene_subset"))]
1498impl<T> AtomicPtr<T> {
1499    /// Creates a new `AtomicPtr`.
1500    ///
1501    /// # Examples
1502    ///
1503    /// ```
1504    /// use std::sync::atomic::AtomicPtr;
1505    ///
1506    /// let ptr = &mut 5;
1507    /// let atomic_ptr = AtomicPtr::new(ptr);
1508    /// ```
1509    #[inline]
1510    #[stable(feature = "rust1", since = "1.0.0")]
1511    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
1512    pub const fn new(p: *mut T) -> AtomicPtr<T> {
1513        // SAFETY:
1514        // `Atomic<T>` is essentially a transparent wrapper around `T`.
1515        unsafe { transmute(p) }
1516    }
1517
1518    /// Creates a new `AtomicPtr` from a pointer.
1519    ///
1520    /// # Examples
1521    ///
1522    /// ```
1523    /// use std::sync::atomic::{self, AtomicPtr};
1524    ///
1525    /// // Get a pointer to an allocated value
1526    /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
1527    ///
1528    /// assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
1529    ///
1530    /// {
1531    ///     // Create an atomic view of the allocated value
1532    ///     let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
1533    ///
1534    ///     // Use `atomic` for atomic operations, possibly share it with other threads
1535    ///     atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
1536    /// }
1537    ///
1538    /// // It's ok to non-atomically access the value behind `ptr`,
1539    /// // since the reference to the atomic ended its lifetime in the block above
1540    /// assert!(!unsafe { *ptr }.is_null());
1541    ///
1542    /// // Deallocate the value
1543    /// unsafe { drop(Box::from_raw(ptr)) }
1544    /// ```
1545    ///
1546    /// # Safety
1547    ///
1548    /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1549    ///   can be bigger than `align_of::<*mut T>()`).
1550    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1551    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
1552    ///   allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
1553    ///   sizes, without synchronization.
1554    ///
1555    /// [valid]: crate::ptr#safety
1556    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
1557    #[inline]
1558    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
1559    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
1560    pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
1561        // SAFETY: guaranteed by the caller
1562        unsafe { &*ptr.cast() }
1563    }
1564
1565    /// Creates a new `AtomicPtr` initialized with a null pointer.
1566    ///
1567    /// # Examples
1568    ///
1569    /// ```
1570    /// #![feature(atomic_ptr_null)]
1571    /// use std::sync::atomic::{AtomicPtr, Ordering};
1572    ///
1573    /// let atomic_ptr = AtomicPtr::<()>::null();
1574    /// assert!(atomic_ptr.load(Ordering::Relaxed).is_null());
1575    /// ```
1576    #[inline]
1577    #[must_use]
1578    #[unstable(feature = "atomic_ptr_null", issue = "150733")]
1579    pub const fn null() -> AtomicPtr<T> {
1580        AtomicPtr::new(crate::ptr::null_mut())
1581    }
1582
1583    /// Returns a mutable reference to the underlying pointer.
1584    ///
1585    /// This is safe because the mutable reference guarantees that no other threads are
1586    /// concurrently accessing the atomic data.
1587    ///
1588    /// # Examples
1589    ///
1590    /// ```
1591    /// use std::sync::atomic::{AtomicPtr, Ordering};
1592    ///
1593    /// let mut data = 10;
1594    /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1595    /// let mut other_data = 5;
1596    /// *atomic_ptr.get_mut() = &mut other_data;
1597    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1598    /// ```
1599    #[inline]
1600    #[stable(feature = "atomic_access", since = "1.15.0")]
1601    pub fn get_mut(&mut self) -> &mut *mut T {
1602        // SAFETY:
1603        // `Atomic<T>` is essentially a transparent wrapper around `T`.
1604        unsafe { &mut *self.as_ptr() }
1605    }
1606
1607    /// Gets atomic access to a pointer.
1608    ///
1609    /// **Note:** This function is only available on targets where `AtomicPtr<T>` has the same alignment as `*const T`
1610    ///
1611    /// # Examples
1612    ///
1613    /// ```
1614    /// #![feature(atomic_from_mut)]
1615    /// use std::sync::atomic::{AtomicPtr, Ordering};
1616    ///
1617    /// let mut data = 123;
1618    /// let mut some_ptr = &mut data as *mut i32;
1619    /// let a = AtomicPtr::from_mut(&mut some_ptr);
1620    /// let mut other_data = 456;
1621    /// a.store(&mut other_data, Ordering::Relaxed);
1622    /// assert_eq!(unsafe { *some_ptr }, 456);
1623    /// ```
1624    #[inline]
1625    #[cfg(target_has_atomic_equal_alignment = "ptr")]
1626    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1627    pub fn from_mut(v: &mut *mut T) -> &mut Self {
1628        let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
1629        // SAFETY:
1630        //  - the mutable reference guarantees unique ownership.
1631        //  - the alignment of `*mut T` and `Self` is the same on all platforms
1632        //    supported by rust, as verified above.
1633        unsafe { &mut *(v as *mut *mut T as *mut Self) }
1634    }
1635
1636    /// Gets non-atomic access to a `&mut [AtomicPtr]` slice.
1637    ///
1638    /// This is safe because the mutable reference guarantees that no other threads are
1639    /// concurrently accessing the atomic data.
1640    ///
1641    /// # Examples
1642    ///
1643    /// ```ignore-wasm
1644    /// #![feature(atomic_from_mut)]
1645    /// use std::ptr::null_mut;
1646    /// use std::sync::atomic::{AtomicPtr, Ordering};
1647    ///
1648    /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
1649    ///
1650    /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
1651    /// assert_eq!(view, [null_mut::<String>(); 10]);
1652    /// view
1653    ///     .iter_mut()
1654    ///     .enumerate()
1655    ///     .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
1656    ///
1657    /// std::thread::scope(|s| {
1658    ///     for ptr in &some_ptrs {
1659    ///         s.spawn(move || {
1660    ///             let ptr = ptr.load(Ordering::Relaxed);
1661    ///             assert!(!ptr.is_null());
1662    ///
1663    ///             let name = unsafe { Box::from_raw(ptr) };
1664    ///             println!("Hello, {name}!");
1665    ///         });
1666    ///     }
1667    /// });
1668    /// ```
1669    #[inline]
1670    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1671    pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
1672        // SAFETY: the mutable reference guarantees unique ownership.
1673        unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
1674    }
1675
1676    /// Gets atomic access to a slice of pointers.
1677    ///
1678    /// **Note:** This function is only available on targets where `AtomicPtr<T>` has the same alignment as `*const T`
1679    ///
1680    /// # Examples
1681    ///
1682    /// ```ignore-wasm
1683    /// #![feature(atomic_from_mut)]
1684    /// use std::ptr::null_mut;
1685    /// use std::sync::atomic::{AtomicPtr, Ordering};
1686    ///
1687    /// let mut some_ptrs = [null_mut::<String>(); 10];
1688    /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
1689    /// std::thread::scope(|s| {
1690    ///     for i in 0..a.len() {
1691    ///         s.spawn(move || {
1692    ///             let name = Box::new(format!("thread{i}"));
1693    ///             a[i].store(Box::into_raw(name), Ordering::Relaxed);
1694    ///         });
1695    ///     }
1696    /// });
1697    /// for p in some_ptrs {
1698    ///     assert!(!p.is_null());
1699    ///     let name = unsafe { Box::from_raw(p) };
1700    ///     println!("Hello, {name}!");
1701    /// }
1702    /// ```
1703    #[inline]
1704    #[cfg(target_has_atomic_equal_alignment = "ptr")]
1705    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1706    pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
1707        // SAFETY:
1708        //  - the mutable reference guarantees unique ownership.
1709        //  - the alignment of `*mut T` and `Self` is the same on all platforms
1710        //    supported by rust, as verified above.
1711        unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
1712    }
1713
1714    /// Consumes the atomic and returns the contained value.
1715    ///
1716    /// This is safe because passing `self` by value guarantees that no other threads are
1717    /// concurrently accessing the atomic data.
1718    ///
1719    /// # Examples
1720    ///
1721    /// ```
1722    /// use std::sync::atomic::AtomicPtr;
1723    ///
1724    /// let mut data = 5;
1725    /// let atomic_ptr = AtomicPtr::new(&mut data);
1726    /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1727    /// ```
1728    #[inline]
1729    #[stable(feature = "atomic_access", since = "1.15.0")]
1730    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
1731    pub const fn into_inner(self) -> *mut T {
1732        // SAFETY:
1733        // `Atomic<T>` is essentially a transparent wrapper around `T`.
1734        unsafe { transmute(self) }
1735    }
1736
1737    /// Loads a value from the pointer.
1738    ///
1739    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1740    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1741    ///
1742    /// # Panics
1743    ///
1744    /// Panics if `order` is [`Release`] or [`AcqRel`].
1745    ///
1746    /// # Examples
1747    ///
1748    /// ```
1749    /// use std::sync::atomic::{AtomicPtr, Ordering};
1750    ///
1751    /// let ptr = &mut 5;
1752    /// let some_ptr = AtomicPtr::new(ptr);
1753    ///
1754    /// let value = some_ptr.load(Ordering::Relaxed);
1755    /// ```
1756    #[inline]
1757    #[stable(feature = "rust1", since = "1.0.0")]
1758    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1759    pub fn load(&self, order: Ordering) -> *mut T {
1760        // SAFETY: data races are prevented by atomic intrinsics.
1761        unsafe { atomic_load(self.as_ptr(), order) }
1762    }
1763
1764    /// Stores a value into the pointer.
1765    ///
1766    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1767    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1768    ///
1769    /// # Panics
1770    ///
1771    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1772    ///
1773    /// # Examples
1774    ///
1775    /// ```
1776    /// use std::sync::atomic::{AtomicPtr, Ordering};
1777    ///
1778    /// let ptr = &mut 5;
1779    /// let some_ptr = AtomicPtr::new(ptr);
1780    ///
1781    /// let other_ptr = &mut 10;
1782    ///
1783    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1784    /// ```
1785    #[inline]
1786    #[stable(feature = "rust1", since = "1.0.0")]
1787    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1788    #[rustc_should_not_be_called_on_const_items]
1789    pub fn store(&self, ptr: *mut T, order: Ordering) {
1790        // SAFETY: data races are prevented by atomic intrinsics.
1791        unsafe {
1792            atomic_store(self.as_ptr(), ptr, order);
1793        }
1794    }
1795
1796    /// Stores a value into the pointer, returning the previous value.
1797    ///
1798    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1799    /// of this operation. All ordering modes are possible. Note that using
1800    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1801    /// using [`Release`] makes the load part [`Relaxed`].
1802    ///
1803    /// **Note:** This method is only available on platforms that support atomic
1804    /// operations on pointers.
1805    ///
1806    /// # Examples
1807    ///
1808    /// ```
1809    /// use std::sync::atomic::{AtomicPtr, Ordering};
1810    ///
1811    /// let ptr = &mut 5;
1812    /// let some_ptr = AtomicPtr::new(ptr);
1813    ///
1814    /// let other_ptr = &mut 10;
1815    ///
1816    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1817    /// ```
1818    #[inline]
1819    #[stable(feature = "rust1", since = "1.0.0")]
1820    #[cfg(target_has_atomic = "ptr")]
1821    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1822    #[rustc_should_not_be_called_on_const_items]
1823    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1824        // SAFETY: data races are prevented by atomic intrinsics.
1825        unsafe { atomic_swap(self.as_ptr(), ptr, order) }
1826    }
1827
1828    /// Stores a value into the pointer if the current value is the same as the `current` value.
1829    ///
1830    /// The return value is always the previous value. If it is equal to `current`, then the value
1831    /// was updated.
1832    ///
1833    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1834    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1835    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1836    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1837    /// happens, and using [`Release`] makes the load part [`Relaxed`].
1838    ///
1839    /// **Note:** This method is only available on platforms that support atomic
1840    /// operations on pointers.
1841    ///
1842    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1843    ///
1844    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1845    /// memory orderings:
1846    ///
1847    /// Original | Success | Failure
1848    /// -------- | ------- | -------
1849    /// Relaxed  | Relaxed | Relaxed
1850    /// Acquire  | Acquire | Acquire
1851    /// Release  | Release | Relaxed
1852    /// AcqRel   | AcqRel  | Acquire
1853    /// SeqCst   | SeqCst  | SeqCst
1854    ///
1855    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
1856    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
1857    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
1858    /// rather than to infer success vs failure based on the value that was read.
1859    ///
1860    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
1861    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1862    /// which allows the compiler to generate better assembly code when the compare and swap
1863    /// is used in a loop.
1864    ///
1865    /// # Examples
1866    ///
1867    /// ```
1868    /// use std::sync::atomic::{AtomicPtr, Ordering};
1869    ///
1870    /// let ptr = &mut 5;
1871    /// let some_ptr = AtomicPtr::new(ptr);
1872    ///
1873    /// let other_ptr = &mut 10;
1874    ///
1875    /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1876    /// ```
1877    #[inline]
1878    #[stable(feature = "rust1", since = "1.0.0")]
1879    #[deprecated(
1880        since = "1.50.0",
1881        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1882    )]
1883    #[cfg(target_has_atomic = "ptr")]
1884    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1885    #[rustc_should_not_be_called_on_const_items]
1886    pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
1887        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1888            Ok(x) => x,
1889            Err(x) => x,
1890        }
1891    }
1892
1893    /// Stores a value into the pointer if the current value is the same as the `current` value.
1894    ///
1895    /// The return value is a result indicating whether the new value was written and containing
1896    /// the previous value. On success this value is guaranteed to be equal to `current`.
1897    ///
1898    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1899    /// ordering of this operation. `success` describes the required ordering for the
1900    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1901    /// `failure` describes the required ordering for the load operation that takes place when
1902    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1903    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1904    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1905    ///
1906    /// **Note:** This method is only available on platforms that support atomic
1907    /// operations on pointers.
1908    ///
1909    /// # Examples
1910    ///
1911    /// ```
1912    /// use std::sync::atomic::{AtomicPtr, Ordering};
1913    ///
1914    /// let ptr = &mut 5;
1915    /// let some_ptr = AtomicPtr::new(ptr);
1916    ///
1917    /// let other_ptr = &mut 10;
1918    ///
1919    /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1920    ///                                       Ordering::SeqCst, Ordering::Relaxed);
1921    /// ```
1922    ///
1923    /// # Considerations
1924    ///
1925    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
1926    /// of CAS operations. In particular, a load of the value followed by a successful
1927    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
1928    /// changed the value in the interim. This is usually important when the *equality* check in
1929    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
1930    /// does not necessarily imply identity. This is a particularly common case for pointers, as
1931    /// a pointer holding the same address does not imply that the same object exists at that
1932    /// address! In this case, `compare_exchange` can lead to the [ABA problem].
1933    ///
1934    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1935    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1936    #[inline]
1937    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1938    #[cfg(target_has_atomic = "ptr")]
1939    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1940    #[rustc_should_not_be_called_on_const_items]
1941    pub fn compare_exchange(
1942        &self,
1943        current: *mut T,
1944        new: *mut T,
1945        success: Ordering,
1946        failure: Ordering,
1947    ) -> Result<*mut T, *mut T> {
1948        // SAFETY: data races are prevented by atomic intrinsics.
1949        unsafe { atomic_compare_exchange(self.as_ptr(), current, new, success, failure) }
1950    }
1951
1952    /// Stores a value into the pointer if the current value is the same as the `current` value.
1953    ///
1954    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1955    /// comparison succeeds, which can result in more efficient code on some platforms. The
1956    /// return value is a result indicating whether the new value was written and containing the
1957    /// previous value.
1958    ///
1959    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1960    /// ordering of this operation. `success` describes the required ordering for the
1961    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1962    /// `failure` describes the required ordering for the load operation that takes place when
1963    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1964    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1965    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1966    ///
1967    /// **Note:** This method is only available on platforms that support atomic
1968    /// operations on pointers.
1969    ///
1970    /// # Examples
1971    ///
1972    /// ```
1973    /// use std::sync::atomic::{AtomicPtr, Ordering};
1974    ///
1975    /// let some_ptr = AtomicPtr::new(&mut 5);
1976    ///
1977    /// let new = &mut 10;
1978    /// let mut old = some_ptr.load(Ordering::Relaxed);
1979    /// loop {
1980    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1981    ///         Ok(_) => break,
1982    ///         Err(x) => old = x,
1983    ///     }
1984    /// }
1985    /// ```
1986    ///
1987    /// # Considerations
1988    ///
1989    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
1990    /// of CAS operations. In particular, a load of the value followed by a successful
1991    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
1992    /// changed the value in the interim. This is usually important when the *equality* check in
1993    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
1994    /// does not necessarily imply identity. This is a particularly common case for pointers, as
1995    /// a pointer holding the same address does not imply that the same object exists at that
1996    /// address! In this case, `compare_exchange` can lead to the [ABA problem].
1997    ///
1998    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1999    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2000    #[inline]
2001    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
2002    #[cfg(target_has_atomic = "ptr")]
2003    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2004    #[rustc_should_not_be_called_on_const_items]
2005    pub fn compare_exchange_weak(
2006        &self,
2007        current: *mut T,
2008        new: *mut T,
2009        success: Ordering,
2010        failure: Ordering,
2011    ) -> Result<*mut T, *mut T> {
2012        // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
2013        // but we know for sure that the pointer is valid (we just got it from
2014        // an `UnsafeCell` that we have by reference) and the atomic operation
2015        // itself allows us to safely mutate the `UnsafeCell` contents.
2016        unsafe { atomic_compare_exchange_weak(self.as_ptr(), current, new, success, failure) }
2017    }
2018
2019    /// An alias for [`AtomicPtr::try_update`].
2020    #[inline]
2021    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
2022    #[cfg(target_has_atomic = "ptr")]
2023    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2024    #[rustc_should_not_be_called_on_const_items]
2025    #[deprecated(
2026        since = "1.99.0",
2027        note = "renamed to `try_update` for consistency",
2028        suggestion = "try_update"
2029    )]
2030    pub fn fetch_update<F>(
2031        &self,
2032        set_order: Ordering,
2033        fetch_order: Ordering,
2034        f: F,
2035    ) -> Result<*mut T, *mut T>
2036    where
2037        F: FnMut(*mut T) -> Option<*mut T>,
2038    {
2039        self.try_update(set_order, fetch_order, f)
2040    }
2041    /// Fetches the value, and applies a function to it that returns an optional
2042    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2043    /// returned `Some(_)`, else `Err(previous_value)`.
2044    ///
2045    /// See also: [`update`](`AtomicPtr::update`).
2046    ///
2047    /// Note: This may call the function multiple times if the value has been
2048    /// changed from other threads in the meantime, as long as the function
2049    /// returns `Some(_)`, but the function will have been applied only once to
2050    /// the stored value.
2051    ///
2052    /// `try_update` takes two [`Ordering`] arguments to describe the memory
2053    /// ordering of this operation. The first describes the required ordering for
2054    /// when the operation finally succeeds while the second describes the
2055    /// required ordering for loads. These correspond to the success and failure
2056    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2057    ///
2058    /// Using [`Acquire`] as success ordering makes the store part of this
2059    /// operation [`Relaxed`], and using [`Release`] makes the final successful
2060    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2061    /// [`Acquire`] or [`Relaxed`].
2062    ///
2063    /// **Note:** This method is only available on platforms that support atomic
2064    /// operations on pointers.
2065    ///
2066    /// # Considerations
2067    ///
2068    /// This method is not magic; it is not provided by the hardware, and does not act like a
2069    /// critical section or mutex.
2070    ///
2071    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2072    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2073    /// which is a particularly common pitfall for pointers!
2074    ///
2075    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2076    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2077    ///
2078    /// # Examples
2079    ///
2080    /// ```rust
2081    /// use std::sync::atomic::{AtomicPtr, Ordering};
2082    ///
2083    /// let ptr: *mut _ = &mut 5;
2084    /// let some_ptr = AtomicPtr::new(ptr);
2085    ///
2086    /// let new: *mut _ = &mut 10;
2087    /// assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2088    /// let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2089    ///     if x == ptr {
2090    ///         Some(new)
2091    ///     } else {
2092    ///         None
2093    ///     }
2094    /// });
2095    /// assert_eq!(result, Ok(ptr));
2096    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2097    /// ```
2098    #[inline]
2099    #[stable(feature = "atomic_try_update", since = "CURRENT_RUSTC_VERSION")]
2100    #[cfg(target_has_atomic = "ptr")]
2101    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2102    #[rustc_should_not_be_called_on_const_items]
2103    pub fn try_update(
2104        &self,
2105        set_order: Ordering,
2106        fetch_order: Ordering,
2107        mut f: impl FnMut(*mut T) -> Option<*mut T>,
2108    ) -> Result<*mut T, *mut T> {
2109        let mut prev = self.load(fetch_order);
2110        while let Some(next) = f(prev) {
2111            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2112                x @ Ok(_) => return x,
2113                Err(next_prev) => prev = next_prev,
2114            }
2115        }
2116        Err(prev)
2117    }
2118
2119    /// Fetches the value, applies a function to it that it return a new value.
2120    /// The new value is stored and the old value is returned.
2121    ///
2122    /// See also: [`try_update`](`AtomicPtr::try_update`).
2123    ///
2124    /// Note: This may call the function multiple times if the value has been changed from other threads in
2125    /// the meantime, but the function will have been applied only once to the stored value.
2126    ///
2127    /// `update` takes two [`Ordering`] arguments to describe the memory
2128    /// ordering of this operation. The first describes the required ordering for
2129    /// when the operation finally succeeds while the second describes the
2130    /// required ordering for loads. These correspond to the success and failure
2131    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2132    ///
2133    /// Using [`Acquire`] as success ordering makes the store part
2134    /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
2135    /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2136    ///
2137    /// **Note:** This method is only available on platforms that support atomic
2138    /// operations on pointers.
2139    ///
2140    /// # Considerations
2141    ///
2142    /// This method is not magic; it is not provided by the hardware, and does not act like a
2143    /// critical section or mutex.
2144    ///
2145    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2146    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2147    /// which is a particularly common pitfall for pointers!
2148    ///
2149    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2150    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2151    ///
2152    /// # Examples
2153    ///
2154    /// ```rust
2155    ///
2156    /// use std::sync::atomic::{AtomicPtr, Ordering};
2157    ///
2158    /// let ptr: *mut _ = &mut 5;
2159    /// let some_ptr = AtomicPtr::new(ptr);
2160    ///
2161    /// let new: *mut _ = &mut 10;
2162    /// let result = some_ptr.update(Ordering::SeqCst, Ordering::SeqCst, |_| new);
2163    /// assert_eq!(result, ptr);
2164    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2165    /// ```
2166    #[inline]
2167    #[stable(feature = "atomic_try_update", since = "CURRENT_RUSTC_VERSION")]
2168    #[cfg(target_has_atomic = "8")]
2169    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2170    #[rustc_should_not_be_called_on_const_items]
2171    pub fn update(
2172        &self,
2173        set_order: Ordering,
2174        fetch_order: Ordering,
2175        mut f: impl FnMut(*mut T) -> *mut T,
2176    ) -> *mut T {
2177        let mut prev = self.load(fetch_order);
2178        loop {
2179            match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
2180                Ok(x) => break x,
2181                Err(next_prev) => prev = next_prev,
2182            }
2183        }
2184    }
2185
2186    /// Offsets the pointer's address by adding `val` (in units of `T`),
2187    /// returning the previous pointer.
2188    ///
2189    /// This is equivalent to using [`wrapping_add`] to atomically perform the
2190    /// equivalent of `ptr = ptr.wrapping_add(val);`.
2191    ///
2192    /// This method operates in units of `T`, which means that it cannot be used
2193    /// to offset the pointer by an amount which is not a multiple of
2194    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2195    /// work with a deliberately misaligned pointer. In such cases, you may use
2196    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2197    ///
2198    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2199    /// memory ordering of this operation. All ordering modes are possible. Note
2200    /// that using [`Acquire`] makes the store part of this operation
2201    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2202    ///
2203    /// **Note**: This method is only available on platforms that support atomic
2204    /// operations on [`AtomicPtr`].
2205    ///
2206    /// [`wrapping_add`]: pointer::wrapping_add
2207    ///
2208    /// # Examples
2209    ///
2210    /// ```
2211    /// use core::sync::atomic::{AtomicPtr, Ordering};
2212    ///
2213    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2214    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2215    /// // Note: units of `size_of::<i64>()`.
2216    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2217    /// ```
2218    #[inline]
2219    #[cfg(target_has_atomic = "ptr")]
2220    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2221    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2222    #[rustc_should_not_be_called_on_const_items]
2223    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2224        self.fetch_byte_add(val.wrapping_mul(size_of::<T>()), order)
2225    }
2226
2227    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2228    /// returning the previous pointer.
2229    ///
2230    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2231    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2232    ///
2233    /// This method operates in units of `T`, which means that it cannot be used
2234    /// to offset the pointer by an amount which is not a multiple of
2235    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2236    /// work with a deliberately misaligned pointer. In such cases, you may use
2237    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2238    ///
2239    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2240    /// ordering of this operation. All ordering modes are possible. Note that
2241    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2242    /// and using [`Release`] makes the load part [`Relaxed`].
2243    ///
2244    /// **Note**: This method is only available on platforms that support atomic
2245    /// operations on [`AtomicPtr`].
2246    ///
2247    /// [`wrapping_sub`]: pointer::wrapping_sub
2248    ///
2249    /// # Examples
2250    ///
2251    /// ```
2252    /// use core::sync::atomic::{AtomicPtr, Ordering};
2253    ///
2254    /// let array = [1i32, 2i32];
2255    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2256    ///
2257    /// assert!(core::ptr::eq(
2258    ///     atom.fetch_ptr_sub(1, Ordering::Relaxed),
2259    ///     &array[1],
2260    /// ));
2261    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2262    /// ```
2263    #[inline]
2264    #[cfg(target_has_atomic = "ptr")]
2265    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2266    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2267    #[rustc_should_not_be_called_on_const_items]
2268    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2269        self.fetch_byte_sub(val.wrapping_mul(size_of::<T>()), order)
2270    }
2271
2272    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2273    /// previous pointer.
2274    ///
2275    /// This is equivalent to using [`wrapping_byte_add`] to atomically
2276    /// perform `ptr = ptr.wrapping_byte_add(val)`.
2277    ///
2278    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2279    /// memory ordering of this operation. All ordering modes are possible. Note
2280    /// that using [`Acquire`] makes the store part of this operation
2281    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2282    ///
2283    /// **Note**: This method is only available on platforms that support atomic
2284    /// operations on [`AtomicPtr`].
2285    ///
2286    /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
2287    ///
2288    /// # Examples
2289    ///
2290    /// ```
2291    /// use core::sync::atomic::{AtomicPtr, Ordering};
2292    ///
2293    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2294    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2295    /// // Note: in units of bytes, not `size_of::<i64>()`.
2296    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2297    /// ```
2298    #[inline]
2299    #[cfg(target_has_atomic = "ptr")]
2300    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2301    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2302    #[rustc_should_not_be_called_on_const_items]
2303    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2304        // SAFETY: data races are prevented by atomic intrinsics.
2305        unsafe { atomic_add(self.as_ptr(), val, order).cast() }
2306    }
2307
2308    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2309    /// previous pointer.
2310    ///
2311    /// This is equivalent to using [`wrapping_byte_sub`] to atomically
2312    /// perform `ptr = ptr.wrapping_byte_sub(val)`.
2313    ///
2314    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2315    /// memory ordering of this operation. All ordering modes are possible. Note
2316    /// that using [`Acquire`] makes the store part of this operation
2317    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2318    ///
2319    /// **Note**: This method is only available on platforms that support atomic
2320    /// operations on [`AtomicPtr`].
2321    ///
2322    /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
2323    ///
2324    /// # Examples
2325    ///
2326    /// ```
2327    /// use core::sync::atomic::{AtomicPtr, Ordering};
2328    ///
2329    /// let mut arr = [0i64, 1];
2330    /// let atom = AtomicPtr::<i64>::new(&raw mut arr[1]);
2331    /// assert_eq!(atom.fetch_byte_sub(8, Ordering::Relaxed).addr(), (&raw const arr[1]).addr());
2332    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), (&raw const arr[0]).addr());
2333    /// ```
2334    #[inline]
2335    #[cfg(target_has_atomic = "ptr")]
2336    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2337    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2338    #[rustc_should_not_be_called_on_const_items]
2339    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2340        // SAFETY: data races are prevented by atomic intrinsics.
2341        unsafe { atomic_sub(self.as_ptr(), val, order).cast() }
2342    }
2343
2344    /// Performs a bitwise "or" operation on the address of the current pointer,
2345    /// and the argument `val`, and stores a pointer with provenance of the
2346    /// current pointer and the resulting address.
2347    ///
2348    /// This is equivalent to using [`map_addr`] to atomically perform
2349    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2350    /// pointer schemes to atomically set tag bits.
2351    ///
2352    /// **Caveat**: This operation returns the previous value. To compute the
2353    /// stored value without losing provenance, you may use [`map_addr`]. For
2354    /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2355    ///
2356    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2357    /// ordering of this operation. All ordering modes are possible. Note that
2358    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2359    /// and using [`Release`] makes the load part [`Relaxed`].
2360    ///
2361    /// **Note**: This method is only available on platforms that support atomic
2362    /// operations on [`AtomicPtr`].
2363    ///
2364    /// This API and its claimed semantics are part of the Strict Provenance
2365    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2366    /// details.
2367    ///
2368    /// [`map_addr`]: pointer::map_addr
2369    ///
2370    /// # Examples
2371    ///
2372    /// ```
2373    /// use core::sync::atomic::{AtomicPtr, Ordering};
2374    ///
2375    /// let pointer = &mut 3i64 as *mut i64;
2376    ///
2377    /// let atom = AtomicPtr::<i64>::new(pointer);
2378    /// // Tag the bottom bit of the pointer.
2379    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2380    /// // Extract and untag.
2381    /// let tagged = atom.load(Ordering::Relaxed);
2382    /// assert_eq!(tagged.addr() & 1, 1);
2383    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2384    /// ```
2385    #[inline]
2386    #[cfg(target_has_atomic = "ptr")]
2387    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2388    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2389    #[rustc_should_not_be_called_on_const_items]
2390    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2391        // SAFETY: data races are prevented by atomic intrinsics.
2392        unsafe { atomic_or(self.as_ptr(), val, order).cast() }
2393    }
2394
2395    /// Performs a bitwise "and" operation on the address of the current
2396    /// pointer, and the argument `val`, and stores a pointer with provenance of
2397    /// the current pointer and the resulting address.
2398    ///
2399    /// This is equivalent to using [`map_addr`] to atomically perform
2400    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2401    /// pointer schemes to atomically unset tag bits.
2402    ///
2403    /// **Caveat**: This operation returns the previous value. To compute the
2404    /// stored value without losing provenance, you may use [`map_addr`]. For
2405    /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2406    ///
2407    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2408    /// ordering of this operation. All ordering modes are possible. Note that
2409    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2410    /// and using [`Release`] makes the load part [`Relaxed`].
2411    ///
2412    /// **Note**: This method is only available on platforms that support atomic
2413    /// operations on [`AtomicPtr`].
2414    ///
2415    /// This API and its claimed semantics are part of the Strict Provenance
2416    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2417    /// details.
2418    ///
2419    /// [`map_addr`]: pointer::map_addr
2420    ///
2421    /// # Examples
2422    ///
2423    /// ```
2424    /// use core::sync::atomic::{AtomicPtr, Ordering};
2425    ///
2426    /// let pointer = &mut 3i64 as *mut i64;
2427    /// // A tagged pointer
2428    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2429    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2430    /// // Untag, and extract the previously tagged pointer.
2431    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
2432    ///     .map_addr(|a| a & !1);
2433    /// assert_eq!(untagged, pointer);
2434    /// ```
2435    #[inline]
2436    #[cfg(target_has_atomic = "ptr")]
2437    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2438    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2439    #[rustc_should_not_be_called_on_const_items]
2440    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2441        // SAFETY: data races are prevented by atomic intrinsics.
2442        unsafe { atomic_and(self.as_ptr(), val, order).cast() }
2443    }
2444
2445    /// Performs a bitwise "xor" operation on the address of the current
2446    /// pointer, and the argument `val`, and stores a pointer with provenance of
2447    /// the current pointer and the resulting address.
2448    ///
2449    /// This is equivalent to using [`map_addr`] to atomically perform
2450    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2451    /// pointer schemes to atomically toggle tag bits.
2452    ///
2453    /// **Caveat**: This operation returns the previous value. To compute the
2454    /// stored value without losing provenance, you may use [`map_addr`]. For
2455    /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2456    ///
2457    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2458    /// ordering of this operation. All ordering modes are possible. Note that
2459    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2460    /// and using [`Release`] makes the load part [`Relaxed`].
2461    ///
2462    /// **Note**: This method is only available on platforms that support atomic
2463    /// operations on [`AtomicPtr`].
2464    ///
2465    /// This API and its claimed semantics are part of the Strict Provenance
2466    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2467    /// details.
2468    ///
2469    /// [`map_addr`]: pointer::map_addr
2470    ///
2471    /// # Examples
2472    ///
2473    /// ```
2474    /// use core::sync::atomic::{AtomicPtr, Ordering};
2475    ///
2476    /// let pointer = &mut 3i64 as *mut i64;
2477    /// let atom = AtomicPtr::<i64>::new(pointer);
2478    ///
2479    /// // Toggle a tag bit on the pointer.
2480    /// atom.fetch_xor(1, Ordering::Relaxed);
2481    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2482    /// ```
2483    #[inline]
2484    #[cfg(target_has_atomic = "ptr")]
2485    #[stable(feature = "strict_provenance_atomic_ptr", since = "1.91.0")]
2486    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2487    #[rustc_should_not_be_called_on_const_items]
2488    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2489        // SAFETY: data races are prevented by atomic intrinsics.
2490        unsafe { atomic_xor(self.as_ptr(), val, order).cast() }
2491    }
2492
2493    /// Returns a mutable pointer to the underlying pointer.
2494    ///
2495    /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
2496    /// This method is mostly useful for FFI, where the function signature may use
2497    /// `*mut *mut T` instead of `&AtomicPtr<T>`.
2498    ///
2499    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2500    /// atomic types work with interior mutability. All modifications of an atomic change the value
2501    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2502    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
2503    /// requirements of the [memory model].
2504    ///
2505    /// # Examples
2506    ///
2507    /// ```ignore (extern-declaration)
2508    /// use std::sync::atomic::AtomicPtr;
2509    ///
2510    /// extern "C" {
2511    ///     fn my_atomic_op(arg: *mut *mut u32);
2512    /// }
2513    ///
2514    /// let mut value = 17;
2515    /// let atomic = AtomicPtr::new(&mut value);
2516    ///
2517    /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2518    /// unsafe {
2519    ///     my_atomic_op(atomic.as_ptr());
2520    /// }
2521    /// ```
2522    ///
2523    /// [memory model]: self#memory-model-for-atomic-accesses
2524    #[inline]
2525    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2526    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2527    #[rustc_never_returns_null_ptr]
2528    pub const fn as_ptr(&self) -> *mut *mut T {
2529        self.v.get().cast()
2530    }
2531}
2532
2533#[cfg(target_has_atomic_load_store = "8")]
2534#[stable(feature = "atomic_bool_from", since = "1.24.0")]
2535#[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2536#[cfg(not(feature = "ferrocene_subset"))]
2537impl const From<bool> for AtomicBool {
2538    /// Converts a `bool` into an `AtomicBool`.
2539    ///
2540    /// # Examples
2541    ///
2542    /// ```
2543    /// use std::sync::atomic::AtomicBool;
2544    /// let atomic_bool = AtomicBool::from(true);
2545    /// assert_eq!(format!("{atomic_bool:?}"), "true")
2546    /// ```
2547    #[inline]
2548    fn from(b: bool) -> Self {
2549        Self::new(b)
2550    }
2551}
2552
2553#[cfg(target_has_atomic_load_store = "ptr")]
2554#[stable(feature = "atomic_from", since = "1.23.0")]
2555#[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2556#[cfg(not(feature = "ferrocene_subset"))]
2557impl<T> const From<*mut T> for AtomicPtr<T> {
2558    /// Converts a `*mut T` into an `AtomicPtr<T>`.
2559    #[inline]
2560    fn from(p: *mut T) -> Self {
2561        Self::new(p)
2562    }
2563}
2564
2565#[allow(unused_macros)] // This macro ends up being unused on some architectures.
2566macro_rules! if_8_bit {
2567    (u8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2568    (i8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2569    ($_:ident, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($no)*)?) };
2570}
2571
2572#[cfg(target_has_atomic_load_store)]
2573macro_rules! atomic_int {
2574    ($cfg_cas:meta,
2575     $cfg_align:meta,
2576     $stable:meta,
2577     $stable_cxchg:meta,
2578     $stable_debug:meta,
2579     $stable_access:meta,
2580     $stable_from:meta,
2581     $stable_nand:meta,
2582     $const_stable_new:meta,
2583     $const_stable_into_inner:meta,
2584     $s_int_type:literal,
2585     $extra_feature:expr,
2586     $min_fn:ident, $max_fn:ident,
2587     $align:expr,
2588     $int_type:ident $atomic_type:ident) => {
2589        /// An integer type which can be safely shared between threads.
2590        ///
2591        /// This type has the same
2592        #[doc = if_8_bit!(
2593            $int_type,
2594            yes = ["size, alignment, and bit validity"],
2595            no = ["size and bit validity"],
2596        )]
2597        /// as the underlying integer type, [`
2598        #[doc = $s_int_type]
2599        /// `].
2600        #[doc = if_8_bit! {
2601            $int_type,
2602            no = [
2603                "However, the alignment of this type is always equal to its ",
2604                "size, even on targets where [`", $s_int_type, "`] has a ",
2605                "lesser alignment."
2606            ],
2607        }]
2608        ///
2609        /// For more about the differences between atomic types and
2610        /// non-atomic types as well as information about the portability of
2611        /// this type, please see the [module-level documentation].
2612        ///
2613        /// **Note:** This type is only available on platforms that support
2614        /// atomic loads and stores of [`
2615        #[doc = $s_int_type]
2616        /// `].
2617        ///
2618        /// [module-level documentation]: crate::sync::atomic
2619        #[$stable]
2620        pub type $atomic_type = Atomic<$int_type>;
2621
2622        #[$stable]
2623        impl Default for $atomic_type {
2624            #[inline]
2625            fn default() -> Self {
2626                Self::new(Default::default())
2627            }
2628        }
2629
2630        #[$stable_from]
2631        #[rustc_const_unstable(feature = "const_convert", issue = "143773")]
2632        impl const From<$int_type> for $atomic_type {
2633            #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
2634            #[inline]
2635            fn from(v: $int_type) -> Self { Self::new(v) }
2636        }
2637
2638        #[$stable_debug]
2639        impl fmt::Debug for $atomic_type {
2640            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2641                fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2642            }
2643        }
2644
2645        impl $atomic_type {
2646            /// Creates a new atomic integer.
2647            ///
2648            /// # Examples
2649            ///
2650            /// ```
2651            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2652            ///
2653            #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2654            /// ```
2655            #[inline]
2656            #[$stable]
2657            #[$const_stable_new]
2658            #[must_use]
2659            pub const fn new(v: $int_type) -> Self {
2660                // SAFETY:
2661                // `Atomic<T>` is essentially a transparent wrapper around `T`.
2662                unsafe { transmute(v) }
2663            }
2664
2665            /// Creates a new reference to an atomic integer from a pointer.
2666            ///
2667            /// # Examples
2668            ///
2669            /// ```
2670            #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2671            ///
2672            /// // Get a pointer to an allocated value
2673            #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2674            ///
2675            #[doc = concat!("assert!(ptr.cast::<", stringify!($atomic_type), ">().is_aligned());")]
2676            ///
2677            /// {
2678            ///     // Create an atomic view of the allocated value
2679            // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2680            #[doc = concat!("    let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2681            ///
2682            ///     // Use `atomic` for atomic operations, possibly share it with other threads
2683            ///     atomic.store(1, atomic::Ordering::Relaxed);
2684            /// }
2685            ///
2686            /// // It's ok to non-atomically access the value behind `ptr`,
2687            /// // since the reference to the atomic ended its lifetime in the block above
2688            /// assert_eq!(unsafe { *ptr }, 1);
2689            ///
2690            /// // Deallocate the value
2691            /// unsafe { drop(Box::from_raw(ptr)) }
2692            /// ```
2693            ///
2694            /// # Safety
2695            ///
2696            /// * `ptr` must be aligned to
2697            #[doc = concat!("  `align_of::<", stringify!($atomic_type), ">()`")]
2698            #[doc = if_8_bit!{
2699                $int_type,
2700                yes = [
2701                    "  (note that this is always true, since `align_of::<",
2702                    stringify!($atomic_type), ">() == 1`)."
2703                ],
2704                no = [
2705                    "  (note that on some platforms this can be bigger than `align_of::<",
2706                    stringify!($int_type), ">()`)."
2707                ],
2708            }]
2709            /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2710            /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
2711            ///   allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different
2712            ///   sizes, without synchronization.
2713            ///
2714            /// [valid]: crate::ptr#safety
2715            /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
2716            #[inline]
2717            #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
2718            #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
2719            pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2720                // SAFETY: guaranteed by the caller
2721                unsafe { &*ptr.cast() }
2722            }
2723
2724            /// Returns a mutable reference to the underlying integer.
2725            ///
2726            /// This is safe because the mutable reference guarantees that no other threads are
2727            /// concurrently accessing the atomic data.
2728            ///
2729            /// # Examples
2730            ///
2731            /// ```
2732            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2733            ///
2734            #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2735            /// assert_eq!(*some_var.get_mut(), 10);
2736            /// *some_var.get_mut() = 5;
2737            /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2738            /// ```
2739            #[inline]
2740            #[$stable_access]
2741            pub fn get_mut(&mut self) -> &mut $int_type {
2742                // SAFETY:
2743                // `Atomic<T>` is essentially a transparent wrapper around `T`.
2744                unsafe { &mut *self.as_ptr() }
2745            }
2746
2747            #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2748            ///
2749            #[doc = if_8_bit! {
2750                $int_type,
2751                no = [
2752                    "**Note:** This function is only available on targets where `",
2753                    stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2754                ],
2755            }]
2756            ///
2757            /// # Examples
2758            ///
2759            /// ```
2760            /// #![feature(atomic_from_mut)]
2761            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2762            ///
2763            /// let mut some_int = 123;
2764            #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2765            /// a.store(100, Ordering::Relaxed);
2766            /// assert_eq!(some_int, 100);
2767            /// ```
2768            ///
2769            #[inline]
2770            #[$cfg_align]
2771            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2772            pub fn from_mut(v: &mut $int_type) -> &mut Self {
2773                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2774                // SAFETY:
2775                //  - the mutable reference guarantees unique ownership.
2776                //  - the alignment of `$int_type` and `Self` is the
2777                //    same, as promised by $cfg_align and verified above.
2778                unsafe { &mut *(v as *mut $int_type as *mut Self) }
2779            }
2780
2781            #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice")]
2782            ///
2783            /// This is safe because the mutable reference guarantees that no other threads are
2784            /// concurrently accessing the atomic data.
2785            ///
2786            /// # Examples
2787            ///
2788            /// ```ignore-wasm
2789            /// #![feature(atomic_from_mut)]
2790            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2791            ///
2792            #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2793            ///
2794            #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2795            /// assert_eq!(view, [0; 10]);
2796            /// view
2797            ///     .iter_mut()
2798            ///     .enumerate()
2799            ///     .for_each(|(idx, int)| *int = idx as _);
2800            ///
2801            /// std::thread::scope(|s| {
2802            ///     some_ints
2803            ///         .iter()
2804            ///         .enumerate()
2805            ///         .for_each(|(idx, int)| {
2806            ///             s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2807            ///         })
2808            /// });
2809            /// ```
2810            #[inline]
2811            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2812            pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2813                // SAFETY: the mutable reference guarantees unique ownership.
2814                unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2815            }
2816
2817            #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2818            ///
2819            #[doc = if_8_bit! {
2820                $int_type,
2821                no = [
2822                    "**Note:** This function is only available on targets where `",
2823                    stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2824                ],
2825            }]
2826            ///
2827            /// # Examples
2828            ///
2829            /// ```ignore-wasm
2830            /// #![feature(atomic_from_mut)]
2831            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2832            ///
2833            /// let mut some_ints = [0; 10];
2834            #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2835            /// std::thread::scope(|s| {
2836            ///     for i in 0..a.len() {
2837            ///         s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2838            ///     }
2839            /// });
2840            /// for (i, n) in some_ints.into_iter().enumerate() {
2841            ///     assert_eq!(i, n as usize);
2842            /// }
2843            /// ```
2844            #[inline]
2845            #[$cfg_align]
2846            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2847            pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
2848                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2849                // SAFETY:
2850                //  - the mutable reference guarantees unique ownership.
2851                //  - the alignment of `$int_type` and `Self` is the
2852                //    same, as promised by $cfg_align and verified above.
2853                unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2854            }
2855
2856            /// Consumes the atomic and returns the contained value.
2857            ///
2858            /// This is safe because passing `self` by value guarantees that no other threads are
2859            /// concurrently accessing the atomic data.
2860            ///
2861            /// # Examples
2862            ///
2863            /// ```
2864            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2865            ///
2866            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2867            /// assert_eq!(some_var.into_inner(), 5);
2868            /// ```
2869            #[inline]
2870            #[$stable_access]
2871            #[$const_stable_into_inner]
2872            pub const fn into_inner(self) -> $int_type {
2873                // SAFETY:
2874                // `Atomic<T>` is essentially a transparent wrapper around `T`.
2875                unsafe { transmute(self) }
2876            }
2877
2878            /// Loads a value from the atomic integer.
2879            ///
2880            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2881            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2882            ///
2883            /// # Panics
2884            ///
2885            /// Panics if `order` is [`Release`] or [`AcqRel`].
2886            ///
2887            /// # Examples
2888            ///
2889            /// ```
2890            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2891            ///
2892            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2893            ///
2894            /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2895            /// ```
2896            #[inline]
2897            #[$stable]
2898            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2899            pub fn load(&self, order: Ordering) -> $int_type {
2900                // SAFETY: data races are prevented by atomic intrinsics.
2901                unsafe { atomic_load(self.as_ptr(), order) }
2902            }
2903
2904            /// Stores a value into the atomic integer.
2905            ///
2906            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2907            ///  Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2908            ///
2909            /// # Panics
2910            ///
2911            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2912            ///
2913            /// # Examples
2914            ///
2915            /// ```
2916            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2917            ///
2918            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2919            ///
2920            /// some_var.store(10, Ordering::Relaxed);
2921            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2922            /// ```
2923            #[inline]
2924            #[$stable]
2925            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2926            #[rustc_should_not_be_called_on_const_items]
2927            pub fn store(&self, val: $int_type, order: Ordering) {
2928                // SAFETY: data races are prevented by atomic intrinsics.
2929                unsafe { atomic_store(self.as_ptr(), val, order); }
2930            }
2931
2932            /// Stores a value into the atomic integer, returning the previous value.
2933            ///
2934            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
2935            /// of this operation. All ordering modes are possible. Note that using
2936            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2937            /// using [`Release`] makes the load part [`Relaxed`].
2938            ///
2939            /// **Note**: This method is only available on platforms that support atomic operations on
2940            #[doc = concat!("[`", $s_int_type, "`].")]
2941            ///
2942            /// # Examples
2943            ///
2944            /// ```
2945            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2946            ///
2947            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2948            ///
2949            /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2950            /// ```
2951            #[inline]
2952            #[$stable]
2953            #[$cfg_cas]
2954            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2955            #[rustc_should_not_be_called_on_const_items]
2956            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2957                // SAFETY: data races are prevented by atomic intrinsics.
2958                unsafe { atomic_swap(self.as_ptr(), val, order) }
2959            }
2960
2961            /// Stores a value into the atomic integer if the current value is the same as
2962            /// the `current` value.
2963            ///
2964            /// The return value is always the previous value. If it is equal to `current`, then the
2965            /// value was updated.
2966            ///
2967            /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
2968            /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
2969            /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
2970            /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
2971            /// happens, and using [`Release`] makes the load part [`Relaxed`].
2972            ///
2973            /// **Note**: This method is only available on platforms that support atomic operations on
2974            #[doc = concat!("[`", $s_int_type, "`].")]
2975            ///
2976            /// # Migrating to `compare_exchange` and `compare_exchange_weak`
2977            ///
2978            /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
2979            /// memory orderings:
2980            ///
2981            /// Original | Success | Failure
2982            /// -------- | ------- | -------
2983            /// Relaxed  | Relaxed | Relaxed
2984            /// Acquire  | Acquire | Acquire
2985            /// Release  | Release | Relaxed
2986            /// AcqRel   | AcqRel  | Acquire
2987            /// SeqCst   | SeqCst  | SeqCst
2988            ///
2989            /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
2990            /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
2991            /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
2992            /// rather than to infer success vs failure based on the value that was read.
2993            ///
2994            /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
2995            /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
2996            /// which allows the compiler to generate better assembly code when the compare and swap
2997            /// is used in a loop.
2998            ///
2999            /// # Examples
3000            ///
3001            /// ```
3002            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3003            ///
3004            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3005            ///
3006            /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
3007            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3008            ///
3009            /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
3010            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3011            /// ```
3012            #[inline]
3013            #[$stable]
3014            #[deprecated(
3015                since = "1.50.0",
3016                note = "Use `compare_exchange` or `compare_exchange_weak` instead")
3017            ]
3018            #[$cfg_cas]
3019            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3020            #[rustc_should_not_be_called_on_const_items]
3021            pub fn compare_and_swap(&self,
3022                                    current: $int_type,
3023                                    new: $int_type,
3024                                    order: Ordering) -> $int_type {
3025                match self.compare_exchange(current,
3026                                            new,
3027                                            order,
3028                                            strongest_failure_ordering(order)) {
3029                    Ok(x) => x,
3030                    Err(x) => x,
3031                }
3032            }
3033
3034            /// Stores a value into the atomic integer if the current value is the same as
3035            /// the `current` value.
3036            ///
3037            /// The return value is a result indicating whether the new value was written and
3038            /// containing the previous value. On success this value is guaranteed to be equal to
3039            /// `current`.
3040            ///
3041            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
3042            /// ordering of this operation. `success` describes the required ordering for the
3043            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3044            /// `failure` describes the required ordering for the load operation that takes place when
3045            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3046            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3047            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3048            ///
3049            /// **Note**: This method is only available on platforms that support atomic operations on
3050            #[doc = concat!("[`", $s_int_type, "`].")]
3051            ///
3052            /// # Examples
3053            ///
3054            /// ```
3055            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3056            ///
3057            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3058            ///
3059            /// assert_eq!(some_var.compare_exchange(5, 10,
3060            ///                                      Ordering::Acquire,
3061            ///                                      Ordering::Relaxed),
3062            ///            Ok(5));
3063            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3064            ///
3065            /// assert_eq!(some_var.compare_exchange(6, 12,
3066            ///                                      Ordering::SeqCst,
3067            ///                                      Ordering::Acquire),
3068            ///            Err(10));
3069            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3070            /// ```
3071            ///
3072            /// # Considerations
3073            ///
3074            /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
3075            /// of CAS operations. In particular, a load of the value followed by a successful
3076            /// `compare_exchange` with the previous load *does not ensure* that other threads have not
3077            /// changed the value in the interim! This is usually important when the *equality* check in
3078            /// the `compare_exchange` is being used to check the *identity* of a value, but equality
3079            /// does not necessarily imply identity. This is a particularly common case for pointers, as
3080            /// a pointer holding the same address does not imply that the same object exists at that
3081            /// address! In this case, `compare_exchange` can lead to the [ABA problem].
3082            ///
3083            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3084            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3085            #[inline]
3086            #[$stable_cxchg]
3087            #[$cfg_cas]
3088            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3089            #[rustc_should_not_be_called_on_const_items]
3090            pub fn compare_exchange(&self,
3091                                    current: $int_type,
3092                                    new: $int_type,
3093                                    success: Ordering,
3094                                    failure: Ordering) -> Result<$int_type, $int_type> {
3095                // SAFETY: data races are prevented by atomic intrinsics.
3096                unsafe { atomic_compare_exchange(self.as_ptr(), current, new, success, failure) }
3097            }
3098
3099            /// Stores a value into the atomic integer if the current value is the same as
3100            /// the `current` value.
3101            ///
3102            #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
3103            /// this function is allowed to spuriously fail even
3104            /// when the comparison succeeds, which can result in more efficient code on some
3105            /// platforms. The return value is a result indicating whether the new value was
3106            /// written and containing the previous value.
3107            ///
3108            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3109            /// ordering of this operation. `success` describes the required ordering for the
3110            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3111            /// `failure` describes the required ordering for the load operation that takes place when
3112            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3113            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3114            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3115            ///
3116            /// **Note**: This method is only available on platforms that support atomic operations on
3117            #[doc = concat!("[`", $s_int_type, "`].")]
3118            ///
3119            /// # Examples
3120            ///
3121            /// ```
3122            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3123            ///
3124            #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
3125            ///
3126            /// let mut old = val.load(Ordering::Relaxed);
3127            /// loop {
3128            ///     let new = old * 2;
3129            ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3130            ///         Ok(_) => break,
3131            ///         Err(x) => old = x,
3132            ///     }
3133            /// }
3134            /// ```
3135            ///
3136            /// # Considerations
3137            ///
3138            /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
3139            /// of CAS operations. In particular, a load of the value followed by a successful
3140            /// `compare_exchange` with the previous load *does not ensure* that other threads have not
3141            /// changed the value in the interim. This is usually important when the *equality* check in
3142            /// the `compare_exchange` is being used to check the *identity* of a value, but equality
3143            /// does not necessarily imply identity. This is a particularly common case for pointers, as
3144            /// a pointer holding the same address does not imply that the same object exists at that
3145            /// address! In this case, `compare_exchange` can lead to the [ABA problem].
3146            ///
3147            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3148            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3149            #[inline]
3150            #[$stable_cxchg]
3151            #[$cfg_cas]
3152            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3153            #[rustc_should_not_be_called_on_const_items]
3154            pub fn compare_exchange_weak(&self,
3155                                         current: $int_type,
3156                                         new: $int_type,
3157                                         success: Ordering,
3158                                         failure: Ordering) -> Result<$int_type, $int_type> {
3159                // SAFETY: data races are prevented by atomic intrinsics.
3160                unsafe {
3161                    atomic_compare_exchange_weak(self.as_ptr(), current, new, success, failure)
3162                }
3163            }
3164
3165            /// Adds to the current value, returning the previous value.
3166            ///
3167            /// This operation wraps around on overflow.
3168            ///
3169            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3170            /// of this operation. All ordering modes are possible. Note that using
3171            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3172            /// using [`Release`] makes the load part [`Relaxed`].
3173            ///
3174            /// **Note**: This method is only available on platforms that support atomic operations on
3175            #[doc = concat!("[`", $s_int_type, "`].")]
3176            ///
3177            /// # Examples
3178            ///
3179            /// ```
3180            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3181            ///
3182            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
3183            /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3184            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3185            /// ```
3186            #[inline]
3187            #[$stable]
3188            #[$cfg_cas]
3189            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3190            #[rustc_should_not_be_called_on_const_items]
3191            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3192                // SAFETY: data races are prevented by atomic intrinsics.
3193                unsafe { atomic_add(self.as_ptr(), val, order) }
3194            }
3195
3196            /// Subtracts from the current value, returning the previous value.
3197            ///
3198            /// This operation wraps around on overflow.
3199            ///
3200            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3201            /// of this operation. All ordering modes are possible. Note that using
3202            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3203            /// using [`Release`] makes the load part [`Relaxed`].
3204            ///
3205            /// **Note**: This method is only available on platforms that support atomic operations on
3206            #[doc = concat!("[`", $s_int_type, "`].")]
3207            ///
3208            /// # Examples
3209            ///
3210            /// ```
3211            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3212            ///
3213            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
3214            /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3215            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3216            /// ```
3217            #[inline]
3218            #[$stable]
3219            #[$cfg_cas]
3220            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3221            #[rustc_should_not_be_called_on_const_items]
3222            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3223                // SAFETY: data races are prevented by atomic intrinsics.
3224                unsafe { atomic_sub(self.as_ptr(), val, order) }
3225            }
3226
3227            /// Bitwise "and" with the current value.
3228            ///
3229            /// Performs a bitwise "and" operation on the current value and the argument `val`, and
3230            /// sets the new value to the result.
3231            ///
3232            /// Returns the previous value.
3233            ///
3234            /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3235            /// of this operation. All ordering modes are possible. Note that using
3236            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3237            /// using [`Release`] makes the load part [`Relaxed`].
3238            ///
3239            /// **Note**: This method is only available on platforms that support atomic operations on
3240            #[doc = concat!("[`", $s_int_type, "`].")]
3241            ///
3242            /// # Examples
3243            ///
3244            /// ```
3245            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3246            ///
3247            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3248            /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3249            /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3250            /// ```
3251            #[inline]
3252            #[$stable]
3253            #[$cfg_cas]
3254            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3255            #[rustc_should_not_be_called_on_const_items]
3256            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3257                // SAFETY: data races are prevented by atomic intrinsics.
3258                unsafe { atomic_and(self.as_ptr(), val, order) }
3259            }
3260
3261            /// Bitwise "nand" with the current value.
3262            ///
3263            /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
3264            /// sets the new value to the result.
3265            ///
3266            /// Returns the previous value.
3267            ///
3268            /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3269            /// of this operation. All ordering modes are possible. Note that using
3270            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3271            /// using [`Release`] makes the load part [`Relaxed`].
3272            ///
3273            /// **Note**: This method is only available on platforms that support atomic operations on
3274            #[doc = concat!("[`", $s_int_type, "`].")]
3275            ///
3276            /// # Examples
3277            ///
3278            /// ```
3279            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3280            ///
3281            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
3282            /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3283            /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3284            /// ```
3285            #[inline]
3286            #[$stable_nand]
3287            #[$cfg_cas]
3288            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3289            #[rustc_should_not_be_called_on_const_items]
3290            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3291                // SAFETY: data races are prevented by atomic intrinsics.
3292                unsafe { atomic_nand(self.as_ptr(), val, order) }
3293            }
3294
3295            /// Bitwise "or" with the current value.
3296            ///
3297            /// Performs a bitwise "or" operation on the current value and the argument `val`, and
3298            /// sets the new value to the result.
3299            ///
3300            /// Returns the previous value.
3301            ///
3302            /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3303            /// of this operation. All ordering modes are possible. Note that using
3304            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3305            /// using [`Release`] makes the load part [`Relaxed`].
3306            ///
3307            /// **Note**: This method is only available on platforms that support atomic operations on
3308            #[doc = concat!("[`", $s_int_type, "`].")]
3309            ///
3310            /// # Examples
3311            ///
3312            /// ```
3313            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3314            ///
3315            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3316            /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3317            /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3318            /// ```
3319            #[inline]
3320            #[$stable]
3321            #[$cfg_cas]
3322            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3323            #[rustc_should_not_be_called_on_const_items]
3324            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3325                // SAFETY: data races are prevented by atomic intrinsics.
3326                unsafe { atomic_or(self.as_ptr(), val, order) }
3327            }
3328
3329            /// Bitwise "xor" with the current value.
3330            ///
3331            /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
3332            /// sets the new value to the result.
3333            ///
3334            /// Returns the previous value.
3335            ///
3336            /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3337            /// of this operation. All ordering modes are possible. Note that using
3338            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3339            /// using [`Release`] makes the load part [`Relaxed`].
3340            ///
3341            /// **Note**: This method is only available on platforms that support atomic operations on
3342            #[doc = concat!("[`", $s_int_type, "`].")]
3343            ///
3344            /// # Examples
3345            ///
3346            /// ```
3347            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3348            ///
3349            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3350            /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3351            /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3352            /// ```
3353            #[inline]
3354            #[$stable]
3355            #[$cfg_cas]
3356            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3357            #[rustc_should_not_be_called_on_const_items]
3358            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3359                // SAFETY: data races are prevented by atomic intrinsics.
3360                unsafe { atomic_xor(self.as_ptr(), val, order) }
3361            }
3362
3363            /// An alias for
3364            #[doc = concat!("[`", stringify!($atomic_type), "::try_update`]")]
3365            /// .
3366            #[inline]
3367            #[stable(feature = "no_more_cas", since = "1.45.0")]
3368            #[$cfg_cas]
3369            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3370            #[rustc_should_not_be_called_on_const_items]
3371            #[deprecated(
3372                since = "1.99.0",
3373                note = "renamed to `try_update` for consistency",
3374                suggestion = "try_update"
3375            )]
3376            pub fn fetch_update<F>(&self,
3377                                   set_order: Ordering,
3378                                   fetch_order: Ordering,
3379                                   f: F) -> Result<$int_type, $int_type>
3380            where F: FnMut($int_type) -> Option<$int_type> {
3381                self.try_update(set_order, fetch_order, f)
3382            }
3383
3384            /// Fetches the value, and applies a function to it that returns an optional
3385            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3386            /// `Err(previous_value)`.
3387            ///
3388            #[doc = concat!("See also: [`update`](`", stringify!($atomic_type), "::update`).")]
3389            ///
3390            /// Note: This may call the function multiple times if the value has been changed from other threads in
3391            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3392            /// only once to the stored value.
3393            ///
3394            /// `try_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3395            /// The first describes the required ordering for when the operation finally succeeds while the second
3396            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3397            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3398            /// respectively.
3399            ///
3400            /// Using [`Acquire`] as success ordering makes the store part
3401            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3402            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3403            ///
3404            /// **Note**: This method is only available on platforms that support atomic operations on
3405            #[doc = concat!("[`", $s_int_type, "`].")]
3406            ///
3407            /// # Considerations
3408            ///
3409            /// This method is not magic; it is not provided by the hardware, and does not act like a
3410            /// critical section or mutex.
3411            ///
3412            /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3413            /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3414            /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3415            /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3416            ///
3417            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3418            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3419            ///
3420            /// # Examples
3421            ///
3422            /// ```rust
3423            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3424            ///
3425            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3426            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3427            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3428            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3429            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3430            /// ```
3431            #[inline]
3432            #[stable(feature = "atomic_try_update", since = "CURRENT_RUSTC_VERSION")]
3433            #[$cfg_cas]
3434            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3435            #[rustc_should_not_be_called_on_const_items]
3436            pub fn try_update(
3437                &self,
3438                set_order: Ordering,
3439                fetch_order: Ordering,
3440                mut f: impl FnMut($int_type) -> Option<$int_type>,
3441            ) -> Result<$int_type, $int_type> {
3442                let mut prev = self.load(fetch_order);
3443                while let Some(next) = f(prev) {
3444                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3445                        x @ Ok(_) => return x,
3446                        Err(next_prev) => prev = next_prev
3447                    }
3448                }
3449                Err(prev)
3450            }
3451
3452            /// Fetches the value, applies a function to it that it return a new value.
3453            /// The new value is stored and the old value is returned.
3454            ///
3455            #[doc = concat!("See also: [`try_update`](`", stringify!($atomic_type), "::try_update`).")]
3456            ///
3457            /// Note: This may call the function multiple times if the value has been changed from other threads in
3458            /// the meantime, but the function will have been applied only once to the stored value.
3459            ///
3460            /// `update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3461            /// The first describes the required ordering for when the operation finally succeeds while the second
3462            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3463            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3464            /// respectively.
3465            ///
3466            /// Using [`Acquire`] as success ordering makes the store part
3467            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3468            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3469            ///
3470            /// **Note**: This method is only available on platforms that support atomic operations on
3471            #[doc = concat!("[`", $s_int_type, "`].")]
3472            ///
3473            /// # Considerations
3474            ///
3475            /// [CAS operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3476            /// This method is not magic; it is not provided by the hardware, and does not act like a
3477            /// critical section or mutex.
3478            ///
3479            /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3480            /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3481            /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3482            /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3483            ///
3484            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3485            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3486            ///
3487            /// # Examples
3488            ///
3489            /// ```rust
3490            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3491            ///
3492            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3493            /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 7);
3494            /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 8);
3495            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3496            /// ```
3497            #[inline]
3498            #[stable(feature = "atomic_try_update", since = "CURRENT_RUSTC_VERSION")]
3499            #[$cfg_cas]
3500            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3501            #[rustc_should_not_be_called_on_const_items]
3502            pub fn update(
3503                &self,
3504                set_order: Ordering,
3505                fetch_order: Ordering,
3506                mut f: impl FnMut($int_type) -> $int_type,
3507            ) -> $int_type {
3508                let mut prev = self.load(fetch_order);
3509                loop {
3510                    match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
3511                        Ok(x) => break x,
3512                        Err(next_prev) => prev = next_prev,
3513                    }
3514                }
3515            }
3516
3517            /// Maximum with the current value.
3518            ///
3519            /// Finds the maximum of the current value and the argument `val`, and
3520            /// sets the new value to the result.
3521            ///
3522            /// Returns the previous value.
3523            ///
3524            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3525            /// of this operation. All ordering modes are possible. Note that using
3526            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3527            /// using [`Release`] makes the load part [`Relaxed`].
3528            ///
3529            /// **Note**: This method is only available on platforms that support atomic operations on
3530            #[doc = concat!("[`", $s_int_type, "`].")]
3531            ///
3532            /// # Examples
3533            ///
3534            /// ```
3535            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3536            ///
3537            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3538            /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3539            /// assert_eq!(foo.load(Ordering::SeqCst), 42);
3540            /// ```
3541            ///
3542            /// If you want to obtain the maximum value in one step, you can use the following:
3543            ///
3544            /// ```
3545            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3546            ///
3547            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3548            /// let bar = 42;
3549            /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3550            /// assert!(max_foo == 42);
3551            /// ```
3552            #[inline]
3553            #[stable(feature = "atomic_min_max", since = "1.45.0")]
3554            #[$cfg_cas]
3555            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3556            #[rustc_should_not_be_called_on_const_items]
3557            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3558                // SAFETY: data races are prevented by atomic intrinsics.
3559                unsafe { $max_fn(self.as_ptr(), val, order) }
3560            }
3561
3562            /// Minimum with the current value.
3563            ///
3564            /// Finds the minimum of the current value and the argument `val`, and
3565            /// sets the new value to the result.
3566            ///
3567            /// Returns the previous value.
3568            ///
3569            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3570            /// of this operation. All ordering modes are possible. Note that using
3571            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3572            /// using [`Release`] makes the load part [`Relaxed`].
3573            ///
3574            /// **Note**: This method is only available on platforms that support atomic operations on
3575            #[doc = concat!("[`", $s_int_type, "`].")]
3576            ///
3577            /// # Examples
3578            ///
3579            /// ```
3580            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3581            ///
3582            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3583            /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3584            /// assert_eq!(foo.load(Ordering::Relaxed), 23);
3585            /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3586            /// assert_eq!(foo.load(Ordering::Relaxed), 22);
3587            /// ```
3588            ///
3589            /// If you want to obtain the minimum value in one step, you can use the following:
3590            ///
3591            /// ```
3592            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3593            ///
3594            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3595            /// let bar = 12;
3596            /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3597            /// assert_eq!(min_foo, 12);
3598            /// ```
3599            #[inline]
3600            #[stable(feature = "atomic_min_max", since = "1.45.0")]
3601            #[$cfg_cas]
3602            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3603            #[rustc_should_not_be_called_on_const_items]
3604            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3605                // SAFETY: data races are prevented by atomic intrinsics.
3606                unsafe { $min_fn(self.as_ptr(), val, order) }
3607            }
3608
3609            /// Returns a mutable pointer to the underlying integer.
3610            ///
3611            /// Doing non-atomic reads and writes on the resulting integer can be a data race.
3612            /// This method is mostly useful for FFI, where the function signature may use
3613            #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
3614            ///
3615            /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
3616            /// atomic types work with interior mutability. All modifications of an atomic change the value
3617            /// through a shared reference, and can do so safely as long as they use atomic operations. Any
3618            /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the
3619            /// requirements of the [memory model].
3620            ///
3621            /// # Examples
3622            ///
3623            /// ```ignore (extern-declaration)
3624            /// # fn main() {
3625            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
3626            ///
3627            /// extern "C" {
3628            #[doc = concat!("    fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
3629            /// }
3630            ///
3631            #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
3632            ///
3633            /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
3634            /// unsafe {
3635            ///     my_atomic_op(atomic.as_ptr());
3636            /// }
3637            /// # }
3638            /// ```
3639            ///
3640            /// [memory model]: self#memory-model-for-atomic-accesses
3641            #[inline]
3642            #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
3643            #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
3644            #[rustc_never_returns_null_ptr]
3645            pub const fn as_ptr(&self) -> *mut $int_type {
3646                self.v.get().cast()
3647            }
3648        }
3649    }
3650}
3651
3652#[cfg(target_has_atomic_load_store = "8")]
3653#[cfg(not(feature = "ferrocene_subset"))]
3654atomic_int! {
3655    cfg(target_has_atomic = "8"),
3656    cfg(target_has_atomic_equal_alignment = "8"),
3657    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3658    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3659    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3660    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3661    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3662    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3663    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3664    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3665    "i8",
3666    "",
3667    atomic_min, atomic_max,
3668    1,
3669    i8 AtomicI8
3670}
3671#[cfg(target_has_atomic_load_store = "8")]
3672atomic_int! {
3673    cfg(target_has_atomic = "8"),
3674    cfg(target_has_atomic_equal_alignment = "8"),
3675    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3676    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3677    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3678    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3679    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3680    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3681    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3682    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3683    "u8",
3684    "",
3685    atomic_umin, atomic_umax,
3686    1,
3687    u8 AtomicU8
3688}
3689#[cfg(target_has_atomic_load_store = "16")]
3690#[cfg(not(feature = "ferrocene_subset"))]
3691atomic_int! {
3692    cfg(target_has_atomic = "16"),
3693    cfg(target_has_atomic_equal_alignment = "16"),
3694    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3695    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3696    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3697    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3698    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3699    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3700    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3701    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3702    "i16",
3703    "",
3704    atomic_min, atomic_max,
3705    2,
3706    i16 AtomicI16
3707}
3708#[cfg(target_has_atomic_load_store = "16")]
3709atomic_int! {
3710    cfg(target_has_atomic = "16"),
3711    cfg(target_has_atomic_equal_alignment = "16"),
3712    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3713    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3714    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3715    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3716    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3717    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3718    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3719    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3720    "u16",
3721    "",
3722    atomic_umin, atomic_umax,
3723    2,
3724    u16 AtomicU16
3725}
3726#[cfg(target_has_atomic_load_store = "32")]
3727#[cfg(not(feature = "ferrocene_subset"))]
3728atomic_int! {
3729    cfg(target_has_atomic = "32"),
3730    cfg(target_has_atomic_equal_alignment = "32"),
3731    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3732    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3733    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3734    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3735    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3736    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3737    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3738    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3739    "i32",
3740    "",
3741    atomic_min, atomic_max,
3742    4,
3743    i32 AtomicI32
3744}
3745#[cfg(target_has_atomic_load_store = "32")]
3746atomic_int! {
3747    cfg(target_has_atomic = "32"),
3748    cfg(target_has_atomic_equal_alignment = "32"),
3749    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3750    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3751    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3752    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3753    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3754    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3755    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3756    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3757    "u32",
3758    "",
3759    atomic_umin, atomic_umax,
3760    4,
3761    u32 AtomicU32
3762}
3763#[cfg(target_has_atomic_load_store = "64")]
3764#[cfg(not(feature = "ferrocene_subset"))]
3765atomic_int! {
3766    cfg(target_has_atomic = "64"),
3767    cfg(target_has_atomic_equal_alignment = "64"),
3768    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3769    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3770    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3771    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3772    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3773    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3774    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3775    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3776    "i64",
3777    "",
3778    atomic_min, atomic_max,
3779    8,
3780    i64 AtomicI64
3781}
3782#[cfg(target_has_atomic_load_store = "64")]
3783atomic_int! {
3784    cfg(target_has_atomic = "64"),
3785    cfg(target_has_atomic_equal_alignment = "64"),
3786    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3787    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3788    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3789    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3790    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3791    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3792    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3793    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3794    "u64",
3795    "",
3796    atomic_umin, atomic_umax,
3797    8,
3798    u64 AtomicU64
3799}
3800#[cfg(target_has_atomic_load_store = "128")]
3801#[cfg(not(feature = "ferrocene_subset"))]
3802atomic_int! {
3803    cfg(target_has_atomic = "128"),
3804    cfg(target_has_atomic_equal_alignment = "128"),
3805    unstable(feature = "integer_atomics", issue = "99069"),
3806    unstable(feature = "integer_atomics", issue = "99069"),
3807    unstable(feature = "integer_atomics", issue = "99069"),
3808    unstable(feature = "integer_atomics", issue = "99069"),
3809    unstable(feature = "integer_atomics", issue = "99069"),
3810    unstable(feature = "integer_atomics", issue = "99069"),
3811    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3812    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3813    "i128",
3814    "#![feature(integer_atomics)]\n\n",
3815    atomic_min, atomic_max,
3816    16,
3817    i128 AtomicI128
3818}
3819#[cfg(target_has_atomic_load_store = "128")]
3820#[cfg(not(feature = "ferrocene_subset"))]
3821atomic_int! {
3822    cfg(target_has_atomic = "128"),
3823    cfg(target_has_atomic_equal_alignment = "128"),
3824    unstable(feature = "integer_atomics", issue = "99069"),
3825    unstable(feature = "integer_atomics", issue = "99069"),
3826    unstable(feature = "integer_atomics", issue = "99069"),
3827    unstable(feature = "integer_atomics", issue = "99069"),
3828    unstable(feature = "integer_atomics", issue = "99069"),
3829    unstable(feature = "integer_atomics", issue = "99069"),
3830    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3831    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3832    "u128",
3833    "#![feature(integer_atomics)]\n\n",
3834    atomic_umin, atomic_umax,
3835    16,
3836    u128 AtomicU128
3837}
3838
3839#[cfg(target_has_atomic_load_store = "ptr")]
3840macro_rules! atomic_int_ptr_sized {
3841    ( $($target_pointer_width:literal $align:literal)* ) => { $(
3842        #[cfg(target_pointer_width = $target_pointer_width)]
3843        #[cfg(not(feature = "ferrocene_subset"))]
3844        atomic_int! {
3845            cfg(target_has_atomic = "ptr"),
3846            cfg(target_has_atomic_equal_alignment = "ptr"),
3847            stable(feature = "rust1", since = "1.0.0"),
3848            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3849            stable(feature = "atomic_debug", since = "1.3.0"),
3850            stable(feature = "atomic_access", since = "1.15.0"),
3851            stable(feature = "atomic_from", since = "1.23.0"),
3852            stable(feature = "atomic_nand", since = "1.27.0"),
3853            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3854            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3855            "isize",
3856            "",
3857            atomic_min, atomic_max,
3858            $align,
3859            isize AtomicIsize
3860        }
3861        #[cfg(target_pointer_width = $target_pointer_width)]
3862        atomic_int! {
3863            cfg(target_has_atomic = "ptr"),
3864            cfg(target_has_atomic_equal_alignment = "ptr"),
3865            stable(feature = "rust1", since = "1.0.0"),
3866            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3867            stable(feature = "atomic_debug", since = "1.3.0"),
3868            stable(feature = "atomic_access", since = "1.15.0"),
3869            stable(feature = "atomic_from", since = "1.23.0"),
3870            stable(feature = "atomic_nand", since = "1.27.0"),
3871            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3872            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3873            "usize",
3874            "",
3875            atomic_umin, atomic_umax,
3876            $align,
3877            usize AtomicUsize
3878        }
3879
3880        /// An [`AtomicIsize`] initialized to `0`.
3881        #[cfg(target_pointer_width = $target_pointer_width)]
3882        #[stable(feature = "rust1", since = "1.0.0")]
3883        #[deprecated(
3884            since = "1.34.0",
3885            note = "the `new` function is now preferred",
3886            suggestion = "AtomicIsize::new(0)",
3887        )]
3888        #[cfg(not(feature = "ferrocene_subset"))]
3889        pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0);
3890
3891        /// An [`AtomicUsize`] initialized to `0`.
3892        #[cfg(target_pointer_width = $target_pointer_width)]
3893        #[stable(feature = "rust1", since = "1.0.0")]
3894        #[deprecated(
3895            since = "1.34.0",
3896            note = "the `new` function is now preferred",
3897            suggestion = "AtomicUsize::new(0)",
3898        )]
3899        pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0);
3900    )* };
3901}
3902
3903#[cfg(target_has_atomic_load_store = "ptr")]
3904atomic_int_ptr_sized! {
3905    "16" 2
3906    "32" 4
3907    "64" 8
3908}
3909
3910#[inline]
3911#[cfg(target_has_atomic)]
3912fn strongest_failure_ordering(order: Ordering) -> Ordering {
3913    match order {
3914        Release => Relaxed,
3915        Relaxed => Relaxed,
3916        SeqCst => SeqCst,
3917        Acquire => Acquire,
3918        AcqRel => Acquire,
3919    }
3920}
3921
3922#[inline]
3923#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3924unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
3925    // SAFETY: the caller must uphold the safety contract for `atomic_store`.
3926    unsafe {
3927        match order {
3928            Relaxed => intrinsics::atomic_store::<T, { AO::Relaxed }>(dst, val),
3929            Release => intrinsics::atomic_store::<T, { AO::Release }>(dst, val),
3930            SeqCst => intrinsics::atomic_store::<T, { AO::SeqCst }>(dst, val),
3931            Acquire => panic!("there is no such thing as an acquire store"),
3932            AcqRel => panic!("there is no such thing as an acquire-release store"),
3933        }
3934    }
3935}
3936
3937#[inline]
3938#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3939unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
3940    // SAFETY: the caller must uphold the safety contract for `atomic_load`.
3941    unsafe {
3942        match order {
3943            Relaxed => intrinsics::atomic_load::<T, { AO::Relaxed }>(dst),
3944            Acquire => intrinsics::atomic_load::<T, { AO::Acquire }>(dst),
3945            SeqCst => intrinsics::atomic_load::<T, { AO::SeqCst }>(dst),
3946            Release => panic!("there is no such thing as a release load"),
3947            AcqRel => panic!("there is no such thing as an acquire-release load"),
3948        }
3949    }
3950}
3951
3952#[inline]
3953#[cfg(target_has_atomic)]
3954#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3955unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3956    // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
3957    unsafe {
3958        match order {
3959            Relaxed => intrinsics::atomic_xchg::<T, { AO::Relaxed }>(dst, val),
3960            Acquire => intrinsics::atomic_xchg::<T, { AO::Acquire }>(dst, val),
3961            Release => intrinsics::atomic_xchg::<T, { AO::Release }>(dst, val),
3962            AcqRel => intrinsics::atomic_xchg::<T, { AO::AcqRel }>(dst, val),
3963            SeqCst => intrinsics::atomic_xchg::<T, { AO::SeqCst }>(dst, val),
3964        }
3965    }
3966}
3967
3968/// Returns the previous value (like __sync_fetch_and_add).
3969#[inline]
3970#[cfg(target_has_atomic)]
3971#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3972unsafe fn atomic_add<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
3973    // SAFETY: the caller must uphold the safety contract for `atomic_add`.
3974    unsafe {
3975        match order {
3976            Relaxed => intrinsics::atomic_xadd::<T, U, { AO::Relaxed }>(dst, val),
3977            Acquire => intrinsics::atomic_xadd::<T, U, { AO::Acquire }>(dst, val),
3978            Release => intrinsics::atomic_xadd::<T, U, { AO::Release }>(dst, val),
3979            AcqRel => intrinsics::atomic_xadd::<T, U, { AO::AcqRel }>(dst, val),
3980            SeqCst => intrinsics::atomic_xadd::<T, U, { AO::SeqCst }>(dst, val),
3981        }
3982    }
3983}
3984
3985/// Returns the previous value (like __sync_fetch_and_sub).
3986#[inline]
3987#[cfg(target_has_atomic)]
3988#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3989unsafe fn atomic_sub<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
3990    // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
3991    unsafe {
3992        match order {
3993            Relaxed => intrinsics::atomic_xsub::<T, U, { AO::Relaxed }>(dst, val),
3994            Acquire => intrinsics::atomic_xsub::<T, U, { AO::Acquire }>(dst, val),
3995            Release => intrinsics::atomic_xsub::<T, U, { AO::Release }>(dst, val),
3996            AcqRel => intrinsics::atomic_xsub::<T, U, { AO::AcqRel }>(dst, val),
3997            SeqCst => intrinsics::atomic_xsub::<T, U, { AO::SeqCst }>(dst, val),
3998        }
3999    }
4000}
4001
4002/// Publicly exposed for stdarch; nobody else should use this.
4003#[inline]
4004#[cfg(target_has_atomic)]
4005#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4006#[unstable(feature = "core_intrinsics", issue = "none")]
4007#[doc(hidden)]
4008pub unsafe fn atomic_compare_exchange<T: Copy>(
4009    dst: *mut T,
4010    old: T,
4011    new: T,
4012    success: Ordering,
4013    failure: Ordering,
4014) -> Result<T, T> {
4015    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
4016    let (val, ok) = unsafe {
4017        match (success, failure) {
4018            (Relaxed, Relaxed) => {
4019                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
4020            }
4021            (Relaxed, Acquire) => {
4022                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
4023            }
4024            (Relaxed, SeqCst) => {
4025                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
4026            }
4027            (Acquire, Relaxed) => {
4028                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
4029            }
4030            (Acquire, Acquire) => {
4031                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
4032            }
4033            (Acquire, SeqCst) => {
4034                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
4035            }
4036            (Release, Relaxed) => {
4037                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
4038            }
4039            (Release, Acquire) => {
4040                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
4041            }
4042            (Release, SeqCst) => {
4043                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
4044            }
4045            (AcqRel, Relaxed) => {
4046                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
4047            }
4048            (AcqRel, Acquire) => {
4049                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
4050            }
4051            (AcqRel, SeqCst) => {
4052                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
4053            }
4054            (SeqCst, Relaxed) => {
4055                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
4056            }
4057            (SeqCst, Acquire) => {
4058                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
4059            }
4060            (SeqCst, SeqCst) => {
4061                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
4062            }
4063            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
4064            (_, Release) => panic!("there is no such thing as a release failure ordering"),
4065        }
4066    };
4067    if ok { Ok(val) } else { Err(val) }
4068}
4069
4070#[inline]
4071#[cfg(target_has_atomic)]
4072#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4073unsafe fn atomic_compare_exchange_weak<T: Copy>(
4074    dst: *mut T,
4075    old: T,
4076    new: T,
4077    success: Ordering,
4078    failure: Ordering,
4079) -> Result<T, T> {
4080    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
4081    let (val, ok) = unsafe {
4082        match (success, failure) {
4083            (Relaxed, Relaxed) => {
4084                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
4085            }
4086            (Relaxed, Acquire) => {
4087                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
4088            }
4089            (Relaxed, SeqCst) => {
4090                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
4091            }
4092            (Acquire, Relaxed) => {
4093                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
4094            }
4095            (Acquire, Acquire) => {
4096                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
4097            }
4098            (Acquire, SeqCst) => {
4099                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
4100            }
4101            (Release, Relaxed) => {
4102                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
4103            }
4104            (Release, Acquire) => {
4105                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
4106            }
4107            (Release, SeqCst) => {
4108                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
4109            }
4110            (AcqRel, Relaxed) => {
4111                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
4112            }
4113            (AcqRel, Acquire) => {
4114                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
4115            }
4116            (AcqRel, SeqCst) => {
4117                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
4118            }
4119            (SeqCst, Relaxed) => {
4120                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
4121            }
4122            (SeqCst, Acquire) => {
4123                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
4124            }
4125            (SeqCst, SeqCst) => {
4126                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
4127            }
4128            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
4129            (_, Release) => panic!("there is no such thing as a release failure ordering"),
4130        }
4131    };
4132    if ok { Ok(val) } else { Err(val) }
4133}
4134
4135#[inline]
4136#[cfg(target_has_atomic)]
4137#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4138unsafe fn atomic_and<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4139    // SAFETY: the caller must uphold the safety contract for `atomic_and`
4140    unsafe {
4141        match order {
4142            Relaxed => intrinsics::atomic_and::<T, U, { AO::Relaxed }>(dst, val),
4143            Acquire => intrinsics::atomic_and::<T, U, { AO::Acquire }>(dst, val),
4144            Release => intrinsics::atomic_and::<T, U, { AO::Release }>(dst, val),
4145            AcqRel => intrinsics::atomic_and::<T, U, { AO::AcqRel }>(dst, val),
4146            SeqCst => intrinsics::atomic_and::<T, U, { AO::SeqCst }>(dst, val),
4147        }
4148    }
4149}
4150
4151#[inline]
4152#[cfg(target_has_atomic)]
4153#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4154unsafe fn atomic_nand<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4155    // SAFETY: the caller must uphold the safety contract for `atomic_nand`
4156    unsafe {
4157        match order {
4158            Relaxed => intrinsics::atomic_nand::<T, U, { AO::Relaxed }>(dst, val),
4159            Acquire => intrinsics::atomic_nand::<T, U, { AO::Acquire }>(dst, val),
4160            Release => intrinsics::atomic_nand::<T, U, { AO::Release }>(dst, val),
4161            AcqRel => intrinsics::atomic_nand::<T, U, { AO::AcqRel }>(dst, val),
4162            SeqCst => intrinsics::atomic_nand::<T, U, { AO::SeqCst }>(dst, val),
4163        }
4164    }
4165}
4166
4167#[inline]
4168#[cfg(target_has_atomic)]
4169#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4170unsafe fn atomic_or<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4171    // SAFETY: the caller must uphold the safety contract for `atomic_or`
4172    unsafe {
4173        match order {
4174            SeqCst => intrinsics::atomic_or::<T, U, { AO::SeqCst }>(dst, val),
4175            Acquire => intrinsics::atomic_or::<T, U, { AO::Acquire }>(dst, val),
4176            Release => intrinsics::atomic_or::<T, U, { AO::Release }>(dst, val),
4177            AcqRel => intrinsics::atomic_or::<T, U, { AO::AcqRel }>(dst, val),
4178            Relaxed => intrinsics::atomic_or::<T, U, { AO::Relaxed }>(dst, val),
4179        }
4180    }
4181}
4182
4183#[inline]
4184#[cfg(target_has_atomic)]
4185#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4186unsafe fn atomic_xor<T: Copy, U: Copy>(dst: *mut T, val: U, order: Ordering) -> T {
4187    // SAFETY: the caller must uphold the safety contract for `atomic_xor`
4188    unsafe {
4189        match order {
4190            SeqCst => intrinsics::atomic_xor::<T, U, { AO::SeqCst }>(dst, val),
4191            Acquire => intrinsics::atomic_xor::<T, U, { AO::Acquire }>(dst, val),
4192            Release => intrinsics::atomic_xor::<T, U, { AO::Release }>(dst, val),
4193            AcqRel => intrinsics::atomic_xor::<T, U, { AO::AcqRel }>(dst, val),
4194            Relaxed => intrinsics::atomic_xor::<T, U, { AO::Relaxed }>(dst, val),
4195        }
4196    }
4197}
4198
4199/// Updates `*dst` to the max value of `val` and the old value (signed comparison)
4200#[inline]
4201#[cfg(target_has_atomic)]
4202#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4203#[cfg(not(feature = "ferrocene_subset"))]
4204unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4205    // SAFETY: the caller must uphold the safety contract for `atomic_max`
4206    unsafe {
4207        match order {
4208            Relaxed => intrinsics::atomic_max::<T, { AO::Relaxed }>(dst, val),
4209            Acquire => intrinsics::atomic_max::<T, { AO::Acquire }>(dst, val),
4210            Release => intrinsics::atomic_max::<T, { AO::Release }>(dst, val),
4211            AcqRel => intrinsics::atomic_max::<T, { AO::AcqRel }>(dst, val),
4212            SeqCst => intrinsics::atomic_max::<T, { AO::SeqCst }>(dst, val),
4213        }
4214    }
4215}
4216
4217/// Updates `*dst` to the min value of `val` and the old value (signed comparison)
4218#[inline]
4219#[cfg(target_has_atomic)]
4220#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4221#[cfg(not(feature = "ferrocene_subset"))]
4222unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4223    // SAFETY: the caller must uphold the safety contract for `atomic_min`
4224    unsafe {
4225        match order {
4226            Relaxed => intrinsics::atomic_min::<T, { AO::Relaxed }>(dst, val),
4227            Acquire => intrinsics::atomic_min::<T, { AO::Acquire }>(dst, val),
4228            Release => intrinsics::atomic_min::<T, { AO::Release }>(dst, val),
4229            AcqRel => intrinsics::atomic_min::<T, { AO::AcqRel }>(dst, val),
4230            SeqCst => intrinsics::atomic_min::<T, { AO::SeqCst }>(dst, val),
4231        }
4232    }
4233}
4234
4235/// Updates `*dst` to the max value of `val` and the old value (unsigned comparison)
4236#[inline]
4237#[cfg(target_has_atomic)]
4238#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4239unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4240    // SAFETY: the caller must uphold the safety contract for `atomic_umax`
4241    unsafe {
4242        match order {
4243            Relaxed => intrinsics::atomic_umax::<T, { AO::Relaxed }>(dst, val),
4244            Acquire => intrinsics::atomic_umax::<T, { AO::Acquire }>(dst, val),
4245            Release => intrinsics::atomic_umax::<T, { AO::Release }>(dst, val),
4246            AcqRel => intrinsics::atomic_umax::<T, { AO::AcqRel }>(dst, val),
4247            SeqCst => intrinsics::atomic_umax::<T, { AO::SeqCst }>(dst, val),
4248        }
4249    }
4250}
4251
4252/// Updates `*dst` to the min value of `val` and the old value (unsigned comparison)
4253#[inline]
4254#[cfg(target_has_atomic)]
4255#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4256unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4257    // SAFETY: the caller must uphold the safety contract for `atomic_umin`
4258    unsafe {
4259        match order {
4260            Relaxed => intrinsics::atomic_umin::<T, { AO::Relaxed }>(dst, val),
4261            Acquire => intrinsics::atomic_umin::<T, { AO::Acquire }>(dst, val),
4262            Release => intrinsics::atomic_umin::<T, { AO::Release }>(dst, val),
4263            AcqRel => intrinsics::atomic_umin::<T, { AO::AcqRel }>(dst, val),
4264            SeqCst => intrinsics::atomic_umin::<T, { AO::SeqCst }>(dst, val),
4265        }
4266    }
4267}
4268
4269/// An atomic fence.
4270///
4271/// Fences create synchronization between themselves and atomic operations or fences in other
4272/// threads. To achieve this, a fence prevents the compiler and CPU from reordering certain types of
4273/// memory operations around it.
4274///
4275/// There are 3 different ways to use an atomic fence:
4276///
4277/// - atomic - fence synchronization: an atomic operation with (at least) [`Release`] ordering
4278///   semantics synchronizes with a fence with (at least) [`Acquire`] ordering semantics.
4279/// - fence - atomic synchronization: a fence with (at least) [`Release`] ordering semantics
4280///   synchronizes with an atomic operation with (at least) [`Acquire`] ordering semantics.
4281/// - fence - fence synchronization: a fence with (at least) [`Release`] ordering semantics
4282///   synchronizes with a fence with (at least) [`Acquire`] ordering semantics.
4283///
4284/// These 3 ways complement the regular, fence-less, atomic - atomic synchronization.
4285///
4286/// ## Atomic - Fence
4287///
4288/// An atomic operation on one thread will synchronize with a fence on another thread when:
4289///
4290/// -   on thread 1:
4291///     -   an atomic operation 'X' with (at least) [`Release`] ordering semantics on some atomic
4292///         object 'm',
4293///
4294/// -   is paired on thread 2 with:
4295///     -   an atomic read 'Y' with any order on 'm',
4296///     -   followed by a fence 'B' with (at least) [`Acquire`] ordering semantics.
4297///
4298/// This provides a happens-before dependence between X and B.
4299///
4300/// ```text
4301///     Thread 1                                          Thread 2
4302///
4303/// m.store(3, Release); X ---------
4304///                                |
4305///                                |
4306///                                -------------> Y  if m.load(Relaxed) == 3 {
4307///                                               B      fence(Acquire);
4308///                                                      ...
4309///                                                  }
4310/// ```
4311///
4312/// ## Fence - Atomic
4313///
4314/// A fence on one thread will synchronize with an atomic operation on another thread when:
4315///
4316/// -   on thread:
4317///     -   a fence 'A' with (at least) [`Release`] ordering semantics,
4318///     -   followed by an atomic write 'X' with any ordering on some atomic object 'm',
4319///
4320/// -   is paired on thread 2 with:
4321///     -   an atomic operation 'Y' with (at least) [`Acquire`] ordering semantics.
4322///
4323/// This provides a happens-before dependence between A and Y.
4324///
4325/// ```text
4326///     Thread 1                                          Thread 2
4327///
4328/// fence(Release);      A
4329/// m.store(3, Relaxed); X ---------
4330///                                |
4331///                                |
4332///                                -------------> Y  if m.load(Acquire) == 3 {
4333///                                                      ...
4334///                                                  }
4335/// ```
4336///
4337/// ## Fence - Fence
4338///
4339/// A fence on one thread will synchronize with a fence on another thread when:
4340///
4341/// -   on thread 1:
4342///     -   a fence 'A' which has (at least) [`Release`] ordering semantics,
4343///     -   followed by an atomic write 'X' with any ordering on some atomic object 'm',
4344///
4345/// -   is paired on thread 2 with:
4346///     -   an atomic read 'Y' with any ordering on 'm',
4347///     -   followed by a fence 'B' with (at least) [`Acquire`] ordering semantics.
4348///
4349/// This provides a happens-before dependence between A and B.
4350///
4351/// ```text
4352///     Thread 1                                          Thread 2
4353///
4354/// fence(Release);      A --------------
4355/// m.store(3, Relaxed); X ---------    |
4356///                                |    |
4357///                                |    |
4358///                                -------------> Y  if m.load(Relaxed) == 3 {
4359///                                     |-------> B      fence(Acquire);
4360///                                                      ...
4361///                                                  }
4362/// ```
4363///
4364/// ## Mandatory Atomic
4365///
4366/// Note that in the examples above, it is crucial that the access to `m` are atomic. Fences cannot
4367/// be used to establish synchronization between non-atomic accesses in different threads. However,
4368/// thanks to the happens-before relationship, any non-atomic access that happen-before the atomic
4369/// operation or fence with (at least) [`Release`] ordering semantics are now also properly
4370/// synchronized with any non-atomic accesses that happen-after the atomic operation or fence with
4371/// (at least) [`Acquire`] ordering semantics.
4372///
4373/// ## Memory Ordering
4374///
4375/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`] and [`Release`]
4376/// semantics, participates in the global program order of the other [`SeqCst`] operations and/or
4377/// fences.
4378///
4379/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
4380///
4381/// # Panics
4382///
4383/// Panics if `order` is [`Relaxed`].
4384///
4385/// # Examples
4386///
4387/// ```
4388/// use std::sync::atomic::AtomicBool;
4389/// use std::sync::atomic::fence;
4390/// use std::sync::atomic::Ordering;
4391///
4392/// // A mutual exclusion primitive based on spinlock.
4393/// pub struct Mutex {
4394///     flag: AtomicBool,
4395/// }
4396///
4397/// impl Mutex {
4398///     pub fn new() -> Mutex {
4399///         Mutex {
4400///             flag: AtomicBool::new(false),
4401///         }
4402///     }
4403///
4404///     pub fn lock(&self) {
4405///         // Wait until the old value is `false`.
4406///         while self
4407///             .flag
4408///             .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
4409///             .is_err()
4410///         {}
4411///         // This fence synchronizes-with store in `unlock`.
4412///         fence(Ordering::Acquire);
4413///     }
4414///
4415///     pub fn unlock(&self) {
4416///         self.flag.store(false, Ordering::Release);
4417///     }
4418/// }
4419/// ```
4420#[inline]
4421#[stable(feature = "rust1", since = "1.0.0")]
4422#[rustc_diagnostic_item = "fence"]
4423#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4424pub fn fence(order: Ordering) {
4425    // SAFETY: using an atomic fence is safe.
4426    unsafe {
4427        match order {
4428            Acquire => intrinsics::atomic_fence::<{ AO::Acquire }>(),
4429            Release => intrinsics::atomic_fence::<{ AO::Release }>(),
4430            AcqRel => intrinsics::atomic_fence::<{ AO::AcqRel }>(),
4431            SeqCst => intrinsics::atomic_fence::<{ AO::SeqCst }>(),
4432            Relaxed => panic!("there is no such thing as a relaxed fence"),
4433        }
4434    }
4435}
4436
4437/// A "compiler-only" atomic fence.
4438///
4439/// Like [`fence`], this function establishes synchronization with other atomic operations and
4440/// fences. However, unlike [`fence`], `compiler_fence` only establishes synchronization with
4441/// operations *in the same thread*. This may at first sound rather useless, since code within a
4442/// thread is typically already totally ordered and does not need any further synchronization.
4443/// However, there are cases where code can run on the same thread without being ordered:
4444/// - The most common case is that of a *signal handler*: a signal handler runs in the same thread
4445///   as the code it interrupted, but it is not ordered with respect to that code. `compiler_fence`
4446///   can be used to establish synchronization between a thread and its signal handler, the same way
4447///   that `fence` can be used to establish synchronization across threads.
4448/// - Similar situations can arise in embedded programming with interrupt handlers, or in custom
4449///   implementations of preemptive green threads. In general, `compiler_fence` can establish
4450///   synchronization with code that is guaranteed to run on the same hardware CPU.
4451///
4452/// See [`fence`] for how a fence can be used to achieve synchronization. Note that just like
4453/// [`fence`], synchronization still requires atomic operations to be used in both threads -- it is
4454/// not possible to perform synchronization entirely with fences and non-atomic operations.
4455///
4456/// `compiler_fence` does not emit any machine code, but restricts the kinds of memory re-ordering
4457/// the compiler is allowed to do. `compiler_fence` corresponds to [`atomic_signal_fence`] in C and
4458/// C++.
4459///
4460/// [`atomic_signal_fence`]: https://en.cppreference.com/w/cpp/atomic/atomic_signal_fence
4461///
4462/// # Panics
4463///
4464/// Panics if `order` is [`Relaxed`].
4465///
4466/// # Examples
4467///
4468/// Without the two `compiler_fence` calls, the read of `IMPORTANT_VARIABLE` in `signal_handler`
4469/// is *undefined behavior* due to a data race, despite everything happening in a single thread.
4470/// This is because the signal handler is considered to run concurrently with its associated
4471/// thread, and explicit synchronization is required to pass data between a thread and its
4472/// signal handler. The code below uses two `compiler_fence` calls to establish the usual
4473/// release-acquire synchronization pattern (see [`fence`] for an image).
4474///
4475/// ```
4476/// use std::sync::atomic::AtomicBool;
4477/// use std::sync::atomic::Ordering;
4478/// use std::sync::atomic::compiler_fence;
4479///
4480/// static mut IMPORTANT_VARIABLE: usize = 0;
4481/// static IS_READY: AtomicBool = AtomicBool::new(false);
4482///
4483/// fn main() {
4484///     unsafe { IMPORTANT_VARIABLE = 42 };
4485///     // Marks earlier writes as being released with future relaxed stores.
4486///     compiler_fence(Ordering::Release);
4487///     IS_READY.store(true, Ordering::Relaxed);
4488/// }
4489///
4490/// fn signal_handler() {
4491///     if IS_READY.load(Ordering::Relaxed) {
4492///         // Acquires writes that were released with relaxed stores that we read from.
4493///         compiler_fence(Ordering::Acquire);
4494///         assert_eq!(unsafe { IMPORTANT_VARIABLE }, 42);
4495///     }
4496/// }
4497/// ```
4498#[inline]
4499#[stable(feature = "compiler_fences", since = "1.21.0")]
4500#[rustc_diagnostic_item = "compiler_fence"]
4501#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4502pub fn compiler_fence(order: Ordering) {
4503    // SAFETY: using an atomic fence is safe.
4504    unsafe {
4505        match order {
4506            Acquire => intrinsics::atomic_singlethreadfence::<{ AO::Acquire }>(),
4507            Release => intrinsics::atomic_singlethreadfence::<{ AO::Release }>(),
4508            AcqRel => intrinsics::atomic_singlethreadfence::<{ AO::AcqRel }>(),
4509            SeqCst => intrinsics::atomic_singlethreadfence::<{ AO::SeqCst }>(),
4510            Relaxed => panic!("there is no such thing as a relaxed fence"),
4511        }
4512    }
4513}
4514
4515#[cfg(target_has_atomic_load_store = "8")]
4516#[stable(feature = "atomic_debug", since = "1.3.0")]
4517impl fmt::Debug for AtomicBool {
4518    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4519        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4520    }
4521}
4522
4523#[cfg(target_has_atomic_load_store = "ptr")]
4524#[stable(feature = "atomic_debug", since = "1.3.0")]
4525#[cfg(not(feature = "ferrocene_subset"))]
4526impl<T> fmt::Debug for AtomicPtr<T> {
4527    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4528        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4529    }
4530}
4531
4532#[cfg(target_has_atomic_load_store = "ptr")]
4533#[stable(feature = "atomic_pointer", since = "1.24.0")]
4534#[cfg(not(feature = "ferrocene_subset"))]
4535impl<T> fmt::Pointer for AtomicPtr<T> {
4536    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4537        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
4538    }
4539}
4540
4541/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
4542///
4543/// This function is deprecated in favor of [`hint::spin_loop`].
4544///
4545/// [`hint::spin_loop`]: crate::hint::spin_loop
4546#[inline]
4547#[stable(feature = "spin_loop_hint", since = "1.24.0")]
4548#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
4549#[cfg(not(feature = "ferrocene_subset"))]
4550pub fn spin_loop_hint() {
4551    spin_loop()
4552}