Struct rustc_middle::mir::interpret::allocation::Allocation
pub struct Allocation<Prov = AllocId, Extra = ()> {
bytes: Box<[u8]>,
provenance: ProvenanceMap<Prov>,
init_mask: InitMask,
pub align: Align,
pub mutability: Mutability,
pub extra: Extra,
}
This type represents an Allocation in the Miri/CTFE core engine.
Its public API is rather low-level, working directly with allocation offsets and a custom error
type to account for the lack of an AllocId on this level. The Miri/CTFE core engine memory
module provides higher-level access.
Fields
bytes: Box<[u8]>
The actual bytes of the allocation. Note that the bytes of a pointer represent the offset of the pointer.
provenance: ProvenanceMap<Prov>
Maps from byte addresses to extra provenance data for each pointer. Only the first byte of a pointer is inserted into the map; i.e., every entry in this map applies to pointer_size consecutive bytes starting at the given offset.
init_mask: InitMask
Denotes which part of this allocation is initialized.
align: Align
The alignment of the allocation, to detect unaligned reads. (Align guarantees that this is a power of two.)
mutability: Mutability
true if the allocation is mutable. Also used by codegen to determine if a static should be put into mutable memory, which happens for static mut and static with interior mutability.
extra: Extra
Extra state for the machine.
Implementations
impl<Prov> Allocation<Prov>
pub fn from_bytes<'a>(
    slice: impl Into<Cow<'a, [u8]>>,
    align: Align,
    mutability: Mutability
) -> Self
Creates an allocation initialized by the given bytes.
pub fn from_bytes_byte_aligned_immutable<'a>(
slice: impl Into<Cow<'a, [u8]>>
) -> Self
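A minimal sketch of using both constructors. This is not compilable outside of rustc, and the paths for Align and Mutability (rustc_target::abi and rustc_ast) are assumptions:

use std::borrow::Cow;
use rustc_ast::Mutability;
use rustc_middle::mir::interpret::Allocation;
use rustc_target::abi::Align;

// A 4-byte mutable allocation with 4-byte alignment.
let alloc = Allocation::from_bytes(
    Cow::Borrowed(&[0u8, 1, 2, 3][..]),
    Align::from_bytes(4).unwrap(),
    Mutability::Mut,
);
assert_eq!(alloc.len(), 4);

// Shorthand for byte-aligned, immutable data such as string literals.
let lit = Allocation::from_bytes_byte_aligned_immutable(&b"hello"[..]);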
impl Allocation
pub fn adjust_from_tcx<Prov, Extra, Err>(
    self,
    cx: &impl HasDataLayout,
    extra: Extra,
    adjust_ptr: impl FnMut(Pointer<AllocId>) -> Result<Pointer<Prov>, Err>
) -> Result<Allocation<Prov, Extra>, Err>
Adjusts an allocation from the ones in tcx to a custom Machine instance with a different Provenance and Extra type.
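A hedged sketch of how a Machine might call this; MachineProv, MachineExtra, MachineError, and machine_provenance are hypothetical stand-ins for the machine's own types:

// Hypothetical machine types; only the adjust_from_tcx call itself is real.
let adjusted: Allocation<MachineProv, MachineExtra> = alloc.adjust_from_tcx(
    &tcx,                    // anything implementing HasDataLayout
    MachineExtra::default(), // machine-specific per-allocation state
    |ptr| {
        // Translate each AllocId-based pointer into machine provenance.
        let (alloc_id, offset) = ptr.into_parts();
        Ok::<_, MachineError>(Pointer::new(machine_provenance(alloc_id), offset))
    },
)?;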
impl<Prov, Extra> Allocation<Prov, Extra>
Raw accessors. Provide access to otherwise private bytes.
pub fn len(&self) -> usize
pub fn size(&self) -> Size
pub fn inspect_with_uninit_and_ptr_outside_interpreter(
    &self,
    range: Range<usize>
) -> &[u8]
Looks at a slice which may contain uninitialized bytes or provenance. This differs from get_bytes_with_uninit_and_ptr in that it does no provenance checks (even on the edges) at all.
This must not be used for reads affecting the interpreter execution.
pub fn provenance(&self) -> &ProvenanceMap<Prov>
Returns the provenance map.
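For example, codegen-style inspection outside the interpreter could look like this (a sketch, reusing alloc from the constructor example above):

// Read every byte, deliberately skipping provenance and init checks.
let raw: &[u8] = alloc.inspect_with_uninit_and_ptr_outside_interpreter(0..alloc.len());
// Separately inspect where pointers live inside the allocation.
let provenance = alloc.provenance();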
impl<Prov: Provenance, Extra> Allocation<Prov, Extra>
Byte accessors.
pub fn get_bytes_unchecked(&self, range: AllocRange) -> &[u8]
This is the entirely abstraction-violating way to just grab the raw bytes without caring about provenance or initialization.
This function also guarantees that the resulting pointer will remain stable even when new allocations are pushed to the HashMap. mem_copy_repeatedly relies on that.
pub fn get_bytes_strip_provenance(
    &self,
    cx: &impl HasDataLayout,
    range: AllocRange
) -> Result<&[u8], AllocError>
Checks that these bytes are initialized, and then strips provenance (if possible) and returns them.
It is the caller’s responsibility to check bounds and alignment beforehand. Most likely, you want to use the PlaceTy- and OperandTy-based methods on InterpCx instead.
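A sketch of such a bounds-checked read, assuming a HasDataLayout context cx and the alloc_range helper from this module:

use rustc_target::abi::Size;

let range = alloc_range(Size::ZERO, Size::from_bytes(4));
// Fails with AllocError if the bytes are uninitialized or carry
// provenance that cannot be stripped.
let bytes: &[u8] = alloc.get_bytes_strip_provenance(&cx, range)?;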
pub fn get_bytes_mut(
    &mut self,
    cx: &impl HasDataLayout,
    range: AllocRange
) -> Result<&mut [u8], AllocError>
Just calling this already marks everything as defined and removes provenance, so be sure to actually put data there!
It is the caller’s responsibility to check bounds and alignment beforehand. Most likely, you want to use the PlaceTy- and OperandTy-based methods on InterpCx instead.
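For example (a sketch, with cx and range as above):

// The call itself marks `range` as initialized and provenance-free,
// so the returned buffer must actually be filled afterwards.
let dest = alloc.get_bytes_mut(&cx, range)?;
dest.copy_from_slice(&42u32.to_le_bytes());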
pub fn get_bytes_mut_ptr(
    &mut self,
    cx: &impl HasDataLayout,
    range: AllocRange
) -> Result<*mut [u8], AllocError>
A raw pointer variant of get_bytes_mut that avoids invalidating existing aliases into this memory.
impl<Prov: Provenance, Extra> Allocation<Prov, Extra>
Reading and writing.
pub fn read_scalar(
    &self,
    cx: &impl HasDataLayout,
    range: AllocRange,
    read_provenance: bool
) -> Result<Scalar<Prov>, AllocError>
Reads a non-ZST scalar.
If read_provenance is true, this will also read provenance; otherwise (if the machine supports that) provenance is entirely ignored.
ZSTs can’t be read because in order to obtain a Pointer, we need to check for ZSTness anyway due to integer pointers being valid for ZSTs.
It is the caller’s responsibility to check bounds and alignment beforehand. Most likely, you want to call InterpCx::read_scalar instead of this method.
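A sketch of reading a pointer-sized scalar together with its provenance, with cx as above:

let ptr_size = cx.data_layout().pointer_size;
let val = alloc.read_scalar(
    &cx,
    alloc_range(Size::ZERO, ptr_size),
    /* read_provenance */ true,
)?;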
pub fn write_scalar(
    &mut self,
    cx: &impl HasDataLayout,
    range: AllocRange,
    val: Scalar<Prov>
) -> Result<(), AllocError>
Writes a non-ZST scalar.
ZSTs can’t be written because in order to obtain a Pointer, we need to check for ZSTness anyway due to integer pointers being valid for ZSTs.
It is the caller’s responsibility to check bounds and alignment beforehand. Most likely, you want to call InterpCx::write_scalar instead of this method.
pub fn write_uninit(
    &mut self,
    cx: &impl HasDataLayout,
    range: AllocRange
) -> Result<(), AllocError>
Writes “uninit” to the given memory range.
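Together with write_scalar, this gives the usual round trip (a sketch, with cx as above):

let range = alloc_range(Size::ZERO, Size::from_bytes(4));
alloc.write_scalar(&cx, range, Scalar::from_u32(7))?;
// De-initialize the same range again: subsequent reads of it
// will now fail the initialization check.
alloc.write_uninit(&cx, range)?;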
impl<Prov: Copy, Extra> Allocation<Prov, Extra>
Provenance.
fn range_get_provenance(
    &self,
    cx: &impl HasDataLayout,
    range: AllocRange
) -> &[(Size, Prov)]
Returns all provenance overlapping with the given pointer-offset pair.
fn offset_get_provenance(
    &self,
    cx: &impl HasDataLayout,
    offset: Size
) -> Option<Prov>
Get the provenance of a single byte.
pub fn range_has_provenance(
    &self,
    cx: &impl HasDataLayout,
    range: AllocRange
) -> bool
Returns whether this allocation has provenance overlapping with the given range.
Note: this function exists to allow range_get_provenance to be private, in order to somewhat limit access to provenance outside of the Allocation abstraction.
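A sketch of the typical check, with cx and range as above:

if alloc.range_has_provenance(&cx, range) {
    // Some byte in `range` belongs to a pointer; copying these bytes
    // as plain integers would silently lose that provenance.
}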
fn clear_provenance(
    &mut self,
    cx: &impl HasDataLayout,
    range: AllocRange
) -> Result<(), AllocError>
where
    Prov: Provenance,
Removes all provenance inside the given range. If there is provenance overlapping with the edges, it is removed as well, and the bytes it covers are marked as uninitialized. This is a somewhat odd “spooky action at a distance”, but it allows strictly more code to run than if we would just error immediately in that case.
impl<Prov: Copy, Extra> Allocation<Prov, Extra>
pub fn prepare_provenance_copy(
&self,
cx: &impl HasDataLayout,
src: AllocRange,
dest: Size,
count: u64
) -> AllocationProvenance<Prov>
pub fn mark_provenance_range(&mut self, provenance: AllocationProvenance<Prov>)
Applies a provenance copy.
The affected range, as defined in the parameters to prepare_provenance_copy, is expected to be clear of provenance.
This is dangerous to use as it can violate internal Allocation invariants! It only exists to support an efficient implementation of mem_copy_repeatedly.
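A sketch of the intended two-phase use, where src_alloc, dest_alloc, src_range, and dest_offset are hypothetical setup and the destination range is assumed to be clear of provenance:

// Phase 1: relocate the source provenance for `count` consecutive copies.
let copy = src_alloc.prepare_provenance_copy(&cx, src_range, dest_offset, /* count */ 3);
// Phase 2: splice it into the destination in one go.
dest_alloc.mark_provenance_range(copy);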
impl<Prov: Copy, Extra> Allocation<Prov, Extra>
Uninitialized bytes.
fn is_init(&self, range: AllocRange) -> Result<(), AllocRange>
Checks whether the given range is entirely initialized.
Returns Ok(()) if it’s initialized. Otherwise returns the range of byte indexes of the first contiguous uninitialized access.
fn check_init(&self, range: AllocRange) -> Result<(), AllocError>
Checks that a range of bytes is initialized. If not, returns the InvalidUninitBytes error, which reports the first range of uninitialized bytes.
fn mark_init(&mut self, range: AllocRange, is_init: bool)
impl<Prov, Extra> Allocation<Prov, Extra>
Transferring the initialization mask to other allocations.
pub fn compress_uninit_range(&self, range: AllocRange) -> InitMaskCompressed
Creates a run-length encoding of the initialization mask; panics if range is empty.
This is essentially a more space-efficient version of InitMask::range_as_init_chunks(...).collect::<Vec<_>>().
pub fn mark_compressed_init_range(
    &mut self,
    defined: &InitMaskCompressed,
    range: AllocRange,
    repeat: u64
)
Applies multiple instances of the run-length encoding to the initialization mask.
This is dangerous to use as it can violate internal Allocation invariants! It only exists to support an efficient implementation of mem_copy_repeatedly.
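A sketch of transferring an init mask between allocations, mirroring what mem_copy_repeatedly does (same hypothetical setup as the provenance copy above):

// Run-length encode the initialization state of the source range...
let compressed = src_alloc.compress_uninit_range(src_range);
// ...then stamp it into the destination `repeat` times.
dest_alloc.mark_compressed_init_range(&compressed, dest_range, /* repeat */ 3);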
Trait Implementations
impl<'tcx> ArenaAllocatable<'tcx, IsNotCopy> for Allocation
fn allocate_on<'a>(self, arena: &'a Arena<'tcx>) -> &'a mut Self
fn allocate_from_iter<'a>(
arena: &'a Arena<'tcx>,
iter: impl IntoIterator<Item = Self>
) -> &'a mut [Self]
impl<'tcx> Borrow<Allocation<AllocId, ()>> for InternedInSet<'tcx, Allocation>
fn borrow<'a>(&'a self) -> &'a Allocation
impl<Prov: Clone, Extra: Clone> Clone for Allocation<Prov, Extra>
fn clone(&self) -> Allocation<Prov, Extra>
fn clone_from(&mut self, source: &Self)
impl<Prov: Debug, Extra: Debug> Debug for Allocation<Prov, Extra>
impl<'tcx, Prov, Extra, __D: TyDecoder<I = TyCtxt<'tcx>>> Decodable<__D> for Allocation<Prov, Extra>
where
    Prov: Decodable<__D>,
    Extra: Decodable<__D>,
impl<'tcx, Prov, Extra, __E: TyEncoder<I = TyCtxt<'tcx>>> Encodable<__E> for Allocation<Prov, Extra>
where
    Prov: Encodable<__E>,
    Extra: Encodable<__E>,
impl Hash for Allocation
impl<'__ctx, Prov, Extra> HashStable<StableHashingContext<'__ctx>> for Allocation<Prov, Extra>
where
    Prov: HashStable<StableHashingContext<'__ctx>>,
    Extra: HashStable<StableHashingContext<'__ctx>>,
fn hash_stable(
    &self,
    __hcx: &mut StableHashingContext<'__ctx>,
    __hasher: &mut StableHasher
)
impl<Prov: Ord, Extra: Ord> Ord for Allocation<Prov, Extra>
fn cmp(&self, other: &Allocation<Prov, Extra>) -> Ordering
fn max(self, other: Self) -> Self
fn min(self, other: Self) -> Self
fn clamp(self, min: Self, max: Self) -> Self
where
    Self: PartialOrd<Self>,
impl<Prov: PartialEq, Extra: PartialEq> PartialEq<Allocation<Prov, Extra>> for Allocation<Prov, Extra>
fn eq(&self, other: &Allocation<Prov, Extra>) -> bool
impl<Prov: PartialOrd, Extra: PartialOrd> PartialOrd<Allocation<Prov, Extra>> for Allocation<Prov, Extra>
fn partial_cmp(&self, other: &Allocation<Prov, Extra>) -> Option<Ordering>
fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for self and other) and is used by the <= operator.
impl<Prov: Eq, Extra: Eq> Eq for Allocation<Prov, Extra>
impl<Prov, Extra> StructuralEq for Allocation<Prov, Extra>
impl<Prov, Extra> StructuralPartialEq for Allocation<Prov, Extra>
Auto Trait Implementations
impl<Prov, Extra> RefUnwindSafe for Allocation<Prov, Extra>
where
    Extra: RefUnwindSafe,
    Prov: RefUnwindSafe,
impl<Prov, Extra> Send for Allocation<Prov, Extra>
where
    Extra: Send,
    Prov: Send,
impl<Prov, Extra> Sync for Allocation<Prov, Extra>
where
    Extra: Sync,
    Prov: Sync,
impl<Prov, Extra> Unpin for Allocation<Prov, Extra>
where
    Extra: Unpin,
    Prov: Unpin,
impl<Prov, Extra> UnwindSafe for Allocation<Prov, Extra>
where
    Extra: UnwindSafe,
    Prov: UnwindSafe,
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<Ctxt, T> DepNodeParams<Ctxt> for T
where
    Ctxt: DepContext,
    T: for<'a> HashStable<StableHashingContext<'a>> + Debug,
default fn fingerprint_style() -> FingerprintStyle
default fn to_fingerprint(&self, tcx: Ctxt) -> Fingerprint
default fn to_debug_str(&self, Ctxt) -> String
default fn recover(Ctxt, &DepNode<<Ctxt as DepContext>::DepKind>) -> Option<T>
This method tries to recover the query key from the given DepNode, something which is needed when forcing DepNodes during red-green evaluation. The query system will only call this method if fingerprint_style() is not FingerprintStyle::Opaque.
It is always valid to return None here, in which case incremental compilation will treat the query as having changed instead of forcing it.
impl<T, R> InternIteratorElement<T, R> for T
type Output = R
fn intern_with<I, F>(iter: I, f: F) -> <T as InternIteratorElement<T, R>>::Output
where
    I: Iterator<Item = T>,
    F: FnOnce(&[T]) -> R,
impl<T> MaybeResult<T> for T
impl<CTX, T> Value<CTX> for T
where
    CTX: DepContext,
default fn from_cycle_error(tcx: CTX) -> T
impl<'a, T> Captures<'a> for T
where
    T: ?Sized,
Layout
Note: Unable to compute type layout, possibly due to this type having generic parameters. Layout can only be computed for concrete, fully-instantiated types.