Struct rustc_query_system::dep_graph::graph::CurrentDepGraph
pub(super) struct CurrentDepGraph<K: DepKind> {
encoder: Steal<GraphEncoder<K>>,
new_node_to_index: Sharded<FxHashMap<DepNode<K>, DepNodeIndex>>,
prev_index_to_index: Lock<IndexVec<SerializedDepNodeIndex, Option<DepNodeIndex>>>,
fingerprints: Lock<FxHashMap<DepNode<K>, Fingerprint>>,
forbidden_edge: Option<EdgeFilter<K>>,
anon_id_seed: Fingerprint,
total_read_count: AtomicU64,
total_duplicate_read_count: AtomicU64,
node_intern_event_id: Option<EventId>,
}
CurrentDepGraph stores the dependency graph for the current session. It will be populated as we run queries or tasks. We never remove nodes from the graph: they are only added.

The nodes in it are identified by a DepNodeIndex. We avoid keeping the nodes in memory. This is important, because these graph structures are some of the largest in the compiler.
For this reason, we avoid storing DepNodes more than once as map keys. The new_node_to_index map only contains nodes not in the previous graph, and we map nodes in the previous graph to indices via a two-step mapping. SerializedDepGraph maps from DepNode to SerializedDepNodeIndex, and the prev_index_to_index vector (which is more compact and faster than using a map) maps from SerializedDepNodeIndex to DepNodeIndex.
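As a rough illustration of that two-step mapping, here is a minimal, self-contained sketch. It is not the real implementation: std collections stand in for Sharded, FxHashMap, and IndexVec, plain type aliases stand in for the index newtypes and DepNode<K>, and lookup_index is a hypothetical helper.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real index newtypes and DepNode<K>.
type SerializedDepNodeIndex = usize;
type DepNodeIndex = usize;
type DepNode = &'static str;

// Sketch of the two-step mapping: new nodes are keyed directly, while
// nodes from the previous graph go DepNode -> SerializedDepNodeIndex
// (via SerializedDepGraph) -> DepNodeIndex (via a dense vector).
struct TwoStepMap {
    // What SerializedDepGraph provides for the previous session's nodes.
    prev_node_to_serialized: HashMap<DepNode, SerializedDepNodeIndex>,
    // Dense vector: previous index -> current index, if already promoted.
    prev_index_to_index: Vec<Option<DepNodeIndex>>,
    // Nodes that did not exist in the previous graph.
    new_node_to_index: HashMap<DepNode, DepNodeIndex>,
}

impl TwoStepMap {
    // Hypothetical helper: resolve a DepNode to its current index.
    fn lookup_index(&self, node: DepNode) -> Option<DepNodeIndex> {
        if let Some(&prev) = self.prev_node_to_serialized.get(node) {
            // Step 1: DepNode -> SerializedDepNodeIndex,
            // step 2: SerializedDepNodeIndex -> DepNodeIndex.
            self.prev_index_to_index[prev]
        } else {
            self.new_node_to_index.get(node).copied()
        }
    }
}

fn main() {
    let map = TwoStepMap {
        prev_node_to_serialized: HashMap::from([("typeck(foo)", 0)]),
        prev_index_to_index: vec![Some(7)],
        new_node_to_index: HashMap::from([("typeck(bar)", 42)]),
    };
    assert_eq!(map.lookup_index("typeck(foo)"), Some(7));
    assert_eq!(map.lookup_index("typeck(bar)"), Some(42));
}
```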
This struct uses three locks internally. The data, new_node_to_index, and prev_index_to_index fields are locked separately. Operations that take a DepNodeIndex typically just access the data field.

We only need to manipulate at most two locks simultaneously: new_node_to_index and data, or prev_index_to_index and data. When manipulating both, we acquire new_node_to_index or prev_index_to_index first, and data second.
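A minimal sketch of that lock-ordering discipline, with std::sync::Mutex standing in for the compiler's Lock/Sharded types and a plain Vec standing in for the streamed GraphEncoder. The promote function below is a simplified stand-in for promote_node_and_deps_to_current; the new_node_to_index/data pair follows the same pattern (see the sketch under intern_new_node further down).

```rust
use std::sync::Mutex;

type DepNodeIndex = u32;
type SerializedDepNodeIndex = usize;

// Simplified stand-ins: the real fields use Lock/Sharded from
// rustc_data_structures and stream node data through a GraphEncoder.
struct Graph {
    prev_index_to_index: Mutex<Vec<Option<DepNodeIndex>>>,
    data: Mutex<Vec<SerializedDepNodeIndex>>, // one entry per current node
}

impl Graph {
    // Promote a node from the previous graph into the current one,
    // following the documented order: prev_index_to_index first,
    // data second. At most these two locks are held at once.
    fn promote(&self, prev_index: SerializedDepNodeIndex) -> DepNodeIndex {
        let mut prev = self.prev_index_to_index.lock().unwrap();
        if let Some(index) = prev[prev_index] {
            // Already promoted by another thread; no need to touch `data`.
            return index;
        }
        let mut data = self.data.lock().unwrap(); // acquired second
        let index = data.len() as DepNodeIndex;
        data.push(prev_index);
        prev[prev_index] = Some(index);
        index
    }
}

fn main() {
    let g = Graph {
        prev_index_to_index: Mutex::new(vec![None; 8]),
        data: Mutex::new(Vec::new()),
    };
    let a = g.promote(3);
    let b = g.promote(3); // second call reuses the existing index
    assert_eq!(a, b);
}
```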
Fields
encoder: Steal<GraphEncoder<K>>
new_node_to_index: Sharded<FxHashMap<DepNode<K>, DepNodeIndex>>
prev_index_to_index: Lock<IndexVec<SerializedDepNodeIndex, Option<DepNodeIndex>>>
fingerprints: Lock<FxHashMap<DepNode<K>, Fingerprint>>
This is used to verify that fingerprints do not change between the creation of a node and its recomputation.
forbidden_edge: Option<EdgeFilter<K>>
Used to trap when a specific edge is added to the graph. This is used for debug purposes and is only active with debug_assertions.
anon_id_seed: Fingerprint
Anonymous DepNodes are nodes whose IDs we compute from the list of their edges. This has the beneficial side-effect that multiple anonymous nodes can be coalesced into one without changing the semantics of the dependency graph. However, the merging of nodes can lead to a subtle problem during red-green marking: the color of an anonymous node from the current session might “shadow” the color of the node with the same ID from the previous session. In order to side-step this problem, we make sure that anonymous NodeIds allocated in different sessions don’t overlap. This is implemented by mixing a session-key into the ID fingerprint of each anon node. The session-key is just a random number generated when the DepGraph is created. (A sketch of this mixing follows the field list below.)
total_read_count: AtomicU64
These are simple counters that are for profiling and debugging and only active with debug_assertions.
total_duplicate_read_count: AtomicU64
node_intern_event_id: Option<EventId>
The cached event id for profiling node interning. This saves us from having to look up the event id every time we intern a node, which may incur too much overhead. This will be None if self-profiling is disabled.
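The sketch below illustrates the session-key mixing described for anon_id_seed. It is only a model: a 64-bit DefaultHasher stands in for the compiler's 128-bit Fingerprint, and anon_node_id is a hypothetical helper, not the real ID computation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Simplified stand-in for the compiler's 128-bit Fingerprint.
type AnonId = u64;
type DepNodeIndex = u32;

// The ID of an anonymous node is derived from the per-session random seed
// plus the list of its dependency edges, so anonymous nodes with identical
// edges coalesce within a session, while IDs from different sessions do not
// collide by construction.
fn anon_node_id(anon_id_seed: u64, edges: &[DepNodeIndex]) -> AnonId {
    let mut hasher = DefaultHasher::new();
    anon_id_seed.hash(&mut hasher); // session-key mixed in first
    for &edge in edges {
        edge.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    let session_a = 0x1234_5678_9abc_def0;
    let session_b = 0x0fed_cba9_8765_4321;
    let edges = [3, 7, 11];

    // Same edges, same session: identical IDs, so the nodes coalesce.
    assert_eq!(anon_node_id(session_a, &edges), anon_node_id(session_a, &edges));
    // Same edges, different session: IDs differ, avoiding the "shadowing"
    // problem during red-green marking (up to hash collisions).
    assert_ne!(anon_node_id(session_a, &edges), anon_node_id(session_b, &edges));
}
```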
Implementations
impl<K: DepKind> CurrentDepGraph<K>
fn new(
profiler: &SelfProfilerRef,
prev_graph_node_count: usize,
encoder: FileEncoder,
record_graph: bool,
record_stats: bool
) -> CurrentDepGraph<K>
fn record_edge(
&self,
dep_node_index: DepNodeIndex,
key: DepNode<K>,
fingerprint: Fingerprint
)
fn intern_new_node(
&self,
profiler: &SelfProfilerRef,
key: DepNode<K>,
edges: SmallVec<[DepNodeIndex; 8]>,
current_fingerprint: Fingerprint
) -> DepNodeIndex
Writes the node to the current dep-graph and allocates a DepNodeIndex for it.
Assumes that this is a node that has no equivalent in the previous dep-graph.
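A matching sketch of that contract, under simplified assumptions: std types replace Sharded/Lock, a Vec replaces the GraphEncoder, and the string key and u64 fingerprint are placeholders. It also follows the lock order described above: new_node_to_index first, data second.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

type DepNodeIndex = u32;
type Fingerprint = u64; // stand-in for the compiler's 128-bit fingerprint

struct Graph {
    // Only nodes with no equivalent in the previous graph are keyed here.
    new_node_to_index: Mutex<HashMap<String, DepNodeIndex>>,
    // Stand-in for the streamed encoder: (edges, fingerprint) per node.
    data: Mutex<Vec<(Vec<DepNodeIndex>, Fingerprint)>>,
}

impl Graph {
    // Allocate a DepNodeIndex for `key` and record its edges and
    // fingerprint. If another thread interned the same key in the meantime,
    // reuse its index instead of allocating a duplicate.
    fn intern_new_node(
        &self,
        key: &str,
        edges: Vec<DepNodeIndex>,
        current_fingerprint: Fingerprint,
    ) -> DepNodeIndex {
        let mut map = self.new_node_to_index.lock().unwrap();
        if let Some(&index) = map.get(key) {
            return index;
        }
        let mut data = self.data.lock().unwrap(); // second lock, per the rule
        let index = data.len() as DepNodeIndex;
        data.push((edges, current_fingerprint));
        map.insert(key.to_owned(), index);
        index
    }
}

fn main() {
    let g = Graph {
        new_node_to_index: Mutex::new(HashMap::new()),
        data: Mutex::new(Vec::new()),
    };
    let idx = g.intern_new_node("symbol_name(foo)", vec![0, 1], 0xfeed);
    // Interning the same key again returns the same index.
    assert_eq!(g.intern_new_node("symbol_name(foo)", vec![0, 1], 0xfeed), idx);
}
```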
fn intern_node(
&self,
profiler: &SelfProfilerRef,
prev_graph: &SerializedDepGraph<K>,
key: DepNode<K>,
edges: SmallVec<[DepNodeIndex; 8]>,
fingerprint: Option<Fingerprint>,
print_status: bool
) -> (DepNodeIndex, Option<(SerializedDepNodeIndex, DepNodeColor)>)
fn promote_node_and_deps_to_current(
&self,
profiler: &SelfProfilerRef,
prev_graph: &SerializedDepGraph<K>,
prev_index: SerializedDepNodeIndex
) -> DepNodeIndex
fn debug_assert_not_in_new_nodes(
&self,
prev_graph: &SerializedDepGraph<K>,
prev_index: SerializedDepNodeIndex
)
Auto Trait Implementations
impl<K> !RefUnwindSafe for CurrentDepGraph<K>
impl<K> Send for CurrentDepGraph<K>
impl<K> !Sync for CurrentDepGraph<K>
impl<K> Unpin for CurrentDepGraph<K> where K: Unpin
impl<K> !UnwindSafe for CurrentDepGraph<K>
Layout
Note: Most layout information is completely unstable and may even differ between compilations. The only exception is types with certain repr(...) attributes. Please see the Rust Reference’s “Type Layout” chapter for details on type layout guarantees.
Size: 536 bytes