Custom Memory Allocation

Custom Memory Allocators

wee_alloc

wee_alloc wee_alloc-crates.io wee_alloc-github wee_alloc-lib.rs cat-memory-management cat-no-std cat-wasm cat-web-programming cat-embedded

wee_alloc is a small allocator designed for WebAssembly targets, trading allocation speed for a minimal code-size footprint.
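
A minimal sketch of registering wee_alloc as the global allocator (the wee_alloc = "0.4" version below is an assumption, not taken from this page):

//! Demonstrates registering wee_alloc as the global allocator.
//!
//! Add to your `Cargo.toml`:
//! ```toml
//! [dependencies]
//! wee_alloc = "0.4" # assumed version
//! ```

// All heap allocations in this program now go through wee_alloc.
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

fn main() {
    // This vector is allocated by wee_alloc.
    let v = vec![1, 2, 3];
    assert_eq!(v.len(), 3);
}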

Use a Custom Allocator with tikv-jemallocator

tikv-jemallocator tikv-jemallocator-crates.io tikv-jemallocator-github tikv-jemallocator-lib.rs cat-memory-management cat-api-bindings

tikv-jemallocator is a Rust allocator backed by jemalloc, a well-known C allocator. It is a drop-in replacement for the default Rust global allocator (std::alloc::System).

//! Demonstrates using a custom global allocator.
//!
//! Add to your `Cargo.toml`:
//! ```toml
//! [dependencies]
//! anyhow = "1" # used below for error handling
//!
//! [target.'cfg(not(target_env = "msvc"))'.dependencies]
//! tikv-jemallocator = "0.6"
//! tikv-jemalloc-ctl = { version = "0.6.0", features = [ "stats", "use_std" ] } # optional - for introspection
//! ```
#![cfg(not(target_env = "msvc"))]

use tikv_jemallocator::Jemalloc;

// Once you've defined the following static, jemalloc will be used for all
// allocations requested by Rust code in the same program.
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() -> anyhow::Result<()> {
    // Allocate a large vector.
    let v: Vec<i32> = Vec::with_capacity(1_000_000);
    print_alloc()?;

    // Drop the vector.
    drop(v);
    print_alloc()?;
    Ok(())
}

fn print_alloc() -> anyhow::Result<()> {
    use tikv_jemalloc_ctl::epoch;
    use tikv_jemalloc_ctl::stats;
    // Many statistics are cached and only updated when the epoch is advanced.
    epoch::advance().unwrap();
    let allocated = stats::allocated::read().unwrap();
    let resident = stats::resident::read().unwrap();
    println!(
        "{} bytes allocated / {} bytes resident",
        allocated, resident
    );

    // Full allocator statistics:
    // tikv_jemalloc_ctl::stats_print::stats_print(std::io::stdout(),
    // tikv_jemalloc_ctl::stats_print::Options::default())?;
    Ok(())
}

Use the mimalloc Memory Allocator

mimalloc mimalloc-crates.io mimalloc-github mimalloc-lib.rs cat-api-bindings cat-memory-management

mimalloc is a general-purpose, performance-oriented allocator developed by Microsoft. Like tikv-jemallocator, it is a drop-in replacement for the default Rust global allocator (std::alloc::System).

//! Demonstrates the use of the mimalloc memory allocator.
//!
//! Mimalloc is a general purpose, performance oriented allocator built by
//! Microsoft.
//!
//! Add to your `Cargo.toml`:
//! ```toml
//! [dependencies]
//! mimalloc = "0.1.46" # Or latest
//! ```
//!
//! A C compiler is required.
//!
//! Using secure mode adds guard pages, randomized allocation, encrypted free
//! lists, etc. The performance penalty is usually around 10%.
//!
//! ```toml
//! [dependencies]
//! mimalloc = { version = "*", features = ["secure"] }
//! ```

use mimalloc::MiMalloc;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

fn main() {
    // Allocate a large vector.
    let _v = vec![0; 1024 * 1024];
}

Pre-allocated Storage for a Uniform Data Type

slab

slab slab-crates.io slab-github slab-lib.rs cat-data-structures cat-memory-management cat-no-std

slab provides pre-allocated storage for a single data type. If many values of a single type are being allocated, it can be more efficient to pre-allocate the necessary storage. Since the size of the type is uniform, memory fragmentation can be avoided. Storing, clearing, and lookup operations become very cheap.

While slab may look like other Rust collections, it is not intended to be used as a general purpose collection. The primary difference between slab and Vec is that slab returns the key when storing the value.

It is important to note that keys may be reused. In other words, once a value associated with a given key is removed from a slab, that key may be returned from future calls to insert.
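
A minimal sketch of this behavior (the slab = "0.4" version below is an assumption, not taken from this page):

//! Demonstrates pre-allocated storage with `slab`.
//!
//! Add to your `Cargo.toml`:
//! ```toml
//! [dependencies]
//! slab = "0.4" # assumed version
//! ```

use slab::Slab;

fn main() {
    // Pre-allocate room for 1024 values of a single type.
    let mut slab = Slab::with_capacity(1024);

    // `insert` stores the value and returns its key.
    let hello = slab.insert("hello");
    let world = slab.insert("world");
    assert_eq!(slab[hello], "hello");
    assert_eq!(slab.get(world), Some(&"world"));

    // Removing a value vacates its slot; the key may be handed out again.
    slab.remove(hello);
    let reused = slab.insert("hello again");
    assert_eq!(reused, hello);
}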

bumpalo

bumpalo bumpalo-crates.io bumpalo-github bumpalo-lib.rs cat-memory-management cat-no-std cat-rust-patterns

bumpalo is a fast bump allocation arena for Rust: values are allocated by bumping a pointer within a pre-allocated chunk, and everything in the arena is freed at once when the arena is dropped.
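
A minimal sketch (the bumpalo = "3" version below is an assumption, not taken from this page):

//! Demonstrates bump allocation with `bumpalo`.
//!
//! Add to your `Cargo.toml`:
//! ```toml
//! [dependencies]
//! bumpalo = "3" # assumed version
//! ```

use bumpalo::Bump;

fn main() {
    // Allocations are carved out of chunks owned by the arena.
    let bump = Bump::new();

    let x = bump.alloc(41);
    *x += 1;
    assert_eq!(*x, 42);

    let s: &str = bump.alloc_str("hello");
    assert_eq!(s, "hello");

    // Everything allocated in the arena is freed at once when `bump` is dropped.
}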

Garbage Collection with seize

seize seize-crates.io seize-github seize-lib.rs cat-concurrency cat-memory-management

seize provides fast, efficient, and predictable memory reclamation for concurrent data structures.