How to Create a 2D Array in Rust: Vec<Vec<T>> vs ndarray

rust 2d array ndarray vec data structures

To create a 2D array in Rust, you have four practical options: a fixed-size [[T; COLS]; ROWS] when dimensions are known at compile time, a Vec<Vec<T>> for quick prototyping, a flat Vec<T> with manual index math for cache-friendly performance, and the ndarray crate when you need slicing, broadcasting, or linear algebra. For most production code where the dimensions are determined at runtime, use a flat Vec<T> wrapped in a small struct. It gives you a single contiguous allocation, excellent cache locality, and zero dependency overhead.

Option 1: Fixed-Size Array on the Stack

When both dimensions are compile-time constants, a plain nested array is the simplest and fastest choice. It lives entirely on the stack with zero heap allocations.

fn main() {
    // Initialize a 3x4 grid of zeros.
    let mut grid: [[i32; 4]; 3] = [[0; 4]; 3];

    grid[1][2] = 42;

    // Iterate over every element with row and column indices.
    for (r, row) in grid.iter().enumerate() {
        for (c, val) in row.iter().enumerate() {
            print!("({r},{c})={val} ");
        }
        println!();
    }
}

Option 2: Vec<Vec<T>>, Convenient but Slow

This is the pattern most beginners reach for because it mirrors how 2D arrays work in Python or Java. Each inner Vec is a separate heap allocation, so you get ROWS + 1 allocations total, and rows are scattered across memory. This kills cache performance in tight loops. Use it for throwaway scripts or when rows genuinely have different lengths (a jagged array).

fn main() {
    let rows = 3;
    let cols = 4;

    // Each row is an independent heap allocation.
    let mut grid = vec![vec![0i32; cols]; rows];

    grid[1][2] = 42;

    println!("Element at (1,2): {}", grid[1][2]);
}

Option 3: Flat Vec<T> with Index Math (the Production Choice)

A single Vec of length rows * cols stores the entire grid in one contiguous block of memory. You convert 2D coordinates to a 1D index with row * cols + col. Wrapping this in a struct with an Index or IndexMut impl gives you ergonomic grid[(r, c)] syntax while keeping the performance of a flat buffer. Game engines, image libraries, and numerical code all use this approach in practice.

struct Grid<T> {
    data: Vec<T>,
    cols: usize,
}

impl<T: Clone> Grid<T> {
    fn new(rows: usize, cols: usize, default: T) -> Self {
        Self { data: vec![default; rows * cols], cols }
    }
}

impl<T> std::ops::Index<(usize, usize)> for Grid<T> {
    type Output = T;
    fn index(&self, (r, c): (usize, usize)) -> &T {
        &self.data[r * self.cols + c]
    }
}

impl<T> std::ops::IndexMut<(usize, usize)> for Grid<T> {
    fn index_mut(&mut self, (r, c): (usize, usize)) -> &mut T {
        &mut self.data[r * self.cols + c]
    }
}

fn main() {
    let mut grid = Grid::new(3, 4, 0i32);
    grid[(1, 2)] = 42;
    println!("Element at (1,2): {}", grid[(1, 2)]);
}

Option 4: The ndarray Crate

If you need slicing along arbitrary axes, matrix multiplication, or interop with BLAS/LAPACK, reach for ndarray. It stores data in a flat buffer internally (the same idea as Option 3) and adds rich n-dimensional indexing, views, and broadcasting. Add it to your Cargo.toml.

cargo add ndarray

use ndarray::Array2;

fn main() {
    // Create a 3x4 array filled with zeros.
    let mut grid = Array2::<i32>::zeros((3, 4));

    grid[[1, 2]] = 42;

    // Sum a single row using a slice view.
    let row_sum: i32 = grid.row(1).sum();
    println!("Row 1 sum: {row_sum}");

    // Iterate with indices.
    for ((r, c), val) in grid.indexed_iter() {
        print!("({r},{c})={val} ");
    }
}

Performance Comparison

The critical difference between Vec<Vec<T>> and the flat approaches (Option 3 and ndarray) is cache locality. A Vec<Vec<i32>> with 1,000 rows makes 1,001 heap allocations, and iterating column-by-column forces the CPU to chase pointers across memory. A flat Vec makes exactly one allocation, and a sequential scan reads straight through contiguous memory, which is the scenario that modern CPUs are optimized for.

In benchmarks that iterate over a 1,000×1,000 i32 grid, the flat Vec approach is typically 2 to 5 times faster than Vec<Vec<T>> for full-grid scans. ndarray matches the flat Vec because it uses the same layout internally. The fixed-size stack array is fastest of all when it fits on the stack, but you'll hit a stack overflow for large grids (the main thread's stack is typically 8 MB on Linux and macOS, and Rust's spawned threads default to just 2 MB).

Gotchas and Pitfalls

Stack overflow with large fixed arrays. A [[f64; 1000]; 1000] is 8 MB and will blow the default stack. Either increase the stack size with std::thread::Builder or use a heap-allocated approach instead.

Off-by-one with row-major vs. column-major. The flat Vec pattern above is row-major (row * cols + col). If you iterate column-first, you get cache-hostile access. ndarray defaults to row-major (RowMajor / C order) but can switch to column-major (ColumnMajor / Fortran order) at construction time. Pick whichever matches your dominant access pattern.

Vec<Vec<T>> allows jagged rows. Nothing prevents grid[0] from having length 5 and grid[1] from having length 3. If your code assumes rectangular data, a single wrong push silently corrupts your grid. The flat Vec wrapper makes this structurally impossible.

Bounds checking. All four approaches panic on out-of-bounds access in both debug and release mode (Rust checks Vec and array indices at runtime). If you need unchecked access in a hot loop, use get_unchecked inside an unsafe block. Profile first, though: in sequential iteration the optimizer often elides the checks entirely, and the branch predictor hides most of the remaining cost.

Which One Should You Pick?

Use the fixed-size array when dimensions are constants and the grid is small (under a few hundred KB). Use the flat Vec struct for any runtime-sized grid where you control the access patterns: game maps, image buffers, dynamic programming tables. Use ndarray when you need slicing, linear algebra, or you're porting NumPy code. Use Vec<Vec<T>> only for genuinely jagged data or one-off prototypes where you don't care about performance.

← Back to all articles