
Why PyTorch Doesn’t Work for Spatial Storage

PyTorch is designed for tensor operations on fixed-size arrays. MagickCache requires dynamic spatial storage with variable geometry. These are fundamentally incompatible paradigms.

# PyTorch tensor operations
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])       # Fixed 2x3 matrix
y = torch.tensor([[7, 8, 9], [10, 11, 12]])    # Same dimensions required
result = torch.matmul(x, y.T)                  # Matrix multiplication -> 2x2 result

Characteristics:

  • Fixed dimensions: Tensors have predetermined shapes (see the sketch after this list)
  • Dense operations: Every element participates in calculations
  • Batch processing: Operations on entire arrays at once
  • GPU parallelization: SIMD operations across tensor elements
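
A small illustration of that fixed-dimension constraint (a minimal sketch; the shapes here are arbitrary):

# Minimal sketch: tensor shapes are fixed and must agree exactly
import torch

a = torch.randn(2, 3)
b = torch.randn(4, 5)
# torch.matmul(a, b)  # raises RuntimeError: the shapes cannot be multiplied
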
# What we need for spatial storage
space = SpatialStorage()
space.store_at((15.7, 23.1, 8.9), "data1") # Arbitrary coordinates
space.store_at((156.3, 2.7, 99.1), "data2") # Completely different location
results = space.query_sphere((15.0, 23.0, 9.0), radius=2.0) # Variable results

Problems:

  • Dynamic coordinates: Can’t predefine tensor dimensions (sketched below)
  • Sparse data: Most spatial coordinates are empty
  • Variable results: Query results have unpredictable sizes
  • Geometric operations: Spatial relationships aren’t matrix math
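
To force arbitrary real-valued coordinates into a tensor at all, every coordinate would have to be quantized onto a dense grid whose bounds and resolution are fixed up front. A minimal sketch of that workaround (the grid size, resolution, and helper name are assumptions, not MagickCache API):

# Sketch: squeezing arbitrary coordinates into a fixed dense grid
import torch

RESOLUTION = 1.0                       # assumed grid spacing
grid = torch.zeros(200, 200, 200)      # bounds must be chosen before any data arrives

def store_at(point, value):
    # Quantize the real-valued coordinate to the nearest grid cell
    i, j, k = (int(round(c / RESOLUTION)) for c in point)
    grid[i, j, k] = value              # precision is lost; tensors only hold numbers

store_at((15.7, 23.1, 8.9), 1.0)       # lands in cell (16, 23, 9)
store_at((156.3, 2.7, 99.1), 2.0)      # works only because the fixed bounds happen to cover it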

Matrix multiplication is the canonical tensor operation: dense, regular, and fixed-shape.

[a b]   [e f]   [ae+bg  af+bh]
[c d] × [g h] = [ce+dg  cf+dh]

Properties:

  • Fixed input/output dimensions
  • Every element affects every result
  • Parallelizable with SIMD
  • Mathematically regular

A spatial query has none of that regularity:

Query: Find all points within sphere(center=(0,0,0), radius=5)
Result: Variable number of points with irregular distribution

Properties:

  • Dynamic input/output sizes (illustrated below)
  • Only nearby elements affect results
  • Requires spatial indexing
  • Geometrically irregular
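
In practice, that means two otherwise identical queries cannot be packed into one fixed-shape tensor. A minimal sketch (the result counts are hypothetical):

# Two identical queries, very different result sizes (counts are hypothetical)
results_a = space.query_sphere((0.0, 0.0, 0.0), radius=5.0)    # e.g. 3 points found
results_b = space.query_sphere((90.0, 4.0, 7.0), radius=5.0)   # e.g. 17 points found
# The ragged results have no common shape, so they cannot be stacked into a tensor
# without padding or masking.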

The memory model is just as mismatched.

# Dense tensor - every position has a value
tensor = torch.zeros(1000, 1000, 1000)    # 1 billion float32 values allocated
# Uses ~4 GB of memory even if mostly empty

# Sparse spatial storage - only occupied coordinates
spatial = SpatialStorage()
spatial.store_at((100, 500, 750), value)  # Only allocates what's needed
spatial.store_at((200, 600, 850), value)  # Minimal memory usage
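
The arithmetic behind that gap is stark (a rough estimate; the per-point cost of the sparse store is an assumption):

# Rough memory estimate: dense grid vs. sparse store holding two points
dense_bytes = 1000 * 1000 * 1000 * 4      # 10^9 float32 values ≈ 4 GB
sparse_bytes = 2 * 64                     # ~64 bytes per stored point (assumed)
print(dense_bytes // sparse_bytes)        # ≈ 31 million times more memory for the dense grid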

Structure is fixed at creation time for a PyTorch model, but must stay open-ended for spatial storage.

# Must know dimensions at creation time
model = torch.nn.Linear(784, 128)   # Fixed: 784 inputs → 128 outputs
# Cannot dynamically add new input dimensions or change structure

# Can add data anywhere, anytime
storage.store_at((x, y, z), data)   # Any coordinates allowed
storage.expand_region(new_bounds)   # Dynamic space expansion
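
For contrast, “growing” a tensor is not an in-place operation: it means allocating a larger one and copying everything over. A minimal sketch (not MagickCache code):

# Sketch: a tensor cannot grow in place; expansion is allocate-and-copy
import torch

grid = torch.zeros(100, 100, 100)
bigger = torch.zeros(200, 100, 100)   # allocate a new, larger tensor
bigger[:100] = grid                   # copy every existing element across
grid = bigger                         # every expansion repeats this full copy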

Indexing differs, too.

# Array-style indexing
tensor[0, 5, 10]         # Access element at fixed integer indices
tensor[0:5, :, 10:20]    # Slice rectangular regions

# Coordinate-based indexing
storage.get_at((15.7, 23.1, 8.9))      # Access by spatial coordinate
storage.query_sphere(center, radius)   # Query by geometric relationship
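
Coordinate-based indexing is usually built on mapping real-valued coordinates to discrete grid cells. A minimal sketch of that idea (the cell size is an assumption, and this is not the MagickCache implementation):

# Sketch: hash a real-valued coordinate into a discrete grid cell
CELL_SIZE = 4.0   # assumed cell edge length

def cell_key(point):
    # Nearby coordinates land in the same cell, giving O(1) bucket lookup
    return tuple(int(c // CELL_SIZE) for c in point)

cell_key((15.7, 23.1, 8.9))   # -> (3, 5, 2)
cell_key((15.0, 23.0, 9.0))   # -> (3, 5, 2), the same bucket as its neighbour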

Asymptotic costs differ as well.

Matrix multiplication: O(n³) for n×n matrices
Element-wise operations: O(n) in the number of elements
Reduction operations: O(n) in the number of elements

All operations scale with tensor size.

Spatial query: O(r³) where r = search radius
Point insertion: O(log n) with spatial indexing
Neighborhood search: O(k) where k = results found

Operations scale with geometric parameters, not total data size.
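
To make that contrast concrete, here is a toy grid-hash spatial index in Python: insertion touches one bucket, and a sphere query only visits cells inside the search radius. This is an illustrative sketch, not the MagickCache implementation (the cell size, names, and data layout are all assumptions):

# Toy grid-hash spatial index (illustrative sketch only, not MagickCache)
import math
from collections import defaultdict

CELL_SIZE = 4.0   # assumed cell edge length

class ToySpatialStorage:
    def __init__(self):
        self.cells = defaultdict(list)   # cell key -> list of (point, value) pairs

    def _key(self, point):
        return tuple(int(c // CELL_SIZE) for c in point)

    def store_at(self, point, value):
        # Insertion touches exactly one bucket
        self.cells[self._key(point)].append((point, value))

    def query_sphere(self, center, radius):
        # Only visit grid cells that can overlap the sphere: work scales with r³,
        # not with the total number of stored points
        lo = [int((c - radius) // CELL_SIZE) for c in center]
        hi = [int((c + radius) // CELL_SIZE) for c in center]
        found = []
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    for point, value in self.cells.get((i, j, k), []):
                        if math.dist(point, center) <= radius:
                            found.append((point, value))
        return found   # variable-length result: O(k) in the number of matches

Used with the earlier example data, a small-radius query returns only the nearby point, no matter how many far-away points the store holds:

space = ToySpatialStorage()
space.store_at((15.7, 23.1, 8.9), "data1")
space.store_at((156.3, 2.7, 99.1), "data2")
space.query_sphere((15.0, 23.0, 9.0), radius=2.0)   # -> [((15.7, 23.1, 8.9), "data1")]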

GPU parallelism also looks different in each world.

# Utilizes all GPU cores for dense operations
result = torch.matmul(large_tensor_a, large_tensor_b)  # Every core working

// Custom CUDA kernels for spatial operations
__global__ void spatial_query_kernel(Point* points, Query query, Result* results) {
    // Each thread handles one spatial region
    // Irregular workload distribution across threads
}

What PyTorch lacks for this problem:

  • No spatial primitives: No built-in sphere queries or distance-based lookups
  • Tensor constraints: Can’t efficiently represent sparse spatial data
  • Memory overhead: Dense tensor allocation for sparse spatial data
  • Fixed operations: Limited to linear algebra operations

What spatial storage needs instead:

  • Spatial primitives: Native sphere queries, distance calculations, spatial indexing
  • Memory efficiency: Only allocate space for actual data points
  • Geometric operations: Optimized for spatial relationships
  • Variable workloads: Handle irregular spatial distributions

# Trying to do a spatial query with PyTorch
import torch

def pytorch_spatial_query(tensor_space, center, radius):
    # Create coordinate grids covering every position in the tensor
    x_coords, y_coords, z_coords = torch.meshgrid(
        torch.arange(tensor_space.shape[0]),
        torch.arange(tensor_space.shape[1]),
        torch.arange(tensor_space.shape[2]),
        indexing="ij",
    )

    # Compute the distance from the center for every position (very expensive)
    distances = torch.sqrt(
        (x_coords - center[0]) ** 2
        + (y_coords - center[1]) ** 2
        + (z_coords - center[2]) ** 2
    )

    # Find elements within radius
    mask = distances <= radius
    return tensor_space[mask]  # Still O(n³) complexity!

// Native spatial query - O(r³) complexity
SearchResult* spatial_query(SpatialStorage* storage, Point center, float radius) {
    SearchResult* results = create_search_result();

    // Only check grid cells within radius
    GridCell* relevant_cells = get_cells_in_sphere(storage, center, radius);

    // Only examine points in relevant cells (assumes a linked list of cells)
    for (GridCell* cell = relevant_cells; cell != NULL; cell = cell->next) {
        check_points_in_cell(cell, center, radius, results);
    }
    return results;
}

PyTorch’s design philosophy:

“Provide building blocks for neural networks through tensor operations”

  • Assumes dense, regular computational patterns
  • Optimizes for batch processing
  • Designed for gradient-based optimization

MagickCache’s design philosophy:

“Provide native spatial storage with geometric operations”

  • Assumes sparse, irregular spatial patterns
  • Optimizes for proximity relationships
  • Designed for spatial locality

PyTorch is the right tool for machine-learning workloads:

# Perfect for this
loss = criterion(model(batch_input), batch_targets)
loss.backward()
optimizer.step()

# Perfect for this
embeddings = torch.matmul(input_vectors, weight_matrix)

MagickCache is the right tool for spatial workloads:

# Perfect for this
nearby_items = storage.query_sphere(user_location, search_radius)

# Perfect for this
storage.store_at(gps_coordinate, sensor_reading)
spatial_clusters = storage.find_dense_regions()

PyTorch is a hammer, and not every problem is a nail.

For spatial storage, we need:

  • Geometric operations, not tensor operations
  • Sparse efficiency, not dense parallelism
  • Dynamic structure, not fixed dimensions
  • Spatial indexing, not array indexing

This is why MagickCache required a custom C implementation with specialized CUDA kernels: no existing framework was designed for our spatial storage paradigm.


Sometimes you have to build the tool that doesn’t exist yet.