[pytorch/pytorch] [Eager] Out-of-Bounds Memory Read via resize on Overlapping Views

### ROOT CAUSE

Resizing a tensor that has overlapping views can trigger an out-of-bounds memory read. The resize operation updates the tensor's storage but does not invalidate the storage references held by existing views. When the resize shrinks or reallocates the storage, those views continue to index into memory that has been truncated or deallocated, so reading through them walks past the end of the live allocation.

### CODE FIX

The sketch below illustrates the shape of the fix. The helper names (`_is_size_smaller_than_storage`, `_resize_inplace`, `_invalidate_views`) are illustrative stand-ins for internal bookkeeping, not public PyTorch API:

```python
def resize_(self, *new_size):
    # Shrinking: the requested size fits inside the current storage.
    if torch._is_size_smaller_than_storage(self, new_size):
        # Truncate the storage and update size/stride metadata.
        self._resize_inplace(new_size)
        # Invalidate stale views so they cannot read past the end of
        # the truncated storage (illustrative: a real fix would walk
        # the set of aliasing views and clear their storage references).
        self._invalidate_views()
        return self
    # Growing (or same size): allocate fresh storage. Existing views
    # keep the old storage, which remains fully valid, so they do not
    # need to be invalidated.
    new_storage = torch.empty(new_size, dtype=self.dtype, device=self.device)
    self.storage = new_storage
    return self
```

With this change, a shrinking resize explicitly invalidates the views that alias the truncated storage, so a later read through a stale view fails fast instead of reading out of bounds.
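To make the invalidation idea concrete, here is a minimal pure-Python model of the bug and the fix. The `Storage`, `View`, and `Tensor` classes are simplified stand-ins invented for illustration; they do not reflect PyTorch's actual internals:

```python
class Storage:
    """A flat buffer of elements."""
    def __init__(self, n):
        self.data = [0] * n


class View:
    """A view holds a reference to a storage plus an offset and length."""
    def __init__(self, storage, offset, length):
        self.storage = storage
        self.offset = offset
        self.length = length

    def read(self, i):
        if self.storage is None:
            raise RuntimeError("view invalidated by resize")
        if not (0 <= i < self.length):
            raise IndexError("index out of view bounds")
        # Without invalidation, this read could walk past the end of a
        # storage that was shrunk after the view was created.
        return self.storage.data[self.offset + i]


class Tensor:
    def __init__(self, n):
        self.storage = Storage(n)
        self.views = []

    def view(self, offset, length):
        v = View(self.storage, offset, length)
        self.views.append(v)
        return v

    def resize_(self, n):
        if n < len(self.storage.data):
            # Shrinking: truncate storage and invalidate every view so
            # stale references cannot read truncated memory.
            self.storage.data = self.storage.data[:n]
            for v in self.views:
                v.storage = None
            self.views.clear()
        else:
            # Growing: allocate new storage; old views keep the old
            # (still fully valid) storage, so no invalidation is needed.
            new_storage = Storage(n)
            new_storage.data[: len(self.storage.data)] = self.storage.data
            self.storage = new_storage


t = Tensor(8)
v = t.view(4, 4)   # view over elements 4..7
t.resize_(2)       # shrink: v would otherwise point past the end
try:
    v.read(0)
except RuntimeError as e:
    print(e)       # view invalidated by resize
```

In this model, a read through a stale view raises immediately rather than returning garbage from out-of-bounds memory, which is the behavior the fix above aims for.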