Free all memory before reallocating.

This should minimize the memory high-water mark in applications. In the previous code, if there was a 1GB fallback block (and the main ptr_ was 0 bytes, say), we would create the new 1GB allocation first and only then free the 1GB fallback block, resulting in a 2GB memory high-water mark. In the new code, the high-water mark for that case is only 1GB.

In the best case this change yields a 2x savings on the memory high-water mark. In practice it may not reach 2x, but it can come quite close: the savings factor is `(2 * (size of final steady-state buffer allocation) - (next smaller total allocation size)) / (size of final steady-state buffer allocation)`. When the next smaller total allocation size is 0, this is exactly 2x. We expect the benefit to remain high in practice, because this allocator specifically relies on reaching a steady state rapidly, so the steady-state size must grow in very large jumps, meaning the next smaller total allocation size is likely much smaller than the final steady-state buffer allocation.

PiperOrigin-RevId: 488405672
README.md

The ruy matrix multiplication library

This is not an officially supported Google product.

ruy is a matrix multiplication library. Its focus is to cover the matrix multiplication needs of neural network inference engines. Its initial user has been TensorFlow Lite, where it is used by default on the ARM CPU architecture.

ruy supports both floating-point and 8-bit-integer-quantized matrices.

Efficiency

ruy is designed to achieve high performance not just on very large sizes, as is the focus of many established libraries, but on the actual sizes and shapes of matrices that matter most in current TensorFlow Lite applications. This often means quite small sizes, e.g. 100x100 or even 50x50, and all sorts of rectangular shapes. It is not as fast as completely specialized code for each shape, but it aims to offer a good compromise of speed across all shapes together with a small binary size.

Documentation

Some documentation will eventually be available in the doc/ directory, see doc/README.md.