[PATCH] mm/filemap: do not allocate cache pages beyond end of file at read

From: Konstantin Khlebnikov
Date: Mon Oct 28 2019 - 05:59:54 EST


The page cache can legitimately contain pages beyond the end of file:
during a write, or when a read races with a truncate. But
generic_file_buffered_read() always allocates unneeded pages beyond EOF
if somebody reads there, and one extra page past the end if the file
size is page-aligned.
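
As an illustration (hypothetical userspace reproducer, not part of the
patch): reading at EOF of a page-aligned file returns 0 as expected,
but before this patch it still left one cache page beyond EOF behind.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[64];
	ssize_t n;
	int fd = open("testfile", O_RDWR | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return 1;
	/* Page-aligned size: EOF falls exactly on a page boundary. */
	if (ftruncate(fd, 4096))
		return 1;
	/* Read at EOF: returns 0, but used to allocate a cache page. */
	n = pread(fd, buf, sizeof(buf), 4096);
	printf("pread at EOF returned %zd\n", n);
	close(fd);
	return 0;
}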

generic_file_buffered_read() calls page_cache_sync_readahead() if the
page is not found in the cache, and then does another lookup. Readahead
checks the file size in __do_page_cache_readahead() before allocating
pages, so it never populates pages beyond EOF and the second lookup
fails as well. After that, generic_file_buffered_read() falls back to
the slow path and allocates a page for ->readpage() without checking
the file size.
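
For context, the slow path being patched looks roughly like this in
mm/filemap.c of that era (paraphrased sketch with error handling
trimmed, not a verbatim quote):

no_cached_page:
	/*
	 * Readahead stopped at i_size, so the second lookup failed and
	 * we fall through here: a page is allocated with no size check.
	 */
	page = page_cache_alloc(mapping);
	if (!page) {
		error = -ENOMEM;
		goto out;
	}
	error = add_to_page_cache_lru(page, mapping, index,
			mapping_gfp_constraint(mapping, GFP_KERNEL));
	if (error)
		goto out;
	goto readpage;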

This patch checks the file size before allocating a page for
->readpage().
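
To see the arithmetic, assume 4 KiB pages (PAGE_SHIFT == 12) and a file
of exactly 4096 bytes: page index 1 starts at offset 4096 >= i_size, so
the check skips exactly the extra page from the page-aligned case
above. A hypothetical standalone userspace demonstration:

#include <stdio.h>

#define PAGE_SHIFT 12	/* assume 4 KiB pages */

int main(void)
{
	long long i_size = 4096;	/* page-aligned file size */
	unsigned long index;

	for (index = 0; index < 3; index++) {
		long long pos = (long long)index << PAGE_SHIFT;
		printf("index %lu: offset %lld -> %s\n", index, pos,
		       pos >= i_size ? "skip, beyond EOF" : "allocate");
	}
	return 0;
}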

Signed-off-by: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
---
mm/filemap.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/mm/filemap.c b/mm/filemap.c
index 85b7d087eb45..92abf5f348a9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2225,6 +2225,10 @@ static ssize_t generic_file_buffered_read(struct kiocb *iocb,
 		goto out;
 
 no_cached_page:
+		/* Do not allocate cache pages beyond end of file. */
+		if (((loff_t)index << PAGE_SHIFT) >= i_size_read(inode))
+			goto out;
+
 		/*
 		 * Ok, it wasn't cached, so we need to create a new
 		 * page..