Making Linux read swap back into memory

The Linux kernel swaps out most pages from memory when I run an application that uses most of the 16GB of physical memory. After the application finishes, every action (typing commands, switching workspaces, opening a new web page, etc.) takes a very long time to complete, because the relevant pages first need to be read back in from swap.

Is there a way to tell the Linux kernel to copy pages from swap back into physical memory without manually touching (and waiting for) each application? I run lots of applications so the wait is always painful.

I often use swapoff -a && swapon -a to make the system responsive again, but this clears the pages from swap, so they need to be written again the next time I run the script.

Is there a kernel interface, perhaps using sysfs, to instruct the kernel to read all pages from swap?

Edit: I am indeed looking for a way to make all of swap swapcached. (Thanks derobert!)
[P.S.… and… are related topics but do not address the question of how to get the Linux kernel to copy pages from swap back into memory without clearing swap.]



Method 1

It might help to increase /proc/sys/vm/page-cluster (default: 3).

From the kernel documentation (sysctl/vm.txt):


page-cluster controls the number of pages up to which consecutive
pages are read in from swap in a single attempt. This is the swap
counterpart to page cache readahead. The mentioned consecutivity is
not in terms of virtual/physical addresses, but consecutive on swap
space – that means they were swapped out together.

It is a logarithmic value – setting it to zero means “1 page”, setting
it to 1 means “2 pages”, setting it to 2 means “4 pages”, etc. Zero
disables swap readahead completely.

The default value is three (eight pages at a time). There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

Lower values mean lower latencies for initial faults, but at the same
time extra faults and I/O delays for following faults if they would
have been part of that consecutive pages readahead would have brought
in.
The documentation doesn’t mention a limit, so possibly you could set this absurdly high to make all of swap be read back in really soon. And of course turn it back to a sane value afterwards.
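As a sketch of that idea (the chosen value 10 and the need for root privileges are assumptions about a typical setup, not documented recommendations):

```shell
# Read the current value (default is usually 3, i.e. 2^3 = 8 pages per attempt)
cat /proc/sys/vm/page-cluster

# Temporarily raise it; the value is a log2, so 10 means 2^10 = 1024 pages
# (4 MiB with 4 KiB pages) read in per swap-in attempt.
echo 10 | sudo tee /proc/sys/vm/page-cluster

# ... use the system so the large readahead pulls swap back in ...

# Restore the default afterwards
echo 3 | sudo tee /proc/sys/vm/page-cluster
```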

Method 2

It seems to me that you can’t magically “make the system responsive again”. You either incur the penalty of reading pages back from swap space into memory now, or you incur it later, but one way or the other you incur it. Indeed, if you do something like swapoff -a && swapon -a then you may feel more pain rather than less, because you force pages to be copied back into memory that would otherwise never have been needed again and would eventually have been dropped without being read (think: you quit an application while much of its heap is swapped out; those pages can be discarded altogether without ever being read back into memory).

but this clears the pages from swap, so they need to be written again the next time I run the script.

Well, pretty much any page that gets copied back from swap into main memory is about to be modified anyway, so if it ever needed to be moved back out to swap again in the future, it would have to be written anew in swap anyway. Keep in mind that swap is mainly heap memory, not read-only pages (which are usually file-backed).

I think your swapoff -a && swapon -a trick is as good as anything you could come up with.
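To see how much data is actually sitting in swap before paying that cost, /proc/meminfo and the per-process status files are informative. This is a sketch of my own, not part of the original answer; the field names come from proc(5):

```shell
# System-wide: how much is swapped out, and how much of that is also
# still resident in RAM (the swap cache)?
grep -E 'SwapTotal|SwapFree|SwapCached' /proc/meminfo

# Per-process: VmSwap is each process's swapped-out footprint (in kB).
# Kernel threads have no VmSwap line and are silently skipped.
for f in /proc/[0-9]*/status; do
    awk '/^Name/{n=$2} /^VmSwap/{print $2, n}' "$f" 2>/dev/null
done | sort -rn | head
```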

Method 3

Based on the memdump program originally found here, I’ve created a script, remember, to selectively read specified applications back into memory:

#!/bin/bash
declare -A Q
for i in "$@"; do
    E=$(readlink /proc/$i/exe)
    if [ -z "$E" ]; then
        #echo skipped $i >&2
        continue
    fi
    if echo $E | grep -qF memdump; then
        #echo skipped $i >&2
        continue
    fi
    if [ -n "${Q[${E}]}" ]; then
        #echo already $i >&2
        continue
    fi
    Q[${E}]=$i
    echo "$i $E" >&2
    memdump $i 2> /dev/null
done | pv -c -i 2 > /dev/null

Usage: something like
# ./remember $(< /mnt/cgroup/tasks )
1 /sbin/init
882 /bin/bash
1301 /usr/bin/hexchat
2.21GiB 0:00:02 [ 1.1GiB/s] [  <=>     ]
6838 /sbin/agetty
11.6GiB 0:00:10 [1.16GiB/s] [      <=> ]
23.7GiB 0:00:38 [ 637MiB/s] [   <=>    ]

It quickly skips over non-swapped memory (gigabytes per second) and slows down when swap is needed.
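If you don’t have the memdump binary, a rough pure-shell substitute is to read a process’s readable private mappings through /proc/<pid>/mem, which faults the swapped pages back in as a side effect. This is my own sketch, not the original answer’s code: it needs the same ptrace permissions memdump would, the region filter is simplified, and unreadable pseudo-mappings are just skipped via the suppressed dd errors:

```shell
#!/bin/bash
# Usage: ./fault-in <pid>   (hypothetical name)
pid=$1
while read -r range perms rest; do
    # Only readable, private mappings (e.g. heap, stack, anonymous)
    [[ $perms == r*p ]] || continue
    start=$((16#${range%-*}))   # hex start address -> decimal
    end=$((16#${range#*-}))     # hex end address -> decimal
    # Seek to the region in /proc/<pid>/mem and read it page by page;
    # failures on special regions are ignored.
    dd if=/proc/$pid/mem bs=4096 skip=$((start / 4096)) \
       count=$(((end - start) / 4096)) 2>/dev/null
done < /proc/$pid/maps > /dev/null
```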

Method 4

You may try adding the programs you most care about to a cgroup and tuning its swappiness, so that the next time the big application runs, the programs you added are less likely to be candidates for swapping.

Some of their pages will likely still be swapped out, but it may get around your performance problems. A large part of the problem is probably the “stop and start” behavior when many of a program’s pages are in swap and it has to pause continually in order to fault its pages back into RAM, only 4k at a time.

Alternatively, you may add the application that’s running to a cgroup and tune swappiness so that the application is the one that tends to use the swap file most. It’ll slow down the application but it’ll spare the rest of the system.
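A sketch of the cgroup approach, assuming a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory (the mount point and the group name interactive are assumptions; note that cgroup v2 dropped the per-group swappiness knob):

```shell
# Create a group whose memory the kernel should be reluctant to swap
sudo mkdir /sys/fs/cgroup/memory/interactive
echo 1 | sudo tee /sys/fs/cgroup/memory/interactive/memory.swappiness

# Move the current shell (and its future children) into the group
echo $$ | sudo tee /sys/fs/cgroup/memory/interactive/tasks
```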

Method 5

There is a very nice discussion here, which boils down to decreasing swappiness, the idea being that to increase the perceived responsiveness of the system one should prevent code pages from being swapped out (and that is what lower swappiness does). This is not really an answer to your question, but it may prevent the problem from appearing in the first place: your applications simply do not get swapped out; only unused data and page cache do.
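For concreteness, the global knob can be adjusted like this (the value 10 and the drop-in file name are illustrative choices, not recommendations from the discussion):

```shell
# Check the current global value (default is 60 on most distros)
sysctl vm.swappiness

# Prefer dropping page cache over swapping out anonymous pages
sudo sysctl vm.swappiness=10

# Persist across reboots via a conventional sysctl drop-in
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```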

All methods are licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0.
