Blacklist ntfs3 and use ntfs-3g for a slower but stable experience

Updated: April 24, 2026

What are you talking about, Dedo, you be asking? Well, I be answering. If you happen to have a heterogeneous operating system setup, your lowest common denominator is data sharing across all these platforms. There are several well-supported filesystem options, including FAT32 and exFAT. But you may also have NTFS drives in your mix. Okay, on paper, no problem. Linux handles that. For all practical purposes, you can even create and format partitions as NTFS using any one of the Linux partitioning tools.

In reality, I encountered all sorts of issues recently. First, here's a baseline for you. I've been copying data between Windows and Linux consistently and reliably for at least the last fifteen years. Throw TrueCrypt and VeraCrypt containers into the mix, and still, no trouble. TB upon TB copied and written, millions of files and folders created, and whatnot. That was in the past. Today, we're talking system freezes and disk corruption. Technically, another regression. Let's work around it, shall we?

How come?

From what I've been able to gather, Linux has a kernel module called ntfs3. It's supposed to be fast. Alas, it doesn't work well. Notably, I noticed my machines would very quickly run out of memory on any large data copy operation, especially if you have fast storage and quick buses, ergo NVMe, SSD or such. I also noticed data access problems. Various programs trying to access NTFS-formatted drives would hang or seize up. I even encountered data loss, with an entire partition and its contents gone. All right, it is what it is.
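If you want to see which driver is handling your NTFS mounts right now, a quick look at /proc/mounts will tell you. This is just a sanity-check sketch; your device names will differ. The kernel driver reports the filesystem type as ntfs3, while ntfs-3g mounts show up as fuseblk:

```shell
# Kernel-driver mounts list their type as "ntfs3"; ntfs-3g (FUSE) mounts
# list theirs as "fuseblk". The fallback message means no NTFS partitions
# are currently mounted.
grep -E 'ntfs3|fuseblk' /proc/mounts || echo "no NTFS mounts found"
```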

Solution

When I first reported on this, lots of you emailed me and suggested I try the slower userspace utility. I had already started this experiment myself, but it was nice to get some extra nudges and confirmation from my readers. Indeed, after I blacklisted the kernel module and thus began using the ntfs-3g utility, my NTFS problems disappeared. Here's how I did it on an Ubuntu-based system - the principle is the same for pretty much any other distro out there, with perhaps only tiny variations in the path and name of the blacklist file.

Open the following file in a text editor, as root or with sudo:

sudo nano /etc/modprobe.d/blacklist.conf

Add the following directive to the bottom of the list:

blacklist ntfs3

Reboot your Linux system.
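Once the system is back up, you can verify the blacklist took effect. This is a small optional sanity check, not part of the procedure proper; if the module is truly gone, the grep finds nothing and prints the fallback message instead:

```shell
# If the blacklist worked, ntfs3 will be absent from the loaded modules,
# and you'll see the fallback message.
grep -w ntfs3 /proc/modules || echo "ntfs3 is not loaded"
```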

Install the ntfs-3g package if it isn't already present, e.g., in Ubuntu:

sudo apt install ntfs-3g
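If you mount your NTFS partitions through /etc/fstab, you can also request the userspace driver explicitly. The entry below is only a sketch; the UUID, mount point and ownership options are placeholders you'd replace with your own values:

```
# Hypothetical fstab entry; substitute your own UUID and mount point.
UUID=XXXX-XXXX  /mnt/windows  ntfs-3g  defaults,uid=1000,gid=1000  0  0
```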

Now, you can do all your work as before, including sophisticated setups like encrypted containers, or even the Mac-Linux-virtualization-shared-folder combo I detailed in this funky little guide. The downside? The data operations won't be very fast. But they should work correctly. I tested this multiple times, including the highly complex setup above. From the most recent test:

sent 146,685,013,222 bytes received 4,491,159 bytes 15,228,601.54 bytes/sec
total size is 308,345,210,556 speedup is 2.10
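Those figures are standard rsync summary output. As a quick sanity check of what that rate means in practice, roughly 15 MB/s over the ~146 GB actually sent comes out to just under three hours; a throwaway awk one-liner using the numbers above confirms it:

```shell
# Bytes sent divided by the reported rate, converted to hours.
awk 'BEGIN { printf "%.1f hours\n", 146685013222 / 15228601.54 / 3600 }'
```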

Conclusion

I know, I know. Using NTFS in Linux is asking for trouble. Then again, most desktop systems in this world run Windows, and if you want simple interoperability, NTFS is the way to go. You cannot convince random people to partition and format their disks with exFAT or anything of that sort. Furthermore, Linux has the built-in support. The kernel module is sort of a clue. Hence, if it's there, it should work.

As I've faced tons of problems with the ntfs3 driver, I think the simplest solution is to use the old, crusty, slow but reliable ntfs-3g toolset. You won't get blazing speed, but you will get consistent results. My testing over the years, with millions of files and many TBs of data copied, shows this to be the case. Should you encounter issues similar to mine, give it a go. Your backups won't be fast, but your system won't go bork, either. Soon, we might even get a new NTFS driver in the kernel, which could help resolve all of these issues, both reliability and speed. Stay tuned for updates.

Cheers.