Post by DMcCunney
Post by 98 Guy
I make it a habit of installing XP on FAT32-formatted drives on
old systems with poor specs. FAT32 is faster and more accessible
/ controllable / workable compared to NTFS. I hate NTFS. Its
claimed benefits over FAT32 are largely illusory for home and
SOHO users.
I can't agree. NTFS is a *lot* more robust, and I haven't seen
speed issues.
You have a laptop manufactured in 2002. With some weird-ass bastardized
i86 CPU to boot. Of course you have speed issues.
FAT32 is faster than NTFS no matter how you cut it. Journalling =
overhead.
I don't really want to take this thread on a tangent, but I've
posted some comments (far below) about the true nature of the FAT32 vs
NTFS issue.
Post by DMcCunney
If I have a problem that seriously corrupts the file system on
NTFS, CHKDSK will normally find everything and put it back where
it came from under its proper name. FAT16/FAT32 is another matter.
You are quite misinformed about the repairability of FAT32 (I don't use
FAT16, and you shouldn't confuse or equate FAT16 with FAT32 from a
feature, performance or capability point of view).
Post by DMcCunney
If the box is that low end, I won't install XP on it at all.
I shouldn't have chosen 2K for the old notebook.
The joke about 2K and XP is that unless you're behind a NAT-router, the
minute you install 2K or XP from the original CD and go on-line to
perform a WindozeUpdate, your system will be infected by a network worm
before it has a chance to fully download and install any security
updates or patches (this is known as the "Windows Survival Time" and is
well documented and graphed). Windows 98 has no such vulnerabilities
along those lines, and in general Win-98 is far less vulnerable to the
wide variety of heap-spray and buffer-overflow exploits that commonly
brought down and infected 2K and XP between the years 2003 and 2007.
Post by DMcCunney
You've confirmed what I suspected, and I can proceed. I just need
to make the NTFS partition FAT32, and there are several ways I
could do that.
Thank you.
No problem.
=======================
NTFS vs FAT32
Journaling doesn't preserve data, nor does it prevent an unintended
file-system event from happening. What journaling does is make sure
the file system is "clean" after the event happens. And it also makes
sure that any partially written data is completely lost after the
event has happened.
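If you want to see that trade-off in miniature, here's a toy
write-ahead journal sketched in Python (my own illustration - NTFS's
actual $LogFile machinery is far more involved, and undocumented
besides):

# Toy write-ahead journal: why rollback preserves file-system
# *structure* but throws away partially written *data*.

class JournaledStore:
    def __init__(self):
        self.data = {}      # committed state (the "file system")
        self.journal = []   # intent records for in-flight writes

    def begin_write(self, key, value):
        # Log the intent before touching the committed structure.
        self.journal.append({"key": key, "value": value, "done": False})

    def commit(self):
        # Only at commit does new data join the committed state.
        for rec in self.journal:
            rec["done"] = True
            self.data[rec["key"]] = rec["value"]
        self.journal.clear()

    def recover(self):
        # After a crash, uncommitted records are simply discarded:
        # the structure is "clean", the in-flight data is gone.
        lost = [r for r in self.journal if not r["done"]]
        self.journal.clear()
        return lost

store = JournaledStore()
store.begin_write("report.doc", "three hours of typing")
# -- power fails here, before commit() --
print(store.recover())  # the orphaned data, sacrificed for a clean structure
print(store.data)       # {} - the same state as before the interruption

FAT32, having no journal, would have left those half-written clusters
on the disk as a lost chain instead - messy, but recoverable.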
Try reading the following (written by CQuirke, a current (or former?) MS
MVP):
http://cquirke.blogspot.com/2006/01/bad-file-system-or-incompetent-os.html
http://cquirke.blogspot.com/2008/03/ntfs-vs-fatxx-data-recovery.html
http://cquirke.blogspot.com/2008/03/why-bad-sector-often-kills-you.html
http://cquirke.mvps.org/ntfs.htm
Specifically, take note of the following comments:
=======================
Claim:
- "NTFS may be safer..."
- "transaction rollback cleanly undoes interrupted operations"
Fact:
Your file system is returned to the same state it had before the
interruption. New data that was being written during the interruption
will be lost. NTFS sacrifices orphaned data for the sake of
maintaining a "clean" cluster allocation. FAT32 can't roll back
incomplete transactions, so data that was being written can be
recovered, but at the cost of the unintended creation of lost clusters
or chains, which can lead to a "messy" cluster allocation
--> but rarely (if ever) a dysfunctional file system. <--
========================
My comment:
Journalling does not result in or increase file recoverability. System
files are effectively never journaled because they're never written,
over-written or re-written during normal use. Journalling serves only
to clean up any mess that's left behind if a file-write operation is
improperly terminated - and system files, apps, DLLs and other
program-code files are rarely re-written during normal use. Only temp
data files, the pagefile, user data files and internet-sourced cache
data are subject to file-writing. A lot of that is garbage and not
desirable anyway when the system goes down and needs to be restarted.
That's why .chk files are largely useless, and that's also why the
file system is still perfectly usable even if those .chk files were
never created and the lost clusters remained lost.
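To be clear about what a "lost cluster" actually is: a FAT entry
that's marked in-use but that no directory entry points at. Here's the
scan a scandisk-style tool performs, sketched in Python with the FAT
modelled as a plain list (real tools obviously read the on-disk
table):

# Toy lost-cluster scan. fat[i] holds the next cluster in a chain,
# FREE for unallocated, EOC for end-of-chain.
FREE, EOC = 0, -1

fat = [FREE] * 16
fat[2], fat[3], fat[4] = 3, 4, EOC   # a healthy file: 2 -> 3 -> 4
fat[7], fat[8] = 8, EOC              # an orphaned chain: a write was
                                     # interrupted before the directory
                                     # entry was updated

dir_start_clusters = [2]   # chains reachable from directory entries

def reachable(fat, starts):
    seen = set()
    for c in starts:
        while c != EOC and c not in seen:
            seen.add(c)
            c = fat[c]
    return seen

used = {i for i, nxt in enumerate(fat) if nxt != FREE}
lost = used - reachable(fat, dir_start_clusters)
print(sorted(lost))  # [7, 8] - what scandisk would offer as FILE0000.CHK

Note that the file system proper is untouched: cluster 2's chain is
intact whether or not clusters 7 and 8 ever get reclaimed.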
Under NTFS, user data will be sacrificed for the sake of maintaining
file-system integrity. That is a weakness of NTFS - that user data is
so easily made vulnerable by incomplete file transactions.
On the other hand, FAT32's integrity is not compromised by incomplete
file transactions, even if it does lead to the creation of lost
fragments.
If you read CQuirke's commentary, he makes a point of saying that there
are planned and unplanned file-system events and that NTFS is given more
credit for "saving" a file system from unplanned events than it actually
accomplishes. A system (or drive) that loses power (for whatever
reason) is exactly the scenario against which you judge the design of
a file system and assess its claims of "robustness" or recovery
potential.
If an NTFS volume has to be rolled back to a journaled state (for
whatever reason), then the odds are high that some user data will be
lost - assuming there was a user sitting at the keyboard creating data
that was being periodically saved, or perhaps it's a server that was
receiving network data or a file or an e-mail or it was writing data to
a log file.
I honestly don't see the benefit of journaling, seeing that I've never
encountered a situation on a FAT32 drive where journaling would have
made any difference or would have been desirable.
CQuirke:
===================
Some recovery tools (including anything DOS-based, such as DiskEdit and
ReadNTFS) can't be safely used beyond the 137G line, so it is best to
keep crucial material within this limit. Because ReadNTFS is one of the
only tools that accesses NTFS files independently of the NTFS.sys
driver, it may be the only way into NTFS volumes corrupted in ways that
crash NTFS.sys!
Given the poor results I see when recovering data from NTFS, I'd have to
recommend using FAT32 rather than NTFS as a data survivability
strategy.
====================
While CQuirke quite correctly observes that XP can't format a FAT32
volume larger than 32 gb, it's been my experience that when a FAT32
volume (or drive) of any size is pre-formatted and then presented to
XP, XP has no problems mounting and using the volume / drive, and XP
can even be installed on and operate from such a volume / drive.
He also mentions the 137 gb volume-size issue that is associated with
FAT32, but that association is false. It originates from the fact that
the 32-bit protected-mode driver (ESDI_506.PDR) used by Win-98 has a
"flaw" that prevents it from correctly addressing sectors beyond the
137 gb point on the drive. There are several work-arounds for this (a
third-party replacement for that driver, the use of SATA raid mode,
etc.) but that issue is relevant only to Win-98 and how it handles
large FAT32 volumes, not how XP handles large FAT32 volumes.
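And for what it's worth, the 137 gb number itself has nothing to do
with FAT32 either - it's simply the 28-bit LBA sector-addressing limit
of the older ATA spec:

# Where "137 gb" comes from: 28-bit LBA at 512 bytes per sector.
print(2 ** 28 * 512)             # 137438953472 bytes, ~137.4 gb (decimal)

# FAT32 itself carries a 32-bit sector count, so the file system can
# address 2**32 * 512 bytes:
print(2 ** 32 * 512 / 10 ** 12)  # ~2.2 tb - the cap was the driver's,
                                 # not FAT32's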
And note that NTFS is proprietary and un-documented at the byte level.
Not exactly something that gives me a lot of confidence when it comes to
competent third-party recovery tools. In fact, because NTFS's file
system is "sprawled" out and distributed across the entire volume, it
can be more difficult to piece together when it fails.
Again, from CQuirke:
===========================
More to the point, accessibility is fragile with NTFS. Almost all OSs
depend on NTFS.SYS to access NTFS, whether these be XP (including Safe
Command Only), the bootable XP CD (including Recovery Console), Bart PE
CDR, MS WinPE, Linux that uses the "capture" approach to shelling
NTFS.SYS, or Sysinternals' "Pro" (writable) feeware NTFS drivers for
DOS mode and Win9x GUI.
FATxx concentrates all "raw" file system structure at the front of the
disk, making it possible to backup and drop in variations of this
structure while leaving file contents undisturbed. For example, if the
FATs are botched, you can drop in alternate FATs (i.e. using different
repair strategies) and copy off the data under each. It also means the
state of the file system can be snapshotted in quite a small footprint.
In contrast, NTFS sprawls its file system structure all over the place,
mixed in with the data space. This may remove the performance impact of
"back to base" head travel, but it means the whole volume has to be
raw-imaged off to preserve the file system state. This is one of several
compelling arguments in favor of small volumes, if planning for
survivability.
============================
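To make that "everything at the front" point concrete, here's a
minimal Python sketch that reads the standard FAT32 boot-sector (BPB)
fields to locate the FAT copies - the image filename is hypothetical;
the field offsets are from the published FAT32 layout:

import struct

# "fat32.img" stands in for a raw image of a FAT32 volume.
with open("fat32.img", "rb") as f:
    boot = f.read(512)

bytes_per_sector = struct.unpack_from("<H", boot, 11)[0]
reserved_sectors = struct.unpack_from("<H", boot, 14)[0]
num_fats         = boot[16]
fat_sectors      = struct.unpack_from("<I", boot, 36)[0]   # FATSz32

fat0 = reserved_sectors * bytes_per_sector
fat_bytes = fat_sectors * bytes_per_sector

# The boot sector plus every FAT copy sit in one contiguous region at
# the front of the volume - which is why the structure is so cheap to
# snapshot, back up, or swap out for an alternate repair attempt:
print(f"FAT #0 at byte {fat0}; {num_fats} copies of {fat_bytes} bytes each;")
print(f"data area begins at byte {fat0 + num_fats * fat_bytes}")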
What FAT32 can't do is provide user-level file permissions and
encryption. If you need that, then it means your drive exists in an
environment where you can't physically secure your PC from access by
others, and if you live or work in such an environment then I feel sorry
for you.
User permissions, rights, etc, have no place on a consumer desktop or
laptop PC. The concept is absurd, always has been, but Micro$haft had
no choice when they took their corporate / institutional / gov't
-certified OS (NT and its derivatives) and shoved it down consumers'
throats. And it did consumers absolutely zero good having those
"features" when their machines got hacked and became spam zombies during
2003 - 2006.
Unlike most IT-centric people who migrated away from Win-9x the moment
that win-2k came out, I continued to run win-9x on dozens of PCs during
the past 10 years, and I've seen the huge improvements in performance
and stability that came with better hardware, more system memory, better
motherboards and video cards, and better drivers for win-9x towards the
end of its commercial life (circa 2006).
Those who left win-98 back in 2000 or 2001 have only bad memories of an
OS trying to run on 16 or 32 mb of memory with buggy AGP video drivers
that left their system hanging and resulted in many scandisk sessions
and .chk files.
I have never lost data on a FAT32 drive due to logical file-table
errors that could not be fixed or repaired. For the past 5 years I
really haven't had to run scandisk on any win-98 system - period.
You've got to understand the original purpose of NTFS back when it was
designed in the early 1990's.
Hard drives were less reliable than they would be by the early 2000's.
They did not have automatic bad-sector remapping, or in-drive caching.
Journaling in NTFS was designed to overcome the pathetic fault-tolerance
and failure rates of the hard drives of that age.
NTFS was initially going to find its way onto servers, where I agree
that multi-threaded apps, multi-user file access, and general file-level
coherency was going to be important (and where files were more likely to
cross the 4gb threshold).
A lot of people have a misconception of journaling. Even when the
failure that shuts the system down doesn't occur during a write
operation, the outcome is the one quoted above: the file system is
returned to the same state it had before the interruption, and new
data that was being written is lost. NTFS sacrifices orphaned data for
the sake of maintaining a "clean" cluster allocation; FAT32 can't roll
back incomplete transactions, so data that was being written can be
recovered, at the cost of lost clusters or chains - a "messy" cluster
allocation, but rarely (if ever) a dysfunctional file system.
The primary feature of NTFS is, by design, that you can't gain
file-system access without booting the GUI and logging into the system
(or at least that's how Microsoft deployed it). I don't particularly
care for that, given that I work in a home and SOHO environment.
FAT32 also gets a bad rap because Microsoft chose to increase cluster
size along with volume size, which is completely unnecessary.
I've formatted up to 500 gb FAT32 volumes using 4kb cluster size and
installed both Win-98 and XP on such volumes (!). Win-98 functions
pretty well given that configuration - aside from the fact that defrag
and scandskw can't deal with so many clusters (but DOS chkdsk can).
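The arithmetic behind that configuration, for anyone who wants to
check it:

# A 500 gb FAT32 volume with 4 kb clusters.
volume_bytes  = 500 * 10 ** 9
cluster_bytes = 4 * 1024

clusters  = volume_bytes // cluster_bytes   # ~122 million clusters
fat_bytes = clusters * 4                    # 4 bytes per FAT32 entry

print(f"{clusters:,} clusters")                         # 122,070,312
print(f"each FAT copy ~{fat_bytes / 2 ** 20:.0f} MiB")  # ~466 MiB

# The default format at this size would use 32 kb clusters; forcing
# 4 kb multiplies the cluster count (and the FAT) by 8, which is what
# overwhelms Win-98's defrag and scandskw - the file system itself
# remains perfectly valid.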