An imperial unit (let's remember we got these from the Brits, who now say they're metric... but are they?) is generally based on something from day-to-day life, so it stays relevant. These would probably have been named in the late '40s or early '50s, so I suspect they'd be based on the ways data was transmitted then.
4 taps (like on a telegraph) = 1 character
so 1 tap is 2 bits
1 sheet (like paper) = 13,000 characters
so 1 sheet = 52,000 taps = 104,000 bits
... etc
1 bankbox = 500 sheets = 26 million taps = 52 million bits
It's 1 to 4 taps per letter for the English alphabet, though only E is a single tap. I started with 3 taps = 1 character, but then all the whole numbers in my examples go away.
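A quick sketch of the conversions, using the names and ratios from the list above (the Python is just for illustration; the units are obviously made up):

    # Hypothetical units from the list above: 2 bits per tap, 4 taps per
    # character, 13,000 characters per sheet, 500 sheets per bankbox.
    BITS_PER_TAP = 2
    TAPS_PER_CHARACTER = 4
    CHARACTERS_PER_SHEET = 13_000
    SHEETS_PER_BANKBOX = 500

    def sheets_to_bits(sheets):
        """Convert sheets to bits by way of characters and taps."""
        taps = sheets * CHARACTERS_PER_SHEET * TAPS_PER_CHARACTER
        return taps * BITS_PER_TAP

    print(sheets_to_bits(1))                   # 104000 bits per sheet
    print(sheets_to_bits(SHEETS_PER_BANKBOX))  # 52000000 bits per bankbox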
Today we have 64-bit computers (e.g. amd64), which descended from 32-bit computers (i386), which descended from 16-bit computers (Intel 8086), which descended from 8-bit computers (Intel 8008). Bit widths in our world naturally follow powers of two.
However, some 1960s computers used word sizes that weren't powers of two. Both IBM and DEC, among others, made 18- and 36-bit systems. Suppose that computing had continued to follow a multiples-of-nine pattern instead of the powers-of-two pattern?
For one thing, hexadecimal is less common. If you're writing 9-, 18-, or 36-bit values, you typically write in octal, not hex. (In our world, Unix permissions modes are written in octal; Unix originated on the PDP-7, an 18-bit system.)
IPv4 addresses are 36 bits wide instead of 32, and you write them in octal instead of decimal. localhost is 700.0.0.1, and a typical LAN subnet mask is 777.777.777.0.
No hexadecimal means no 0xDEADBEEF or 0xCAFEBABE jokes. However, memory or files that get overwritten with junk are said to be "525'd", because binary 101010101... is octal 525252....
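To make the octal conventions concrete, here's a small sketch. The four-9-bit-octet layout and the 700.0.0.1 loopback are this comment's hypothetical, not real IPv4; the 525 observation you can check directly:

    # Format a hypothetical 36-bit address as four 9-bit octets in octal.
    def to_dotted_octal(addr):
        octets = [(addr >> shift) & 0o777 for shift in (27, 18, 9, 0)]
        return ".".join(format(o, "o") for o in octets)

    loopback = (0o700 << 27) | 1
    print(to_dotted_octal(loopback))              # 700.0.0.1
    print(to_dotted_octal(((1 << 27) - 1) << 9))  # 777.777.777.0

    # And the "525" pattern: alternating bits group into octal 5s and 2s.
    print(0b101010101010101010 == 0o525252)       # True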
char would be nine bits wide instead of eight. This affects the development of character sets.
In our world, ASCII grew out of earlier 6-bit codes, and lowercase only arrived in a later revision of 7-bit ASCII. IBM then extended it to 8 bits with code pages for different European languages, creating 8-bit PC extended ASCII. However, no single code page supports all European languages, to say nothing of non-European ones. This led to the invention of multibyte character encodings and ultimately Unicode.
In 9-bit world, multibyte characters are adopted earlier, using the high bit to indicate an extended character. Code pages don't get invented; mojibake never happens.
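Purely as an illustration of the high-bit idea, here's one way such a scheme could look. The layout (8 payload bits per byte, lead bytes flagged) is my own assumption for the sketch, not a real standard:

    # Sketch of a hypothetical 9-bit multibyte encoding: a byte with the
    # high (9th) bit set says "extended character, more bytes follow";
    # a byte with it clear is either a plain character or the final byte.
    def encode_9bit(codepoint):
        if codepoint < 0x100:            # fits in 8 bits: one plain byte
            return [codepoint]
        payload = []
        while codepoint:
            payload.append(codepoint & 0xFF)
            codepoint >>= 8
        payload.reverse()
        # set the continuation flag on every byte except the last
        return [b | 0x100 if i < len(payload) - 1 else b
                for i, b in enumerate(payload)]

    print(encode_9bit(ord("A")))   # [65]           -- one 9-bit byte
    print(encode_9bit(0x1F600))    # [257, 502, 0]  -- three 9-bit bytes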
With a 36-bit time_t, the Year 2038 problem doesn't happen; time_t doesn't wrap around until the year 3058!
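The arithmetic checks out: a signed 36-bit counter of seconds since 1970 runs out late in 3058.

    # 2**35 - 1 seconds after the Unix epoch lands in the year 3058.
    from datetime import datetime, timedelta, timezone

    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    print((epoch + timedelta(seconds=2**35 - 1)).year)   # 3058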
A 3½" high-density floppy disk stores one megabyte of 9-bit bytes.
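A back-of-envelope check, assuming the disk keeps the usual HD raw layout (2 sides x 80 tracks x 18 sectors x 512 eight-bit bytes) and you just repack the same bits into 9-bit bytes:

    # Same bits, repacked into 9-bit bytes: roughly 1.3 million of them,
    # i.e. on the order of a megabyte.
    raw_bits = 2 * 80 * 18 * 512 * 8      # 11,796,480 bits on the disk
    nine_bit_bytes = raw_bits // 9        # 1,310,720 nine-bit bytes
    print(nine_bit_bytes, nine_bit_bytes / 2**20)   # 1310720 1.25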
You know digital storage isn't metric, right? It's powers of two, not powers of ten. Since more of US Customary is based on powers of two than metric is, I'm confident in saying they're already in Freedom Units.
We currently are, but not really. A gigabyte is 1024 megabytes, and a megabyte is 1024 kilobytes, etc. However, macOS and hardware manufacturers use 1000 instead of 1024 to calculate storage space. So you could say Apple uses the metric version of storage and Windows uses the imperial version.
Technically, a gigabyte is 1000 megabytes, a megabyte is 1000 kilobytes, and a kilobyte is 1000 bytes. Those are all proper metric units, but sadly they don't make much sense for computers. So data-system manufacturers and computers generally calculate with the proper counterparts of the units you mention: the gibibyte, mebibyte, and kibibyte, which are actually 1024 of the next one down. Small but crucial difference.
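The difference is easy to see in numbers; take a drive sold as "500 GB" (a made-up but typical figure):

    # 500 * 10**9 bytes is exactly 500 "metric" gigabytes, but only about
    # 465 gibibytes once you divide by 1024**3 instead of 1000**3.
    advertised = 500 * 10**9
    print(advertised / 1000**3)   # 500.0
    print(advertised / 1024**3)   # ~465.66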
Storage could be measured in terms of what's needed for various files. They would have to come in a range of sizes that don't increase linearly, much like inch, foot, and yard.
Launch Codes, Pledge of Allegiances, Constitutions, God Bless the U.S.As, Average individual's Patriot Act file (Pafs for short), etc.