Solar wrote: This is all highly OT,
Optometry Today? Occupational therapy? (Google didn't help.)
It is much easier to transfer hex to binary (since each digit corresponds 1:1 with a half-byte) than doing the same with decimal.
It is significantly easier, I agree, but the need to do this isn't common enough for it to be a real issue. If it was, I'd have written a converter into my OS long ago.
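For anyone who wants to see why it's so much easier, here's a minimal sketch in C (just an illustration with a couple of example bytes, not something out of my OS) that prints a byte in hex, in binary split into its two half-bytes, and in decimal:

    #include <stdio.h>

    /* Print a byte as two hex digits, as two 4-bit groups, and in decimal,
       showing that each hex digit maps directly onto one half-byte. */
    static void print_nibbles(unsigned char byte)
    {
        printf("hex %02X = binary ", (unsigned)byte);
        for (int bit = 7; bit >= 0; bit--) {
            putchar(((byte >> bit) & 1) ? '1' : '0');
            if (bit == 4)
                putchar(' ');              /* gap between the two nibbles */
        }
        printf("   (decimal %u)\n", (unsigned)byte);
    }

    int main(void)
    {
        print_nibbles(0xB8);               /* B8 -> 1011 1000, decimal 184 */
        print_nibbles(0x3F);               /* 3F -> 0011 1111, decimal 63  */
        return 0;
    }

Each hex digit maps straight onto one group of four bits, whereas getting from 184 to 1011 1000 means repeated division by two, or simply knowing the byte already.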
David has a point regarding the visuals of one-, two-, and three-digit decimals, but CPU opcodes simply aren't grouped in a way that makes those visual groupings generally meaningful.
They aren't grouped deliberately to make them visually distinct, but accidental patterns show up all over the place and make it easy to know where you are in a piece of code. The same happens with Z80 code (which I originally used), though the patterns are very different, and it will doubtless be the same with most instruction sets. I think Itanium would need a very different approach - I was looking forward to working with that, but it doesn't look as if it's going to survive.
Opcodes are grouped by bit patterns, which are easier to read in hex.
The first part of that's true, but you quickly learn to recognise all the multiples of 8 between 0 and 256, so it makes no difference.
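To show what I mean about the multiples of 8, here's a small C sketch (my own illustration using the standard x86 PUSH r32 encoding, not anything from my OS): the register number sits in the low three bits of the opcode, so the base opcode is a multiple of 8 whichever base you write it in.

    #include <stdio.h>

    int main(void)
    {
        /* x86 PUSH r32: the register number occupies the low three bits,
           so the family starts at a base that is a multiple of 8 -
           0x50 in hex, 80 in decimal. */
        const char *reg[8] = { "eax", "ecx", "edx", "ebx",
                               "esp", "ebp", "esi", "edi" };
        unsigned base = 0x50;              /* decimal 80 */

        for (unsigned r = 0; r < 8; r++)
            printf("push %-3s   opcode 0x%02X = decimal %u\n",
                   reg[r], base + r, base + r);
        return 0;
    }

In hex the family is obviously 0x50 to 0x57; in decimal it's 80 to 87, and once the multiples of 8 up to 256 are as familiar as the times table, the decimal form carries the same information.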
Virtually all relevant documentation is using hex.
Which means I have to translate port addresses and the like before I can use them, but again it's nowhere near common enough to compensate for the disadvantages.
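As a concrete example of the kind of translation I mean, here's a short C sketch (the ports listed are just a few well-known PC addresses, picked for illustration) that prints documentation-style hex port addresses next to their decimal forms:

    #include <stdio.h>

    int main(void)
    {
        /* Well-known PC I/O ports as documentation gives them (hex) and
           as they would have to be written in decimal machine code. */
        struct { const char *name; unsigned port; } ports[] = {
            { "COM1 data register",  0x3F8 },  /* 3*256 + 15*16 + 8 = 1016 */
            { "keyboard data",       0x60  },  /* 6*16              =   96 */
            { "primary ATA data",    0x1F0 },  /* 256 + 15*16       =  496 */
        };

        for (unsigned i = 0; i < sizeof ports / sizeof ports[0]; i++)
            printf("%-20s  hex 0x%03X = decimal %u\n",
                   ports[i].name, ports[i].port, ports[i].port);
        return 0;
    }

It's a one-off translation per address, which is why it doesn't bother me much.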
Most if not all tools use hex as both input and output.
I don't use other people's tools, but clearly anyone who wants to do so is going to be better off using hex. I realise now that I should put the time into creating a hex mode for my OS so that people who want to become part of the establishment can use my OS to get a proper feel for what goes on underneath assembler without being forced into using decimals. Thank you for helping me see that this is an important issue.
I'm not trying to convert anyone who programs in more conventional ways to using decimals, and I don't recommend that beginners use decimals and shun hex in the way that I do - I've simply avoided hex, assembler and programming languages because I find that they get in the way of programming by making it far more complex and unintuitive than it should be. Decimal is our native way of thinking about numbers, and speaking to the processor directly through numbers is for me the easiest way to work.

I have always found it difficult to adapt to other people's ways of doing things if they involve unnecessary complexity or have arbitrary aspects to them which don't work the way I would have designed them to - it leads to all manner of bugs which are hard to track down because I simply can't learn and hold all that junk in my head. By working with raw machine code numbers, the complexity is minimised and I always know exactly what I'm getting - if a piece of my code doesn't work, I know for certain that it's my fault, and I never have to trawl through complex documentation to find out if I'm using the tools properly.
No more alien than "speaking" a programming language. It all boils down to being used to one or the other. And, as I said, I bow to your choice of using decimals because that's what you grew used to, but I will always and strongly recommend a newcomer getting used to hex.
Just as I recommend that my child learn English as a first foreign language, not Latin.
Decimal is their native language, so it isn't equivalent to learning a second language. What it really comes down to is this - if you have no desire to do anything other than program in machine code, you'll probably be better off using decimals. If you want to use assembler and intend to work with other people's code and for them to work with yours, then you'd be better off using hex. If you want to use a compiler and are likely to need to use a lot of inline assembly, then again you'd be better off with hex. So, most people will need hex. I need to write that hex mode into my OS.