Hi!
Just as most programmers today warn people not to use assembler, future programmers will probably warn people not to use high-level programming languages.
The book Java How to Program, Ninth Edition, says that instead of using the strings of numbers that computers could directly understand, programmers began using English-like abbreviations to represent elementary operations:
1.5 Machine Languages, Assembly Languages and High-Level Languages
Programmers write instructions in various programming languages, some directly understandable by computers and others requiring intermediate translation steps. Hundreds of such languages are in use today. These may be divided into three general types:
Machine languages
Assembly languages
High-level languages
Any computer can directly understand only its own machine language, defined by its hardware design. Machine languages generally consist of strings of numbers (ultimately reduced to 1s and 0s) that instruct computers to perform their most elementary operations one at a time. Machine languages are machine dependent (a particular machine language can be used on only one type of computer). Such languages are cumbersome for humans. For example, here’s a section of an early machine-language program that adds overtime pay to base pay and stores the result in gross pay:
+1300042774
+1400593419
+1200274027
Programming in machine language was simply too slow and tedious for most programmers. Instead of using the strings of numbers that computers could directly understand, programmers began using English-like abbreviations to represent elementary operations. These abbreviations formed the basis of assembly languages. Translator programs called assemblers were developed to convert early assembly-language programs to machine language at computer speeds. The following section of an assembly-language program also adds overtime pay to base pay and stores the result in gross pay:
load basepay
add overpay
store grosspay
Although such code is clearer to humans, it’s incomprehensible to computers until translated to machine language. Computer usage increased rapidly with the advent of assembly languages, but programmers still had to use many instructions to accomplish even the simplest tasks. To speed the programming process, high-level languages were developed in which single statements could be written to accomplish substantial tasks. Translator programs called compilers convert high-level language programs into machine language. High-level languages allow you to write instructions that look almost like everyday English and contain commonly used mathematical notations. A payroll program written in a high-level language might contain a single statement such as
grossPay = basePay + overTimePay
Will future programmers probably warn people not to use high-level programming languages just as most programmers today warn people not to use assembler?
If yes, what are the programming languages that will replace the high-level programming languages?
I am not pushing natural language programming in this topic.
Maybe natural language programming is not the future of programming.
I asked the following questions in this topic:
Will future programmers probably warn people not to use high-level programming languages, just as most programmers today warn people not to use assembler?
If yes, what programming languages will replace the high-level ones?
Will the high-level programming languages become obsolete?
- Member
- Posts: 33
- Joined: Tue Aug 13, 2019 10:52 pm
- Libera.chat IRC: QuantumRobin
- Location: Piaui, Brazil
Re: Will the high-level programming languages become obsolete?
QuantumRobin wrote: "Just as most programmers today warn people not to use assembler, probably future programmers will warn people not to use high-level programming languages."
Although this sounds hilarious, there could actually be some sense to it. Many high-level programmers today are unaware of how computers actually work, and therefore they write terrible code.
For example, I ran into a code analyzer that complained about reading a word from an unaligned address, although the target architecture is perfectly capable of doing so. A typical programmer would believe the analyzer and clutter their code with byte reads and shifting to construct a word, which would add unnecessary complexity to the code and would slow down execution, and only because the analyzer does not know the target architecture well enough. A little assembly knowledge could prevent this kind of mistake.
I once had to help Java programmers debug their webpage code, which received bad arguments from queries, and to my great surprise they had absolutely no clue what HTTP was, or how arguments were passed from a web form to the server and on into their application. Nothing. Although they were developing web apps, they knew nothing about web servers and HTTP.
Cheers,
bzt
Re: Will the high-level programming languages become obsolete?
No. (The answer to the title.)
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
- Member
- Posts: 5568
- Joined: Mon Mar 25, 2013 7:01 pm
Re: Will the high-level programming languages become obsolete?
QuantumRobin wrote: "Will the high-level programming languages become obsolete?"
Not any time soon. Probably never.
bzt wrote: "For example, I ran into a code analyzer that complained about reading a word from an unaligned address, although the target architecture is perfectly capable of doing so."
The code analyzer may be correct, depending on the language. For example, in C, merely creating an unaligned pointer is undefined behavior, even if you never dereference it. The compiler is free to assume that undefined behavior never happens, and to optimize on that assumption. Here is an excellent example by one of the lead LLVM developers.
bzt wrote: "A typical programmer would believe the analyzer and clutter their code with byte reads and shifting to construct a word, which would add unnecessary complexity to the code and would slow down execution."
You're correct that it adds extra complexity to the code, but it may be required by the language (as in C), and execution will be just as fast if you're using a good compiler (such as Clang).
Re: Will the high-level programming languages become obsolete?
Octocontrabass wrote: "The code analyzer may be correct, depending on the language."
That's the point. The generated code does not depend on the language (in this respect), only on the target architecture that runs it. The alignment requirements are the same regardless of whether the language is C, Ada, Pascal, BASIC, etc.
Octocontrabass wrote: "You're correct that it adds extra complexity to the code, but it may be required by the language (like C) and execution will be just as fast if you're using a good compiler (like Clang)."
I don't think what a target CPU is capable of can be part of any portable, multiplatform language in any meaningful way (except for assembly and similar languages, which are neither portable nor multiplatform by design). And I'm certain it has a performance impact, which can be serious if you have to access a million variables in a loop, for example. I ran into this issue when I tried to optimize some code for performance, meaning I wanted to utilize everything the target CPU had to offer. Think about it:
a) one single memory read instruction in the MMU
vs.
b) two memory read instructions, loading a constant immediate (the shift amount), engaging the ALU, using a scratch register (the ALU may not have access to the destination register), performing the shift, plus an additional OR, then moving the result into the destination register, etc.
The second is definitely slower; even an unaligned access in the first case is *much* faster than any combination of the latter. It may be required on architectures that do not support unaligned access, but if the target CPU supports it, I simply see no sane reason to use the second variant.
However, this made a good point against high-level languages: code written in them will always be less efficient than code written in a low-level language. But they're never going to become obsolete, I agree, because most humans are lazy.
Cheers,
bzt
- Member
- Posts: 232
- Joined: Mon Jul 25, 2016 6:54 pm
- Location: Adelaide, Australia
Re: Will the high-level programming languages become obsolete?
bzt wrote: "Code written in them will always be less efficient than code written in a low-level language. But they're never going to become obsolete, I agree, because most humans are lazy."
This is totally incorrect, and hasn't been remotely true for decades. Machine-generated code from well-written high-level source is faster than hand-rolled assembly for any non-trivial task.
As for lazy: is mechanized agriculture lazy? Was the industrial revolution powered by laziness? Of course not. The point of labour saving is to perform the same task with fewer resources, so the same people can do more in the same time. Calling it laziness is reductive in the extreme, and itself the result of some real intellectual laziness.
- Member
- Posts: 5568
- Joined: Mon Mar 25, 2013 7:01 pm
Re: Will the high-level programming languages become obsolete?
bzt wrote: "And I'm certain it has a performance impact, which can be serious if you have to access a million variables in a loop, for example."
Unless you're using Clang. See for yourself.
Re: Will the high-level programming languages become obsolete?
StudlyCaps wrote: "This is totally incorrect, and hasn't been remotely true for decades. Machine-generated code from well-written high-level source is faster than hand-rolled assembly for any non-trivial task."
That's a pretty common misconception. I tend to believe that generated code is good, but every time I put that assumption to the test, it turns out that my hand-written code is considerably better. Last time, I ran into an issue where the compiler was unable to optimize a dot-product calculation for vectors. SSE4 has a direct instruction for that, which I can use if I write the assembly by hand.
I think the confusion comes from the fact that "hand-rolled assembly" is not well defined. Hand-rolled by whom? If we are talking about an average programmer, then yes, compiler-generated code is better than their hand-written assembly. But if we are talking about experienced low-level programmers (OSDevers, for example), then the answer is most definitely no: those devs can write better code than a compiler.
StudlyCaps wrote: "As for lazy: is mechanized agriculture lazy? Was the industrial revolution powered by laziness? Of course not. The point of labour saving is to perform the same task with fewer resources, so the same people can do more in the same time."
What are you talking about?
(And about the industrial revolution, you are extremely naive: the point was never to save time for the people. Labour was saved because machines were cheaper. Machines do not demand expensive health care, nor do they strike for better working conditions, and they most definitely do not sleep, so the factory could run around the clock. It was all about profit maximization, nothing to do with being humane or good-hearted.)
Back on topic: high-level languages take the burden of low-level machinery off the developer's shoulders. In C, for example, you have to manage your heap variables with explicit malloc()/free(). In a "higher" language, C++ for example, new and delete take that over and automate allocation in many cases when you construct instances. The higher-level the language, the more of these details are abstracted away: with Java or Python, you don't need to care about delete at all. This also means it is easier to program in a high-level language, which suits lazy people who don't care about efficiency and just want to get the job done.
And humanity is lazy by default; we try to get the most out of the least effort. That's every living thing's nature, actually. Just look around you.
Cheers,
bzt
Re: Will the high-level programming languages become obsolete?
Laziness can be a virtue in a programmer. There is not enough time in the day to painstakingly hand-optimize every line of most projects. The fellows who can reliably outdo the compiler are valuable, but they should also know when to do things the hard way and when to do things the quick way.
Regarding the topic, it seems a silly question, as what "high-level programming language" means changes as new languages are invented. It used to mean C, and whatever magic language you're thinking of will still get called a high-level language.
As for the eternal compiler-versus-hand-written debate, x64 largely dodges it: most compilers I know of no longer even allow inline assembly, and just provide intrinsics. For most things this is perfectly fine and gets you the best of both worlds: you get to name things reliably, let the compiler figure out the register assignment, and still pick the operations directly. The compiler will still schedule and likely reorder things a bit, but that is usually for the best.
That said, that magic dot-product instruction is not always the best way to go. On certain x64 implementations its latency and throughput are rather irregular compared to the rest of the SSE instructions, and scheduling around that is a mess. I think the dependencies it introduces can also produce sub-optimal results from the out-of-order and rename units. I had a case recently where a somewhat more long-winded implementation using horizontal adds beat the direct dp instruction by about 10%. YMMV; always profile.
The other dirty secret is that these days all the fancy instruction selection in the world tends to be a tedious micro-optimization, and the single best metric for almost anything is how many bytes of memory get touched. Aside from that dark period when everyone and their mother decided that out-of-order execution was a silly fad and 70-stage in-order pipelines at 4 GHz were going to save everything, CPU improvements have focused on taking instruction scheduling as far out of the hands of the programmer and compiler as possible, for the better. Pentium U/V pipe scheduling, while interesting and satisfying when it worked well, was tedious as all hell, and the world is a better place with it consigned to history.