Are command line interfaces for an OS obsolete?

LtG
Member
Posts: 384
Joined: Thu Aug 13, 2015 4:57 pm

Re: Are command line interfaces for an OS obsolete?

Post by LtG »

XenOS wrote: I'm not talking about pixels, but graphics libraries used by the GUI for rendering menus, widgets etc. such as Qt or GTK. Some also used Motif. And stuff like Cairo for font rendering. Compare this to a lightweight plain X terminal with fixed-width font. Also I use tmux for terminal multiplexing, so I can have several CLI applications running in one terminal.
Fair enough, but I think part of the problem here is Linux. Because of "freedom", there's a thousand and one libraries and everybody rolls their own --> no sharing. I think in a good GUI OS all apps would use the same font rendering (needed by the CLI anyway) and share most of the other things the CLI needs anyway. So the overhead is all of a sudden almost non-existent.
XenOS wrote: I have many use cases for grep: searching stuff in some project's source code (as in C, C++);
That's what IDEs are for. If the IDE sucks, then the solution is to fix the IDE, not to use grep. Grep has none of the contextual understanding that the IDE (or whatever app) has.
XenOS wrote: piping curl / wget output into grep for finding some specific information in a web resource;
I can use ctrl-F in both FF and Chrome.
XenOS wrote: Using one tool - grep - with a common syntax for all these different jobs is really convenient.
grep != regexp. Nothing prevents you from using regexp in your C++ code, or Java, or anything. The syntax in regexp remains the same, from app to app, but in practice, you'd rarely need it, if the data was structured to begin with.
XenOS wrote: It would be horrible if I had to use a GUI and some binary format with a specific API for this.
Specific API, like SQL? Or something even more context aware and simpler? You're saying using something less context aware (magical text) and grep is better?
XenOS wrote: Simple example: I found someone put a list of songs on Youtube on a web page and I want to listen to them in random order. Sure, I could copy the links in the browser into some playlist on Youtube. Or I can just run curl on that page, pipe the output into grep to filter out the video links, pipe the output into xargs and run VLC player with that, telling it via command line arguments that I want infinite looping, random order and no video to keep it from switching the screensaver off. I can even save the command line into a shell script and next time I find such a nice list, just run that script on the URL.
I agree, browsers fail us. The browser (or HTML service, or whatever) should give us easy access to the data. These are some of the things I'm trying to improve upon.

I do think the solution you present is flawed, not just in practice, but also in that it tries to treat the symptoms.

In practice, how do you grep the links? I haven't checked Youtube HTML, but unless it has some super convenient tags, you're going to get all URLs on the page, half of which have nothing to do with your music (like the link to the ToS).

In principle, I think the screensaver is orthogonal to whether a video is playing or not (this is Linux problem #1), and we should have access to the semantic data, in this case the actual playlist. With access to the playlist, we'd also get access to the links in the playlist --> give that to the _music_ (not video) player, and your problems are solved.

#1, Linux is not a real OS, it's a kernel and a thousand distros, thus nothing is ever unified and all the benefits of that are lost. There's no central component that would know if the screen should switch off or not.

PS. I'm obviously not claiming to have solved all of the above. Rather I think grep is like C, made decades ago and lacking today. I use Linux, CLI, grep, etc, daily, and all the time I think there should be a better way. With programming IntelliJ does offer a lot of improvements, but still not enough, in part because of bad languages (Java, C/C++, etc). With IntelliJ+Java I'd rather follow the contextual links in the IDE any day over grep, though occasionally I do use grep if it's something that crosses project boundaries. But again, that's a problem with the tool (IDE), and not a praise of grep. The only reason to use grep is because everything else failed you.
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm

Re: Are command line interfaces for an OS obsolete?

Post by Korona »

In my experience, typing regexp into IDEs is significantly slower than grep, especially when searching thousands of files. And some regexp engines just suck (you mentioned Excel...). Plus you need to click more unless you know a shitton of otherwise useless (outside of the IDE) hotkeys (case sensitive? global? whole project? whole directory? headers only? source files only?).
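For comparison, those same choices are just flags you type once. A rough sketch with GNU grep; 'foo' and the paths are placeholders:

Code: Select all

# the IDE dialog's checkboxes map directly onto grep flags:
grep -rn  'foo' src/                  # whole directory, case sensitive
grep -rni 'foo' .                     # whole project, case insensitive
grep -rn  --include='*.h'   'foo' .   # headers only
grep -rn  --include='*.cpp' 'foo' .   # source files only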
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: Are command line interfaces for an OS obsolete?

Post by Schol-R-LEA »

Has anyone brought up Dr. Wirth's attempt to square this circle in Oberon OS through 'live text', yet? Basically, rather than a command line in the usual sense, what Oberon does is allow any text in a specific syntactic form (the 'ModuleCommand' syntax) from any window to be run as Oberon (the programming language) code simply by clicking it with the middle button - assuming it is valid code, that is. It also allowed menus to be built simply by pinning the text to a menu bar - IIUC, menus were simply small text boxes composed of pinned code sections. This was what Wirth called a 'textual user interface' - graphical in user operation, but entirely built through program code.

Which basically means any open text window is equivalent to a shell console, but with the full editing abilities always at hand. It means that there are no shell commands as such, only interactive compile-and-go operations, but this isn't much of a problem; having to write the code in Oberon? Maybe a bit more of a showstopper for most people, as the later Wirth languages never really had much of a following.

(Screenshot from the Wikipedia article on Oberon)

(Screenshot from the progtools page on Oberon and its successors)

While the original UI only supported tiled windows, later versions allow free, layered windows:
(Screenshot from the progtools page, showing layered windows)

This should seem somewhat familiar to anyone who has ever run Elisp code directly from an Emacs buffer, as well. I don't know much about the sort of code auto-selecting tools the Oberon system provided, but the Emacs elisp mode has the advantage that - since Elisp code is always in the form of an s-expression - it can easily snarf the entire expression, no matter how long, by selecting the s-expr from its last paren backwards, as a single operation. One can even select a section of code in the middle of a larger function and run just that part, assuming that the snippet makes sense to run independently. I'm pretty sure that vim has extensions for running Lisp family languages which work similarly, and most Lisp-specific editors (e.g., DrRacket) have some ability to do the same.

I am pretty sure Smalltalk environments such as Squeak, and some graphical Forths, have equivalent tools, too.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
xenos
Member
Posts: 1121
Joined: Thu Aug 11, 2005 11:00 pm
Libera.chat IRC: xenos1984
Location: Tartu, Estonia

Re: Are command line interfaces for an OS obsolete?

Post by xenos »

LtG wrote:Fair enough, but I think part of the problem here is Linux. Because of "freedom", there's a thousand and one libraries and everybody rolls their own --> no sharing. I think in a good GUI OS all apps would use the same font rendering (needed by the CLI anyway) and share most of the other things the CLI needs anyway. So the overhead is all of a sudden almost non-existent.
You're not getting the point. With just a terminal, or even in plain text mode, I don't need any graphics library for fancy widget stuff. Not even a single shared one.
LtG wrote:That's what IDEs are for. If the IDE sucks, then the solution is to fix the IDE, not to use grep. Grep has none of the contextual understanding that the IDE (or whatever app) has.
I don't need any contextual understanding if I just want to know in which file and line some string appears. That is not what an IDE is for, that comes bloated with whatever tools for formatting, analyzing, compiling, editing...
LtG wrote:I can use ctrl-F in both FF and Chrome.
Does that allow you to pipe all occurrences of the search string into some other program for further processing, without doing that by hand?
LtG wrote:grep != regexp. Nothing prevents you from using regexp in your C++ code, or Java, or anything. The syntax in regexp remains the same, from app to app, but in practice, you'd rarely need it, if the data was structured to begin with.
The data is structured. It is C++ code or HTML or something else. How would you re-structure, say, an HTML document or other web format in such a way that one day it gives me all Youtube links, another day all occurrences of "LtG", without using a search string?
LtG wrote:Specific API, like SQL? Or something even more context aware and simpler? You're saying using something less context aware (magical text) and grep is better?
I don't need context awareness if all I want is to find a string in a single or multiple text files or a data stream. I don't need special, context aware tools. I just need one simple and lightweight tool that is optimized to do its job well and fast.
LtG wrote:In practice, how do you grep the links? I haven't checked Youtube HTML, but unless it has some super convenient tags, you're going to get all URLs on the page, half of which have nothing to do with your music (like the link to the ToS).

Code: Select all

www\.youtube\.com/watch\?v=[[:alnum:]_-]*
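Put together, the whole pipeline from my earlier example looks roughly like this. Just a sketch: it assumes the video links appear literally in the fetched HTML, GNU grep, and that VLC can play Youtube URLs on your system; $URL is a placeholder.

Code: Select all

curl -s "$URL" |                                          # fetch the page
  grep -oE 'www\.youtube\.com/watch\?v=[[:alnum:]_-]+' |  # keep only the video links
  sort -u |                                               # drop duplicates
  sed 's|^|https://|' |                                   # turn them into full URLs
  xargs vlc --loop --random --no-video                    # loop, random order, no video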
LtG wrote:In principle, I think the screensaver is orthogonal to whether a video is playing or not (this is Linux problem #1), and we should have access to the semantic data, in this case the actual playlist. With access to the playlist, we'd also get access to the links in the playlist --> give that to the _music_ (not video) player, and your problems are solved.
In my example there was not even a playlist. Just some links someone posted on a website - maybe a forum, maybe his website, maybe a dating profile.
LtG wrote:#1, Linux is not a real OS, it's a kernel and a thousand distros, thus nothing is ever unified and all the benefits of that are lost. There's no central component that would know if the screen should switch off or not.
I wasn't even talking about a specific OS, so why would you blame Linux for that?
LtG wrote:The only reason to use grep is because everything else failed you.
Rather because it is the best tool for the job.
Programmers' Hardware Database // GitHub user: xenos1984; OS project: NOS
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm

Re: Are command line interfaces for an OS obsolete?

Post by Korona »

IMHO, the most important aspect of the CLI is that it works remotely, over slow connections (e.g., serial I/O or GPIO) and it's portable from the tiniest devices (IoT lightbulbs, routers) to the biggest machines (all of HPC is CLI driven). No single GUI works for all those cases. In fact, not even "fancy" CLIs (that support partial graphical output, such as those mentioned above) work for that use case.
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
LtG
Member
Posts: 384
Joined: Thu Aug 13, 2015 4:57 pm

Re: Are command line interfaces for an OS obsolete?

Post by LtG »

XenOS wrote: You're not getting the point. With just a terminal, or even in plain text mode, I don't need any graphics library for fancy widget stuff. Not even a single shared one.
So how does the text get _rendered_ to the screen? You're still using _some_ "graphics library", even in text mode; it just happens to be built into the firmware. It's still going to take just as much memory (the frame buffer) and there's no benefit.

I'm not really sure what you mean by "graphics library", or what your problem is with them.
XenOS wrote: I don't need any contextual understanding if I just want to know in which file and line some string appears. That is not what an IDE is for, that comes bloated with whatever tools for formatting, analyzing, compiling, editing...
Why would you want to know in which file some string appears? There's an underlying reason you want to know that, and that reason provides context; you're not telling us what it is, so I can't suggest a better approach.

Suppose the "string" is actually a function name and you want to know all the places it's used in, so you can rename it. So in reality we're not looking for some "string", we're looking for a function based on name, grep can't really help you there, though you can easily create a regexp that kind of matches the function uses, but might bring false-positives. Something context/semantics aware would give you the exact match.

And the "bloated" IDE goes one step further, it allows you to rename the function directly, so you don't need to do any of the above. Can you reliably rename a function with grep/sed?
XenOS wrote:
LtG wrote:I can use ctrl-F in both FF and Chrome.
Does that allow you to pipe all occurrences of the search string into some other program for further processing, without doing that by hand?
If it doesn't, then that's a problem with FF/Chrome.
XenOS wrote: The data is structured. It is C++ code or HTML or something else. How would you re-structure, say, an HTML document or other web format in such a way that one day it gives me all Youtube links, another day all occurrences of "LtG", without using a search string?
In what way is a C++ source file structured? It's not, it's a plain text file.

If it were structured you could just rename a function/variable, but you can't. IDEs create a structure for it in the background, allowing you to rename things.

How does your search string give you all youtube links, without knowing what a youtube link is? It doesn't. I'm guessing you're going to _hope_ that all such links start with youtube.com, but they could be URL short links as well, and you'd miss all of those.

I'm not saying you can't search for "LtG", I'm saying that more often than not, that's not what you want. Usually you would want to find all posts made by "LtG", or all mentions of "LtG" in posts, or something else, but not all occurrences of the plain text "LtG" - though that too can happen, just more rarely.
XenOS wrote: I don't need context awareness if all I want is to find a string in a single or multiple text files or a data stream. I don't need special, context aware tools. I just need one simple and lightweight tool that is optimized to do its job well and fast.
It's not optimized, it's the opposite of optimized. It's going to do a brute force search of all the data, instead of looking up a variable name in a structured source file. This is similar to the zero terminated strings, to get the length you have to iterate thru the whole thing.

I mean, something as simple as a text file still causes problems. Try to open a few MiB text file (let alone a few GiB one) with notepad or gedit. There are better editors that can actually handle the problem to some extent, but no editor can handle it properly. You just can't random access text files, like going to line 2M, without parsing the file up to that point. And if the editor wants to be memory efficient, then when you next jump to line 1M it needs to parse the file to that point again.
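You can see the cost directly with a quick sketch like this (assuming GNU sed and some large file big.txt):

Code: Select all

# Even a tool that only wants line 2,000,000 has to read and discard everything
# before it; the q just stops it from scanning the rest of the file afterwards.
time sed -n '2000000{p;q}' big.txt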
XenOS wrote:
LtG wrote:#1, Linux is not a real OS, it's a kernel and a thousand distros, thus nothing is ever unified and all the benefits of that are lost. There's no central component that would know if the screen should switch off or not.
I wasn't even talking about a specific OS, so why would you blame Linux for that?
I'm not blaming Linux, just used it as an example (like I also use Windows), because it's well known.
LtG
Member
Posts: 384
Joined: Thu Aug 13, 2015 4:57 pm

Re: Are command line interfaces for an OS obsolete?

Post by LtG »

Korona wrote:IMHO, the most important aspect of the CLI is that it works remotely, over slow connections (e.g., serial I/O or GPIO) and it's portable from the tiniest devices (IoT lightbulbs, routers) to the biggest machines (all of HPC is CLI driven). No single GUI works for all those cases. In fact, not even "fancy" CLIs (that support partial graphical output, such as these mentioned above) work for that use case.
Why should the lightbulb have to parse strings in a CLI?

Instead, how about four "ports":
- turn on
- turn off
- is the lightbulb on?
- hours count (or second, ms, us)

With a CLI somebody needs to create a program that outputs correctly formatted magical text strings and somebody else has to write a parser in the CLI that parses those magical text strings. Neither is necessary or useful.
xenos
Member
Posts: 1121
Joined: Thu Aug 11, 2005 11:00 pm
Libera.chat IRC: xenos1984
Location: Tartu, Estonia

Re: Are command line interfaces for an OS obsolete?

Post by xenos »

LtG wrote:So how does the text get _rendered_ to the screen? You're still using _some_ "graphics library", even in text mode; it just happens to be built into the firmware. It's still going to take just as much memory (the frame buffer) and there's no benefit.
Once again, I am not talking about the frame buffer here, but about the actual library code. Rendering bitmap fonts requires a lot fewer resources than, e.g., rendering some GUI with widgets and anti-aliased vector fonts. If it is built into the firmware, it does not even require any additional memory, whereas a graphics library obviously needs to be loaded somewhere in RAM in addition to the firmware that is already in place anyway.
LtG wrote:I'm not really sure what you mean by "graphics library", or what your problem is with them.
I gave you some examples (GTK, Qt) and the problems (use of lots of resources) before.
LtG wrote:Why would you want to know in which file some string appears? There's an underlying reason you want to know that, and that reason provides context, you're not telling us what it is, so I can't suggest a better approach.
It can be any reason, and the point is that a tool such as grep does not even need to know the reason. It just does the search for me, and I don't have to tell it why. I don't want a dozen different approaches, one for each possible reason I will ever come across.
LtG wrote:Suppose the "string" is actually a function name and you want to know all the places it's used in, so you can rename it. So in reality we're not looking for some "string", we're looking for a function based on name; grep can't really help you there, though you can easily create a regexp that kind of matches the function's uses, but it might bring false positives. Something context/semantics aware would give you the exact match.

And the "bloated" IDE goes one step further, it allows you to rename the function directly, so you don't need to do any of the above. Can you reliably rename a function with grep/sed?
What if the string I am searching for is not a function (or variable or class) name, but something that has nothing to do with the program syntax at all? Say I notice a typo in the output of my program, and I want to know in which source file that particular string with the typo is located. If I have an IDE that can only search for identifiers, it will not help me at all. All its great features, like context aware editing, are completely useless to me in this situation. Yet they take resources - obviously an IDE has more program code, needs to handle more cases, and uses more memory than just running grep.
LtG wrote:If it doesn't, then that's a problem with FF/Chrome.
...which is exactly the reason why I would not use FF/Chrome in this case, but - again - grep. It is one tool, it handles many of my use cases and I don't need specialized / different tools depending on whether I want to filter C++ or HTML.
LtG wrote:In what way is a C++ source file structured? It's not, it's a plain text file.
It follows C++ syntax, so it is structured into functions, classes etc.
LtG wrote:How does your search string give you all youtube links, without knowing what a youtube link is? It doesn't. I'm guessing you're going to _hope_ that all such links start with youtube.com, but they could be URL short links as well, and you'd miss all of those.
If I want to know whether my string matches the links on a website, I just run it through grep. If I want to know whether there are short links, I just look at the source. There is no need to rely on "hope" if I can just test and see.
LtG wrote:It's not optimized, it's the opposite of optimized. It's going to do a brute force search of all the data, instead of looking up a variable name in a structured source file. This is similar to the zero terminated strings, to get the length you have to iterate thru the whole thing.
It is optimized to do exactly that - do a brute force search as fast and efficient as possible. Which is exactly what I want in the mentioned use cases.
LtG wrote:I mean, something as simple as a text file still causes problems. Try to open a few MiB text file (let alone a few GiB one) with notepad or gedit. There are better editors that can actually handle the problem to some extent, but no editor can handle it properly. You just can't random access text files, like going to line 2M, without parsing the file up to that point. And if the editor wants to be memory efficient, then when you next jump to line 1M it needs to parse the file to that point again.
That's a problem with the editor. I never had problems with large files and VIM. And if the file is too large to open as a whole - which usually happens only with auto-generated files such as log files, nothing I would write or edit by hand - I ask myself why I should even open it in an editor, or read / display the whole file. If I just want to extract some specific information, I use - once again - grep.
Programmers' Hardware Database // GitHub user: xenos1984; OS project: NOS
LtG
Member
Posts: 384
Joined: Thu Aug 13, 2015 4:57 pm

Re: Are command line interfaces for an OS obsolete?

Post by LtG »

XenOS wrote: Once again, I am not talking about the frame buffer here, but about the actual library code. Rendering bitmap fonts requires a lot fewer resources than, e.g., rendering some GUI with widgets and anti-aliased vector fonts. If it is built into the firmware, it does not even require any additional memory, whereas a graphics library obviously needs to be loaded somewhere in RAM in addition to the firmware that is already in place anyway.
Win95 works with 4MiB of RAM, so how much is a lot?

If Qt is bloated, then that's a problem with Qt.
XenOS wrote: It can be any reason, and the point is that a tool such as grep does not even need to know the reason. It just does the search for me, and I don't have to tell it why.
Of course it can be any reason, but providing the context allows the tool to do a better job.
XenOS wrote: What if the string I am searching for is not a function (or variable or class) name, but something that has nothing to do with the program syntax at all? Say I notice a typo in the output of my program, and I want to know in which source file that particular string is located which has the typo. If I have an IDE that can search just for identifiers, it will not help me at all. All its great features, like context aware editing, are completely useless to me in this situation. Yet they take resources - obviously an IDE has more program code, needs to handle more cases, uses more memory than just running grep.
For the typo you'd be looking for a string with said typo, still less to search than searching everything.

And you shouldn't compare IDE to grep, rather the tiny part of the IDE that performs searches, that shouldn't be significantly bigger or smaller than grep.

Also, with grep you usually do provide some context, for instance which file(s) to search. I'm suggesting providing more context. For instance if the search is done at the OS level, something like:
- the typoed word
- that it's a C++ string
- project name (in the IDE this would be implicit)

The last two might be in a different order, or it might be that the result already popped up after writing the typoed word, if the search was fast enough.

I want to start with the full data set and then give just enough info to narrow it down until I get the results I want. I can't effectively do that with grep, though I do do it. What I want is something better than grep.
XenOS wrote: If I want to know whether my string matches the links on a website, I just run it through grep. If I want to know whether there are short links, I just look at the source. There is no need to rely on "hope" if I can just test and see.
What if I want all the youtube video links, that is, links that result in a youtube video, not just the links that happen to have the word youtube on them?

And I can't just "test and see" if it's a tool I run automatically.
XenOS wrote: It is optimized to do exactly that - do a brute force search as fast and efficient as possible. Which is exactly what I want in the mentioned use cases.
But why do you want that? I'm guessing you have an actual reason for each of those grep searches you do, and in each case if you could provide better parameters to grep then it could do the job better, such as faster, less memory, more narrow results.
XenOS wrote: That's a problem with the editor. I never had problems with large files and VIM.
Vim is one of the better ones, but it's not that good. I just tested opening a 300MiB text file on OpenBSD and it took around 10 seconds, not instant. Available memory dropped by about 150MiB.

I did a second test with a 2.3GiB file, it took ~90s.

I'm guessing the first thing vim does is to go thru the entire file to figure out how many lines there are, because the file doesn't know it. Once started I was able to jump to any line instantly, so I'm guessing vim keeps a list of where the lines start.

Most programming languages have strings that know their length, so the length was known when the file was created, but that key piece of info got thrown out, only to be recreated when the file is used, over and over again.
XenOS wrote: And if the file is too large to open the whole file, which usually happens only with auto-generated files such as log files, nothing I would write or edit by hand, I ask myself why should I even open it in an editor? Or read / display the whole file? If I just want to extract some specific information, I use - once again - grep.
So with a log file, how would you find all instances of exceptions thrown from a Java program where the stack trace has a specific method call and that happened during a specific time period, say 1.1.2000 between 15:00 and 17:15?

And why is it difficult? Because all the semantic meaning of the data (dates and exceptions) has been thrown out and has been reduced to magical text, where the file doesn't even know how many lines there are, and the lines don't even know their own length.

If the semantic meaning was still there, then it would be easy to search it.

Have you ever had to correlate multiple log files from multiple systems? Not fun; depending on the complexity I might end up piecing them together in some kind of notepad. That's actually something that humans are bad at but computers are good at, yet in practice I end up doing it manually.
xenos
Member
Posts: 1121
Joined: Thu Aug 11, 2005 11:00 pm
Libera.chat IRC: xenos1984
Location: Tartu, Estonia

Re: Are command line interfaces for an OS obsolete?

Post by xenos »

LtG wrote:Win95 works with 4MiB of RAM, so how much is a lot?
Depends on what you compare with and how much is available. VGA BIOS typically fits in 64kiB ROM.
LtG wrote:If Qt is bloated, then that's a problem with Qt.
And that's a reason for not using it, which is exactly what I said.
LtG wrote:Of course it can be any reason, but providing the context allows the tool to do a better job.
But that would require that the tool knows about all the possible contexts, which makes it more complex.
LtG wrote:For the typo you'd be looking for a string with said typo, still less to search than searching everything.
But for that it first has to figure out where strings start and where they end. How would you figure that out without searching for the beginnings and ends of strings? Needless to say, you can of course cook up a regex that filters out strings first and then searches for the typo only in those. So that is "providing context to grep" without making the tool more complex.
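For example, a rough sketch with GNU grep, where 'recieved' stands in for the typo and string literals are assumed not to span lines:

Code: Select all

# only report lines where the typo occurs inside a double-quoted string literal
# (an approximation - it ignores escaped quotes and raw string literals)
grep -rnE '"[^"]*recieved[^"]*"' --include='*.cpp' --include='*.h' .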
LtG wrote:And you shouldn't compare IDE to grep, rather the tiny part of the IDE that performs searches, that shouldn't be significantly bigger or smaller than grep.
And how would I use that "tiny part of the IDE" only, without having to run the whole, big, bloated IDE if I just want to use a tiny part of it?
LtG wrote:Also, with grep you usually do provide some context, for instance which file(s) to search. I'm suggesting providing more context. For instance if the search is done at the OS level, something like:
- the typoed word
- that it's a C++ string
- project name (in the IDE this would be implicit)
It's not hard to do that with regex. Apart from that, I keep separate projects in separate folders, so there is no reason to specify "project name" as context, because I just run grep in the project folder (and subfolders).
LtG wrote:What if I want all the youtube video links, that is, links that result in a youtube video, not just the links that happen to have the word youtube on them?
You grep for all links, pipe all of them into curl (or something similar), tell it to follow redirects, let it output the final URL and grep that for Youtube videos. No GUI needed for this.
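Roughly like this - a sketch only, where page.html is the saved page and GNU grep and curl are assumed:

Code: Select all

# pull out every link, let curl follow redirects and print where each one
# ends up, then keep only the ones that land on a Youtube video
grep -oE 'https?://[^"<> ]+' page.html |
  while read -r url; do
    curl -sIL -o /dev/null -w '%{url_effective}\n' "$url"
  done |
  grep 'youtube\.com/watch?v='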
LtG wrote:And I can't just "test and see" if it's a tool I run automatically.
If it's a tool run automatically, then why on earth should it have a GUI? A UI is for user interaction.
LtG wrote:But why do you want that? I'm guessing you have an actual reason for each of those grep searches you do, and in each case if you could provide better parameters to grep then it could do the job better, such as faster, less memory, more narrow results.
I do provide context - it's in the regex, or in a combination of searches - and that narrows down the search as much as I want. That's not an argument for using a GUI. Whatever information I could provide to a GUI, I can also provide to a command line tool, and grep has enough context for my purposes.
LtG wrote:Vim is one of the better ones, but it's not that good. I just tested opening a 300MiB text file on OpenBSD and it took around 10 seconds, not instant. Available memory dropped by about 150MiB.

I did a second test with a 2.3GiB file, it took ~90s.
If your task requires loading files of that size, something's wrong with either the task or the method you use to solve that.
LtG wrote:So with a log file, how would you find all instances of exceptions thrown from a Java program where the stack trace has a specific method call and that happened during a specific time period, say 1.1.2000 between 15:00 and 17:15?
Simple - grep for the method name, grep for the date, filter with awk to get the time window. Since Java stack traces are multi-line, I'd probably cat the file through sed before and after, so that newlines correspond to beginning and end of a stack trace. Or if that whole thing is too complicated, just write a small Perl script to do that.
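As a sketch - it assumes every log entry starts with a "YYYY-MM-DD HH:MM:SS" timestamp, and com.example.Foo.bar / app.log are of course placeholders:

Code: Select all

# 1. sed: append stack-trace lines (anything not starting with a digit) to the
#    entry line before them, so each entry becomes a single line
# 2. grep: keep entries that mention the method and the date
# 3. awk: keep entries whose time of day falls between 15:00 and 17:15
sed -e :a -e '$!N' -e 's/\n\([^0-9]\)/ | \1/' -e ta -e 'P;D' app.log |
  grep 'at com\.example\.Foo\.bar(' |
  grep '^2000-01-01 ' |
  awk 'substr($2,1,5) >= "15:00" && substr($2,1,5) <= "17:15"'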
LtG wrote:And why is it difficult? Because all the semantic meaning of the data (dates and exceptions) has been thrown out and has been reduced to magical text, where the file doesn't even know how many lines there are, and the lines don't even know their own length.
Of course, if I have data in a different format, I use a different method to filter it. Still I can have a command line tool for that instead of a GUI.
Programmers' Hardware Database // GitHub user: xenos1984; OS project: NOS
LtG
Member
Posts: 384
Joined: Thu Aug 13, 2015 4:57 pm

Re: Are command line interfaces for an OS obsolete?

Post by LtG »

XenOS wrote:
LtG wrote:Of course it can be any reason, but providing the context allows the tool to do a better job.
But that would require that the tool knows about all the possible contexts, which makes it more complex.
XenOS wrote: But for that it first has to figure out where strings start and where they end. How would you figure that out without searching for the beginnings and ends of strings? Needless to say, you can of course cook up a regex that filters out strings first and then searches for the typo only in those. So that is "providing context to grep" without making the tool more complex.
You'd rather hand code the "context" every time?

The tool wouldn't be significantly more complex, just some interface for allowing new formats to be understood by the tool.
XenOS wrote: And how would I use that "tiny part of the IDE" only, without having to run the whole, big, bloated IDE if I just want to use a tiny part of it?
I'd rather use the whole IDE; granted, most (all?) of them are slightly bloated, but I'd still rather get all the benefits they offer.

I simply objected to you comparing one tiny feature (grep) with the whole IDE, because you don't develop with grep alone, you probably use the CLI too, and vim, and a compiler and other stuff. So the bloat is still there, just pieced out, and possibly smaller.
XenOS wrote: It's not hard to do that with regex. Apart from that, I keep separate projects in separate folders, so there is no reason to specify "project name" as context, because I just run grep in the project folder (and subfolders).
So you do provide the context, with CWD, which is exactly what it is. Early on they realized it would be kind of neat if you could have an implicit context.
XenOS wrote: If it's a tool run automatically, then why on earth should it have a GUI? A UI is for user interaction.
Because you want control over it?
XenOS wrote: If your task requires loading files of that size, something's wrong with either the task or the method you use to solve that.
Log files shouldn't be rotated, but because the system is bad they created a workaround, instead of fixing the system.

Large systems tend to have large log files, and I certainly wouldn't rotate them more often than once a day, unless the files were too large and the system so crappy that it's the only way, but that would still be a workaround.

First your argument was that vim can handle large files, now it's that when it can't, those files are too large and it's the file's fault. So by definition vim is perfect, because when it fails it's somebody else's fault.

Granted in this case there's not much vim could do, the problem lies in the magical text file format.
XenOS wrote:
LtG wrote:So with a log file, how would you find all instances of exceptions thrown from a Java program where the stack trace has a specific method call and that happened during a specific time period, say 1.1.2000 between 15:00 and 17:15?
Simple - grep for the method name, grep for the date, filter with awk to get the time window. Since Java stack traces are multi-line, I'd probably cat the file through sed before and after, so that newlines correspond to beginning and end of a stack trace. Or if that whole thing is too complicated, just write a small Perl script to do that.
You call that simple? I call that convoluted. Assume it's a 1GiB log file, how many times does your proposed solution go thru all the data?

For a one-off search of something I have to write a Perl script? And you don't find anything wrong with that picture?

Aside from that, your argument seems to be that because it can be done in a CLI, there's no reason for a GUI (or anything better).

By that line of reasoning we could all use brainf*ck the programming language, I wouldn't. I want something better.

Brainf*ck is Turing complete, so anything you can do in any other language you can do in brainf*ck, and anything you can do in a GUI you can do in a CLI (if we allow the definition of a CLI to include showing pictures, etc).

I usually have some high level task I need to complete, like fixing a problem. To do that I need to figure out why it happens, so I need to look at the logs, and at that point with the CLI and grep I immediately end up dealing with minutiae, instead of staying high level and asking the log what happened with a particular request, or what was happening at a particular time, etc.

If we reduce CLI to mean textual, then the title question of this topic is:
Can you live without a GUI (including browsers)? For me that's a no.
Can you live without a CLI? Yes. If the necessary changes were made I'd do so happily.

Today under Linux I do need the CLI, in Windows I don't. So today I wouldn't want to get rid of the CLI, because Linux really doesn't allow me to do everything within the GUI. On my own OS I of course hope to make everything better, and adding the burden of creating a CLI is just more work; serving two masters will only result in compromise.

Plus it's an interesting challenge figuring out how to do all the necessary stuff in such a way that it feels intuitive in a GUI. Like, I don't want to use Excel for data wrangling; on Windows I use it because it's a tool I have and it can usually get the job done the fastest.
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: Are command line interfaces for an OS obsolete?

Post by Schol-R-LEA »

XenOS wrote:
LtG wrote:If it doesn't, then that's a problem with FF/Chrome.
...which is exactly the reason why I would not use FF/Chrome in this case, but - again - grep. It is one tool, it handles many of my use cases and I don't need specialized / different tools depending on whether I want to filter C++ or HTML.
LtG wrote:In what way is a C++ source file structured? It's not, it's a plain text file.
It follows C++ syntax, so it is structured into functions, classes etc.
I believe that LtG's point (correct me if I am wrong) is that the structure of it as a text source file is implicit, rather than structured as (e.g.,) a syntax tree or some other data structure which can be rapidly traversed without additional parsing. The relevant point being that some IDEs - not all, or even most, but some - will have already generated such a structure upon loading the file, building a collection of annotated symbol tables of the various identifiers with metadata regarding what they refer to and the locations they appear in, a partial AST of the code (if the IDE is language-aware enough, whether internally or through an extension script), tries for auto-completion, pre-loaded buffers of source files of code which the current file references, etc.

While these operations do expend resources - execution time and memory - and may not be needed in a given session, they do speed these operations up from the programmer's perspective, as they are done in the background at points when the programmer's attention is elsewhere (this relates to the oft-debated assertion that mouse motions are in fact faster than keyboard commands, but that the key commands seem faster because the time spent on them is mostly in recollecting the command combination, while the time used on mouse actions is more of the muscle-memory variety and thus more evident to the programmer's perception). One can argue that the speed-up is illusory, but since the cycles and memory 'wasted' on them would otherwise be unused, they actually do provide a real gain in working performance.
XenOS wrote:
LtG wrote:It's not optimized, it's the opposite of optimized. It's going to do a brute force search of all the data, instead of looking up a variable name in a structured source file. This is similar to the zero terminated strings, to get the length you have to iterate thru the whole thing.
It is optimized to do exactly that - do a brute force search as fast and efficient as possible. Which is exactly what I want in the mentioned use cases.
I think that much of this argument comes down to matters of perspective. LtG is thinking in terms of the high-level problem ("How do I find all instances of this function being called") while XenOS is focused in the lower-level ("what substrings in this file match the function name"). The former isn't really any 'better' than the latter, or vice versa, but they are not the same question, and do not result in the same answer.

I would add that calling grep 'brute force' is a bit misleading, at least in terms of how it performs the search. LtG seems to be saying that it's 'brute force' with regards to how it accomplishes the objective, though, rather than how the algorithms perform.

Still, it might be worth considering how a regex compiler such as grep(1) actually operates, as it isn't actually doing string searches in the usual sense at all (and I am using the word 'compiler' advisedly, as it actually generates code in a compile-and-go fashion). My understanding is that the typical implementation for a regex search engine is, in fact, a rump lexical analyzer more closely related to compiler tools such as LEX than anything. It works by generating a finite-state machine recognizer on the fly based on the given pattern, which it then either evaluates as a bytecode, or temporarily stores in memory as a native executable and exec()'s as a separate process.

(TBF, several string search algorithms also work though constructing a DFA table which they then operate over, but regex engines are distinct in that they usually generate actual code of one type or another. Also, the point about IDEs is that they aren't doing a string search, either, but some sort of table or tree lookup instead - the IDE has already analyzed the code and is working with the data structures it already has.)

It is a good deal more sophisticated - and more resource-intensive - than either of you seem to realize. I know that this may not sound relevant, but my point is that the 'simpler' tool in question is anything but, and both of you are basing part of your argument on the same fallacy; even if you are both using it for opposing points, the fact remains that it is a fallacy.
Last edited by Schol-R-LEA on Sat Aug 10, 2019 12:24 pm, edited 2 times in total.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
LtG
Member
Posts: 384
Joined: Thu Aug 13, 2015 4:57 pm

Re: Are command line interfaces for an OS obsolete?

Post by LtG »

Schol-R-LEA wrote:
XenOS wrote: ...which is exactly the reason why I would not use FF/Chrome in this case, but - again - grep. It is one tool, it handles many of my use cases and I don't need specialized / different tools depending on whether I want to filter C++ or HTML.
LtG wrote:In what way is a C++ source file structured? It's not, it's a plain text file.
It follows C++ syntax, so it is structured into functions, classes etc.
I believe that LtG's point (correct me if I am wrong) is that the structure of it as a text source file is implicit, rather than structured as (e.g.,) a syntax tree or some other data structure which can be rapidly traversed without additional parsing. The relevant point being that some IDEs - not all, or even most, but some - will have already generated such a structure upon loading the file, building a collection of annotated symbol tables of the various identifiers with metadata regarding what they refer to and the locations they appear in, a partial AST of the code (if the IDE is language-aware enough, whether internally or through an extension script), tries for auto-completion, pre-loaded buffers of source files of code which the current file references, etc.
Correct, I essentially "loathe" "magical plain text files". HTML is text, Linux config files are text, JSON is text, most things are text, because it's supposed to be simpler.

The reality is that it's not simpler. With grep you have to perform an incantation to produce an incantation for the args (structure serialized to magic text), so that at the very next step grep can go and parse that incantation back into something meaningful, a struct.

It happens with all the config files (and INI files on Windows side), and it produces a whole lot of problems without any benefits. Most config files have some booleans, but are they yes/no, true/false, 1/0, or something else? Often they accept more than one, increasing complexity, mostly on the receiving code side, but also mental complexity for the configurer.

How many Linux CLI apps have their own parser for args? How many of them have bugs? How about config parsers? How should a config parser deal with multiple declarations? Keep the last, keep the first, error out? I'm suggesting it shouldn't be allowed in the first place, and the way to do that is to use a binary format (ASCII text is also a binary format, but magical, because people tend to think there's a difference between it and other binary formats).

I think Windows registry tried to address this problem, but failed. I'm not sure what went wrong, often I like to think it's because of design by committee, but I really have no idea.

I'm not bashing Linux here, it's just that CLI apps are more common on Linux side.

I think source code files should be binary, so the AST is preserved, that way IDE loading a project is faster, refactoring (rename, etc) is easier and always correct, etc. It also means faster compile times.

Though all of that is orthogonal to GUI vs CLI/TUI, I think it's relevant because trying to do all of that in a CLI/TUI isn't very feasible. I do agree that piping on the Linux side works fairly well, but that doesn't mean something that achieves the same goal couldn't work on the GUI side. Windows apps are mostly monolithic; there's very little interop between apps besides the clipboard, which is an absolute time saver.

A lot of these problems are legacy/historical. C++'s problems are significantly due to C, and C's problems are mostly due to its origins:
- it was created for writing Unix (I guess apps first, the kernel second)
- there was a massive problem with resources compared to today

PDP-7 only had 64 k-words (144 KiB), which probably affected some design decisions. Doing a multipass compilation was probably harder than a single pass, thus the need for function declarations and headers (which are evil).

A few simple rules I try to follow. They're a bit harsh, to focus on the point, but also because they require discipline - it's not always easy. The second one is more relevant to this post, but I thought I'd mention the first one too.
- If you need to comment your code, then you've failed. Fix the code so you don't need comments.
- If you need to add a configuration option then you've _likely_ failed. Don't offload your failures to the user.
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: Are command line interfaces for an OS obsolete?

Post by Schol-R-LEA »

Damn, I was in the middle of editing my post again, so I'm not sure if you saw the part about how grep works vs. how the IDE's search would presumably work. Sorry for that. The point about how regex compilers operate may be relevant.

Also, does anyone have any comment on my post about Oberon and its solution to the question?
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: Are command line interfaces for an OS obsolete?

Post by Schol-R-LEA »

LtG wrote:Correct, I essentially "loathe" "magical plain text files". HTML is text, Linux config files are text, JSON is text, most things are text, because it's supposed to be simpler.

The reality is that it's not simpler. With grep you have to perform an incantation to produce an incantation for the args (structure serialized to magic text), so that at the very next step grep can go and parse that incantation back into something meaningful, a struct. [..] (ASCII text is also a binary format, but magical, because people tend to think there's a difference between it and other binary formats).
[..]
I think source code files should be binary, so the AST is preserved, that way IDE loading a project is faster, refactoring (rename, etc) is easier and always correct, etc. It also means faster compile times.
This is in some ways in line with my own views, though I am coming at it from an even more radical angle (that the concept of files itself, both functionally and conceptually, tends to constrict how data is used, imposing mental burden and forcing a lot of unnecessary operations when retrieving said data).
LtG wrote:Though all of that is orthogonal to GUI vs CLI/TUI, I think it's relevant because trying to do all of that in a CLI/TUI isn't very feasible.
I think that XenOS's point is that this is exactly how they are doing it - that they don't use a GUI in the sense you mean, period, and only use graphics at all when they need to run something that is inherently graphical such as an image editor (and that even then, much of it is done using shell-launched tools such as filters, with only the end result being displayed after completion). This was a normal mode of operation for decades, it just seems unfamiliar today because it isn't how most people learn it now. Make of this what you will.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.