Any solution must account for backward-compatibility. For instance, continuation sections cannot change behaviour solely based on which line ending is used in the source file, because scripts already primarily use `r`n but have continuation sections which produce `n, and the behaviour is explicitly documented. The meaning of `n in FileOpen's second parameter cannot be changed; it already enables conversion both ways.
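To illustrate the two documented behaviours mentioned above, here is a minimal sketch in AHK v2 syntax (the filename is hypothetical):

```ahk
; A continuation section joins its lines with `n by default,
; even if the script file itself is saved with `r`n line endings.
text := "
(
line1
line2
)"
; text now contains "line1`nline2".

; FileOpen's second parameter: the `n option enables EOL translation
; both ways - `r`n becomes `n when reading, `n becomes `r`n when writing.
f := FileOpen("example.txt", "w`n")
f.Write("a`nb")   ; stored on disk as "a`r`nb"
f.Close()
```

Changing what `n means in either context would break scripts which rely on this documented behaviour.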
RaptorX wrote: 09 Dec 2023, 16:37
I think this is the part that we find a bit odd (at least me and everyone who selected "yes" on the poll). When I say FileAppend text, 'file.ahk', 'UTF-8' I am not converting from UTF-8, but rather to UTF-8.
You previously suggested to use `r as the option for producing `r`n, and yet "UTF-8 `r" would not convert to UTF-8 with `r as a line ending.
FileAppend would obviously default to one or the other. My preference is `n
The default behaviour is to write whatever is given. If you want `n, you use that in the data being passed to the function. As a default behaviour, this has the least potential for surprise, and is the most efficient way to permit every kind of line ending.
Defaulting to `n implies that the function will interpret and convert line endings by default. This may be unwanted, even if contrary behaviour hadn't already been established and documented.
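A sketch of the current default behaviour, in AHK v2 syntax (filenames hypothetical): the text is written verbatim, so the caller decides the line endings by including them in the data.

```ahk
; By default, FileAppend writes exactly what it is given;
; no line-ending interpretation or conversion takes place.
FileAppend "a`r`nb", "crlf.txt"   ; file contains CRLF between a and b
FileAppend "a`nb", "lf.txt"       ; file contains a bare LF
```

Any conversion-by-default would have to reinterpret data like this, which is exactly the surprise described above.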
There is no consistency on what `n means between commands
The inconsistency you have demonstrated is merely in the wording of the documentation, not the meaning of the option.
FileAppend, FileRead and FileOpen take data between memory and file. In what terms could you explain an option as having the same meaning in all of these contexts? You must either eliminate the differences between the contexts, or account for them. The difference between them is that FileAppend writes to file, FileRead reads from file, and FileOpen could do either. The commonality is that when the `n option is used, `r`n is in the file and `n is in memory. In other words, `n means to convert between `n (in memory) and `r`n (in the file).
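As a sketch of this commonality in AHK v2 syntax (filename hypothetical), the same `n option produces the opposite conversion in each direction, yet the invariant holds in both cases:

```ahk
; With the `n option: `r`n in the file, `n in memory.
FileAppend "a`nb", "demo.txt", "`n"   ; `n in memory -> `r`n on disk
text := FileRead("demo.txt", "`n")    ; `r`n on disk -> `n in memory
; text is "a`nb" again; the file itself contains "a`r`nb".
```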
In reality Join`n produces a string with `n but FileAppend with the `n option gives you a file with `r`n... And that's logical somehow.
Each option is logical within the appropriate context. Nothing can be taken out of context and retain its original meaning perfectly, by definition:
context: the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood.
What is not logical is using a comparison between continuation sections (a language construct) and file I/O functions to argue that options in these disparate constructs aren't logical. You aren't just removing context, but changing the context to one which makes less sense.
I have noticed that most of the back and forth that you and I usually have, centers around the difference in thinking between a programmer and a non-programmer.
There are many more than two different ways of thinking. There are also non-programmers who properly exercise logic, and programmers who don't.
Most of the time, what I'm referring to is not strictly logical
To me, "not strictly logical" is just irrational. You could arrive at a conclusion by intuitive leap, yet still rationalise and explain it with logic. If you don't understand why you arrived at a conclusion, how can you expect to communicate understanding to someone else?
Logic is an important part of communicating effectively, especially in a debate.
I was asking about the purpose of this topic, questioning whether there is actually a practical reason to change the `n option. I fail to see what that has to do with thinking like a "programmer".
What Lexikos is arguing is that it is irrelevant what letter we use for that option
I don't think I did.
iseahound wrote:For example, `n correctly replaces CRLF to LF when reading from an external file, but [and correctly] does the inverse when writing to the file.
Writing is the inverse of reading.
For simplicity, just search for the first instance of `r, `n, or `r`n. If none of these are present, the current encoding of the master script should be used instead?
Simplicity is to have a single line ending value which is either a known default or whatever the caller explicitly specified. Searching the file (if the file is even being opened with read access) adds complexity. Providing a default based on what line ending the script file uses adds more complexity, and potential for error. I see no reason to assume there is any relation between the encoding of the script file and the encoding of the files it is processing.
RaptorX wrote:I think the results of that conversion varies between them
In what way?