I should probably apologize: I did not see how much thought and experience you had put into this before I began making my criticisms. I pointed out subjects you had already examined and made rational decisions about, under the false assumption that you were unfamiliar with some of the alternatives.
[Shambles] wrote: ↑
24 Nov 2018, 13:47
It is reasonable to wonder what a low-level functional programming construct is good for, but it puts the person answering the question in a similar situation to answering "What is addition good for?".
The primary improvement comes from encouraging a style of programming where you focus on describing how to process a single value then reuse that over and over (e.g. to build more complex functions, potentially at run-time, or as the body of the equivalent to a loop). This is as opposed to a programming style where you do a lot of explicit state manipulation and control flow. Those things are hidden behind functions and reused, like everything else. This also has a tendency to make things somewhat safer because it is easy to make mistakes when manipulating state or using control flow. If the code that does that is in one place and reused everywhere, defects tend to get noticed and corrected, and they only need to be corrected in one place.
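If I follow, the idea is to write the step for one value once and let shared machinery do the looping. Here is a rough plain-v1 sketch of my understanding (my own toy names, not your library's):
Code: Select all
; Describe how to process ONE value...
Double(x) {
    return x * 2
}
; ...then put the loop in ONE reusable place instead of everywhere.
Map(f, arr) {
    out := []
    for _, v in arr
        out.Push(f.Call(v))
    return out
}
MsgBox, % Map(Func("Double"), [1, 2, 3])[3]  ; 6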
It sounds like this style of programming is strongest for performing data processing. Given that most code I write is for API interaction, be it the Windows API or some Web API, do you think this approach is something that I would find helpful?
[Shambles] wrote:The decisions made with regards to AutoHotkey’s object model are bad because we know of better alternatives, and why they are better, and we knew this long before AutoHotkey was ever written (e.g. Smalltalk was released in 1972). Yes, these decisions need to keep context in mind. I did so.
There are other programming languages where objects are effectively dictionaries. The one most people would probably mention is Smalltalk, where literally everything is an object and literally everything (even the meaning of true and false) can be inspected and altered at run-time. That is setting a very high bar though. Python is a better example because, absent the mistakes, Python’s object model is similar to AutoHotkey’s. Python objects are literally dictionaries. However, Python does not consider foo['bar'] = 0 and foo.bar = 0 to be the same thing. The first will use the __setitem__ special method and the second will use the __setattr__ special method. Conflation is the right term for this mistake. The inside of the dictionary is confused with the outside of the dictionary. This makes AutoHotkey less powerful (it cannot make a distinction), not more powerful (there is nothing you can do because of this mistake that you could not do in a language that did not make this mistake).
The decision to use weak typing is bad. Again, it is bad because we know of better alternatives and why they are better. The problem with weak typing, which should not be confused with dynamic typing (there is a good video on this topic), is this: instead of an operation failing when an operand is of the wrong type, a weakly typed programming language will try to coerce the operand into a type that can work. The only time an error will be detected is when the programmer writes code to check the type manually or when the weakly typed programming language cannot come up with more ways to make sense out of nonsense. This usually results in silent corruption or a crash far from the point where the error actually occurred. All this is separate from AutoHotkey intentionally ignoring errors, by the way, which I should hope is obviously a bad decision.
I agree with much of what you are saying here, great points!
Allowing different spaces in an object to be accessed separately by the dot operator and the brackets operator is a good thing, and it is something that Python does much better than AutoHotkey. It was not clear from your original texts that this is what you meant, even in part, by conflation. I understood your position to be that using a dictionary-type structure as the basis for the language's object system was itself flawed.
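To make the conflation concrete for anyone following along, in v1 both spellings land in the same slot:
Code: Select all
foo := {}
foo["bar"] := 1
MsgBox, % foo.bar  ; 1 -- bracket and dot syntax read and write the same slot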
As far as weak typing goes, yes, you are right. It made sense for what AutoHotkey was (text macros, hotkeys, mouse/keyboard manipulation, clipboard manipulation), and a lot of people still use it for just that. It does not make sense for AutoHotkey as a programming language in a larger capacity.
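The silent coercion bites exactly the way you describe:
Code: Select all
x := "abc" + 1       ; no error in v1: the non-numeric operand silently yields ""
y := x * 10          ; the blank value keeps propagating
MsgBox, % "[" y "]"  ; shows [], far from where the mistake happened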
Ignoring errors, yes, is a bad design choice. It is a backwards-compatibility concession to support code written before the exception system was implemented. This is being worked on in v2.
[Shambles] wrote:AutoHotkey’s tacked on, barely an afterthought, OOP system is bad. Again, it is bad because we know of better alternatives and why they are better. Perl made the same mistake. When most types exist outside of, and in ignorance of, the object system, non-objects and objects do not work well together. To make things worse, most of AutoHotkey’s object-like constructs also exist outside of its object system. For example, a function object obtained from Func is not an object in the same sense that a user-defined function object (i.e. a class with suitable members) is. There are methods that exist on the user-defined object that do not exist on the built-in function object, and the built-in function object has no base class.
Tacked on is accurate; barely an afterthought is a little dismissive of the work that Lexikos has put into the language/interpreter over the years; and bad is, I think, a slight overstatement. I don't claim that it's as good as possible, or that there's nowhere it could be improved. The goal, as I understand it, was to create an object system that worked with the rest of the interpreter, was dead simple for people who had no idea what they were doing (leading to 1-indexing, all data existing in the same space so there is no question of how to access it, and many other oddities), and was still flexible enough to let the people who did know what they were doing take advantage of it. Being good in a theoretical sense was not the priority; helping non-programmers get up and running quickly was. It meets the requirements it set out to meet, even if those do not lend themselves well to, say, enterprise development.
You are right that the inclusion of separate objects that do not derive from the common base is a bit of an oddity. Function objects, BoundFunc objects (which really ought to just be function objects), and RegEx objects are admittedly bizarre. COM objects do not derive from it either, but I think that's a rational decision given their capacity as an interface to remote systems.
[Shambles] wrote:You went on to claim that the mistakes I have mentioned make AutoHotkey more powerful again. I have already covered why that is not true. You did not bring up the other problem that the problems you did bring up make worse.
I am not sure what portion of my text you are referring to specifically, but my suggestion, however weak, was that it is more flexible than a language which does not offer the ability at all, not more powerful than a language that implements similar features in a more effective manner. I apologize for not making myself clear.
[Shambles] wrote:For a dynamically type checked programming language to work well it must report errors (this should be obvious) and it must provide a reliable way to determine the type of any value or use a shared interface on all values that can exhibit a behavior (it need not do both, but most programming languages do). AutoHotkey does neither, and this is a major problem. There is no way to test the type of all values because built-in and user-defined types do not share a common type hierarchy. AutoHotkey does not use common interfaces to implement common behaviors. Detecting and reporting errors and writing code to work with different types is extremely difficult due to this decision.
This is unfortunately very accurate. As I mentioned above, v2 is working on the error-reporting side of things. I'm not sure whether it is also working on type determination, but I sure hope so.
Using a shared interface on all values is unlikely to ever happen with AHK, for better or worse (probably worse). From a macros-and-automation standpoint it's largely unnecessary, but from a pure programming standpoint it sure would be nice.
Writing code to work with different types is pretty difficult in AHK, yes. This comes up when writing wide-reaching libraries like the one you're working on here, but in my experience rarely comes up in day-to-day usage of the language. This isn't to say it's not bad, just not often a nuisance.
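To illustrate the difficulty, the best type probe I know how to write in v1 is a patchwork, and it still cannot tell the built-in object flavors apart (TypeOf is my own made-up helper, not part of the language or Facade):
Code: Select all
TypeOf(v) {
    if (IsObject(v))      ; true for Object, Func, BoundFunc, COM, ...
        return "Object?"  ; ...with no common way to narrow it down further
    if v is integer
        return "Integer"
    if v is float
        return "Float"
    return "String"
}
MsgBox, % TypeOf(Func("StrLen"))  ; "Object?" -- the same answer as for {}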
I am aware of how to write a class that acts like a safe dictionary. I already did so long ago. It is HashTable, and Facade uses it.
HashTable’s page describes how AutoHotkey’s objects are unsafe. Facade’s page summarizes it. Undesirable behavior that occurs reliably is still undesirable.
Again I must apologize, your hash tables are much better thought out and implemented than I initially assumed.
I agree that AHK's objects' indexing (and other features) behave undesirably in many cases. I would likely not have said anything on that if you had written "undesirable" in the readme rather than "unreliable".
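One concrete example of the kind of behavior I mean, for anyone reading along: v1 object keys are case-insensitive whether you want that or not.
Code: Select all
m := {}
m["Key"] := 1
MsgBox, % m["KEY"]  ; 1 -- "Key" and "KEY" are the same key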
[Shambles] wrote:Also, no, hash tables do not guarantee the same keys come in the same order. They very frequently will not do so if the hash table is mutated (this can lead to a resizing, and thus rehashing, in most implementations).
I had considered whether to write more or less about that, and it seems I missed the mark as I often do. The unwritten assumption of my text was that the user was not triggering any kind of rehash between iterations. Guarantee was too strong a word.
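For what it's worth, what I was leaning on is that v1 objects themselves iterate in sorted key order rather than insertion order:
Code: Select all
obj := {}
obj["b"] := 1
obj["a"] := 2
keys := ""
for k in obj
    keys .= k " "
MsgBox, % keys  ; "a b " -- string keys come back alphabetically, not in insertion order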
[Shambles] wrote:I have never written code that needs to operate on a dictionary’s keys in order. I know of a variety of ways to do it, but it would almost always indicate I was writing bad code. Dictionaries are for when you want to perform lookups efficiently by something other than consecutive integers. That usually implies that order is irrelevant. Other programming languages that /do/ provide ordered dictionaries almost exclusively retain the order that keys were inserted, instead of ordering the keys alphabetically (which has no meaning for object keys, which are useful in graph traversal algorithms). That is because insertion order is the only key order that makes sense in the face of mutation.
Sparse arrays are useful in the obscure case where you are trying to represent a very large (too large to fit into memory) data set where most of the values are the same. In every other situation they only cause problems. One does not expect an Array’s length to change if it is reversed or sorted, but it can if it can contain missing elements (specifically, leading missing elements in the reverse case and any missing elements in the sort case). And then there is the question of how one compares missing elements for the purpose of sorting. Writing code to work around this almost never desirable behavior for every comparison function used when sorting is no fun.
Actually using Arrays with missing elements is occasionally necessary in AutoHotkey. Specifically, it is necessary in the situation where you want to call a function that specifies defaults for some of its parameters variadically. If all parameters have defaults, you might need to use leading missing elements. If more than 1 parameter has defaults in a function with >= 3 parameters, you might need to use missing elements in the middle. These situations are, thankfully, rare. Those situations are the reason Facade even supports working with Arrays with missing elements.
These are very valid criticisms which I generally agree with. AutoHotkey's objects just weren't designed for such tasks, and AutoHotkey wasn't built for efficiency. It's not the right tool for these tasks and it's annoying to use for them in situations where you have to.
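For anyone who has not hit the variadic case you mention, it looks roughly like this, if I have the v1 semantics right (F is just a made-up example function):
Code: Select all
F(a := 1, b := 2, c := 3) {
    return a ", " b ", " c
}
args := []
args[3] := 99        ; elements 1 and 2 are deliberately missing
MsgBox, % F(args*)   ; a and b fall back to their defaults: 1, 2, 99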
[Shambles] wrote:As for strong wording, I have written a lot of AutoHotkey code, which has led to a lot of frustration. These libraries are an attempt to reduce that frustration by plastering over AutoHotkey’s problems so that I never again have to deal with them. I would have stopped using AutoHotkey long ago if it were not so useful to Windows system administrators. AutoHotkey’s good parts are its I/O facilities, not its programming language.
You are right. Given that, why put your effort into improving the language features instead of improving the I/O facilities of another language? Is it library support, interoperability with existing codebases of your own, to appease other administrators, or some other reason?
I appreciate your strong viewpoints and candid responses even if we don't agree on everything. This thread probably isn't the best place for this type of discussion so I apologize for bringing it up.
Okay, back to code. I wanted to get started with your library, so I decided to try my hand at a text manipulation algorithm, though I think I have a few misunderstandings about how best to use the library.
I've built this script to perform a barebones ROT13 algorithm, though it's missing the critical behavior of detecting characters outside the alphabetic range.
Code: Select all
SubA := Func_Flip(Func("Op_Sub")).Bind(Asc("a"))   ; x -> x - Asc("a")
AddA := Func("Op_Add").Bind(Asc("a"))              ; x -> Asc("a") + x
Add13 := Func("Op_Add").Bind(13)                   ; x -> 13 + x
Mod26 := Func_Flip(Func("Math_Mod")).Bind(26)      ; x -> Mod(x, 26)
; Rightmost function runs first: Asc, -a, +13, mod 26, +a, Chr.
RotChar := Func_Comp(Func("Chr"), AddA, Mod26, Add13, SubA, Func("Asc"))
RotString := Func_Comp(Func("Func_Apply").Bind(Func("Op_Concat")), Func("Array_Map").Bind(RotChar), Func("StrSplit"))
MsgBox, % %RotString%("hello")  ; uryyb
I tried to write some code to detect if a character was in the alphabetic range so I could chain it to Func_If such that it would perform RotChar on characters in the range and pass the character through using Func_Id if not. My code unfortunately didn't work.
Code: Select all
GtA := Func("Op_Le").Bind(Asc("a"))  ; x -> Asc("a") <= x
LtZ := Func("Op_Ge").Bind(Asc("z"))  ; x -> Asc("z") >= x
IsAlpha := Func_And(GtA, LtZ)
MsgBox, % %IsAlpha%(96)  ; 96 is just below Asc("a") = 97, so expect false
This gives me the exception "Argument is an Object". It appears that when I run Func_And with a single predicate it behaves as I expected, but when I run it with multiple predicates it fails.
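For reference, what I was reaching for overall was roughly the following; the Func_If signature here is just my guess from its name:
Code: Select all
IsAlphaChar := Func_Comp(IsAlpha, Func("Asc"))                 ; character -> code point -> in [a, z]?
SafeRotChar := Func_If(IsAlphaChar, RotChar, Func("Func_Id"))  ; rotate, or pass through unchanged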
Also, either I'm misunderstanding the library or this isn't one of those tasks where it decreases verbosity. Compared to my more traditional implementation of the same ROT13 behavior, the function composition version hides the logic among a lot more fluff.
Code: Select all
Out := ""
for k, v in StrSplit("hello")
Out .= Chr(Mod(Asc(v) - Asc("a") + 13, 26) + Asc("a"))
Programming with function composition certainly requires a different head space than most of the code I've written in the past. I definitely plan to continue investigating how I might fit it into my workflow.
There have been community members who tried to build function-composition-facilitating libraries before (though I can't remember any particular names), and none of them ever got anywhere close to where you are now. Your code is very sound, both in theory and in implementation.