This might seem rambling. I am trying to lead you through the insights necessary to understand the answers you asked for, and to answer questions I believe you need to ask in addition to the ones you asked.
guest3456 wrote: ↑
23 Jan 2020, 21:48
I mean, I guess I'll use my example: my main script manipulates multiple game windows. I'm doing image and pixel searches to determine the game state, and then moving windows around, changing z-order, or using hotkeys which send clicks to the game window, depending on the state that my image searches detect. Halfway through the life of the script, I decided to try to write unit tests, and found it nearly impossible; I had to create GUIs to act as fake game windows, etc.
My main script is just a big timer loop that keeps doing the image/pixel processing to determine what actions to take. I have a very difficult time trying to figure out how to do this the functional way, but I think it's more my lack of knowledge that is the limiter.
Niklaus Wirth wrote a textbook with this title:
Algorithms + Data Structures = Programs
The title is insightful. The reason we should teach students algorithms and data structures is that they are what programs are described in terms of: the more of them a programmer knows, the easier it is for that programmer to describe programs and the more efficient their programs will be.
Textbooks that teach programming paradigms are rare, and courses that use them are rarer. Advanced Programming Language Design is the best one that I know of. The reason we should teach students programming paradigms is that using a suitable paradigm tends to make writing part of a program easier: it makes parts of the algorithms implicit, so the original author does not have to write them and maintainers do not have to read or maintain them.
Using the right paradigm occasionally makes enough of a difference to seem to grant magical powers. For example, did you know that if you really understand OOP, you can write software that is impractical to hack or infect with malware? Capability-Based Computer Systems will teach you how.
One way to think of a programming paradigm is the operational (what it does in the machine) way I just described (implicit control flow and resource management, and sometimes error handling).
Another way to think of a programming paradigm is as a perspective.
One thing that makes programming difficult is humans' lack of foresight. We tend to not think ahead and have a very poor understanding of the implications of what we do and do not do. That is why we write programs that eventually corrupt their state.
Is there anywhere in human experience that these problems are minimized? Yes! Conventional mathematics (the kind taught in elementary school).
Conventional mathematics models things as permanent relationships. When solving an equation, the value of a variable never changes. You might solve it again for different values (or in programmer-speak "you might pass it different arguments") but within the same context (or in programmer-speak "call") it never changes (or in programmer-speak "the same variable might have different values in different calls but never in the same call"). Time stands still.
Functional programming is what happens when you use this perspective when programming. You stop thinking about change over time and start thinking about permanent relationships.
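To put that in AutoHotkey terms, here is a tiny illustration (Double is a made-up name) of "the same variable might have different values in different calls but never in the same call":

Code: Select all
Double(X)           ; X is fixed for the duration of each call.
{
    return X + X    ; Nothing here is ever reassigned.
}

MsgBox % Double(2)  ; 4
MsgBox % Double(5)  ; 10 (a different call, so a different X)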
But change over time does occur and it matters! A computer that does not react to input is just an inefficient space heater!
Early functional programmers (e.g. those using Lisp in the 1960s) learned to separate the code that performs side effects from the code that performs processing. They used the functional programming paradigm where they could (for processing) and the imperative programming paradigm when they had to (for retaining the state of their program across events and performing I/O).
This was before "publish or perish" flooded academic journals with junk research; before universities (and thus researchers) were defunded by governments for teaching students to think for themselves instead of being obedient drones (and then became businesses charging students as much as the market will bear for the training businesses should be providing on the job); and before corporations invented an endless stream of products to sell (often creating the problems they get paid to solve). The end result of those malignant processes is a tech industry in decline, suffering from a strange sort of amnesia born of neophilia and dogmatic thought born of marketing. If you are old enough and willing to pay attention to unpleasant truths, you will start noticing things like how the current cloud computing craze sounds a lot like the bad old days of rent-to-never-own time-sharing computing, and how functional reactive programming from the 1990s sounds a lot like some new researchers trying to take credit for dataflow programming from the 1960s.
Let's say you are in the throes of this malaise. Maybe you are someone who wants to sell a product that claims functional programming is a silver bullet. Maybe you are someone who actually believes functional programming is a silver bullet.
Is there some way to shoehorn side effects into the functional programming paradigm? Yes... Conventional mathematics can model time. Imagine a conventional graph where the X axis represents time moving from 0 at the beginning of a process onward and Y represents the function's value at that point in time. The function is still a fixed relationship between X and Y.
How does this translate to programming?
The changing real world can be thought of as that X axis: it is the argument passed to your program, which you think of as a function, and the output the program produces can be thought of as the Y value at that point on the X axis. This is what is meant by "the state of the world".
Some modern programming languages, like Haskell, make you model things this way.
Haskell would make you use its complex type system, and its documentation claims programs written in Haskell are somehow pure (as in referentially transparent) despite clearly performing side effects, by playing word games with what is meant by program and side effect. This trick involves claiming that programs written in Haskell merely compute the good and pure value representing the state of the world, while the evil and dirty runtime system (which is somehow 'not Haskell') is what actually performs the side effects the state of the world describes. This argument is vacuous. It would define C as a purely functional programming language and allow one to argue it 'just has better syntax for describing the state of the world' (i.e. performing side effects). The reason programmers should care about side effects is that whether they happen (if you remove them, the program's meaning changes), when they happen (if you delay them, the program's meaning changes), and the order they happen in (if you reorder them, the program's meaning changes) all matter, and any compiler's optimizer is going to have to treat Haskell code with side effects the same as imperative code with side effects.
The Haskell code would be arranged exactly like the code good functional programmers would have written in older programming languages.
What would an AutoHotkey-syntaxed outline of that look like?
A command-line program (like a compiler) would look like this:
Code: Select all
; Implement ReadWorld(A_Args) here.
; Implement ComputeWorld(World) here.
; Implement WriteWorld(World) here.
; Executed once:
WriteWorld(ComputeWorld(ReadWorld(A_Args)))
An event-driven program (like a video game) would look like this:
Code: Select all
; Initialize World here.
; Implement ReadWorld(World) here.
; Implement ComputeWorld(World) here.
; Implement WriteWorld(World) here.
; Executed repeatedly, once per event:
World := ComputeWorld(ReadWorld(World))
WriteWorld(World)
In both kinds of programs ReadWorld performs input, ComputeWorld performs processing, and WriteWorld performs output. In the command-line program ReadWorld receives the command-line arguments and might perform additional input (e.g. read some files). In the event-driven program ReadWorld receives the old state of the world and probably performs additional input (e.g. reading the window's message queue for keyboard and mouse input). The command-line and event-driven programs differ in that the command-line program contains code that is executed once and the event-driven program contains code that is executed repeatedly.
These examples are unrealistic. They have been simplified to be easy to understand. Notably, event-driven programs usually consist of many event handlers, and each of those would have different code for input, processing, and output. Most hardware has different interrupts (and therefore different interrupt handlers) for updating the screen, updating the sound queue, reacting to key presses, reacting to mouse events, reacting to USB input (like gamepads), reacting to input on network interfaces, and so on. All of them would update the state of the world. Many of them would only add pending input to the state of the world, not perform any output, or only update an output queue, not perform any input.
Your program should look a lot like the event-driven program just described.
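As a rough sketch of what that shape could look like for a script like yours (all of the names, coordinates, and window titles below are invented for illustration; your real ReadWorld would do your image searches and your real WriteWorld would do the window moving and clicking):

Code: Select all
World := {GameState: "unknown", PendingClicks: []}
SetTimer, Tick, 250
return

Tick:
World := ComputeWorld(ReadWorld(World))
WriteWorld(World)
return

ReadWorld(World)      ; Input only: look at the screen, change nothing.
{
    PixelGetColor, Color, 100, 200, RGB
    World.LastColor := Color
    return World
}

ComputeWorld(World)   ; Processing only: decide what to do, no I/O.
{
    if (World.LastColor = 0xFF0000)
        World.PendingClicks.Push({X: 100, Y: 200})
    return World
}

WriteWorld(World)     ; Output only: perform the decided actions.
{
    while (World.PendingClicks.Length())
    {
        Next := World.PendingClicks.RemoveAt(1)
        ControlClick, % "x" Next.X " y" Next.Y, ahk_exe game.exe
    }
}

Everything that touches the screen or the game lives in ReadWorld and WriteWorld; everything else lives in ComputeWorld, which only examines and returns plain data.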
Why bother structuring it that way? You answered this question yourself when you found out how hard it was to test and debug code where I/O is mingled with processing.
Representing your program's state as a value also makes some interesting things possible, like time traveling debugging.
I suggest writing programs this way:
1. Figure out what data your program requires to do what you want.
2. Figure out what you want to do to that data. This tells you what algorithms to use.
3. Knowing what algorithms to use tells you what data structures to use (e.g. searching a dictionary is efficient, searching an array is not).
4. Knowing what data structures to use tells you how to represent your 'world'.
5. Write the simplest possible output procedure(s) you can for your main procedure or event handlers. Test them. Debug them. Try to never change them again.
6. Write the simplest possible input procedure(s) you can for your main procedure or event handlers. Test them. Debug them. Try to never change them again.
7. Write the processing function(s) to update the world for your main procedure or event handlers.
Why this order?
Some of it, like the parts that tell you what algorithms and data structures you need, just has to be done that way.
Output procedures are easier to test and debug than input procedures. It is hard to know if you are receiving input if you cannot see or hear anything.
If you follow this plan, the code that is inherently difficult to test, the I/O code, is as simple as possible (and thus unlikely to be defective) and need not be regularly retested and redebugged.
It is easier to stay motivated when you can see progress being made, and the I/O code is what lets you literally see progress being made.
One problem remains. How do you test and debug a world changing function? It no longer performs any I/O, but it shoves all the complexity into one place, so it is likely to have defects.
Most good programming languages have a REPL. If your only experience programming is with AutoHotkey, this will be foreign to you. REPL stands for "read, evaluate, print, loop". They read code you type into them, evaluate it (i.e. execute it to get the resulting value), 'print' the result to the screen, then wait for more input. You might be wondering what it means to 'print a value to the screen'. Long ago, people programmed using something called a teleprinter that looked a lot like an old-fashioned typewriter but was connected to the computer so that both the programmer and the computer could type. When either party finished typing something, the paper would scroll up. When you know that, the way the REPL works becomes more obvious. But what about values that aren't numbers or strings? Data structures are usually represented as the equivalent literal syntax. Things that have no equivalent literal syntax, like a function object or closure, are usually represented as something surrounded by 'angle brackets'.
A REPL session for an AutoHotkey-syntaxed programming language might look something like this:
Code: Select all
2 + 2
; 4
5 / 0
; Error: division by zero
X := [1, 2, 3]
; [1, 2, 3]
In most programming languages you would break your world changing function down into a lot of separate functions that it would call, and you would test these at the REPL as you wrote each one. The world itself, along with any changes to it, would be observable because you could see the data structure at the REPL.
But AutoHotkey does not work like that. So what do you do?
That is one of many reasons I do not enjoy programming in AutoHotkey, despite it being useful and despite my being good at it.
Another major reason is that v1 does not actually report errors the way that fictional REPL session above does. Luckily, someone wrote Facade.
What I do is write some throwaway visualization code for what I am working on.
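For example (just a sketch; DumpWorld is a throwaway name and it only handles the cases I happen to care about at the moment):

Code: Select all
DumpWorld(Value, Indent := "")   ; Throwaway: turn a World value into readable text.
{
    if IsObject(Value)
    {
        Text := ""
        for Key, Item in Value
            Text .= Indent . Key . ": " . DumpWorld(Item, Indent . "    ") . "`n"
        return "`n" . Text
    }
    return Value
}

ToolTip % DumpWorld(World)       ; Glance at the current state while the script runs.

Run it before and after calling your world changing function and you can see what changed.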
But there really is no way to visualize a function value. So what do you do?
That is an excellent reason to get all the function-constructing code written once, debugged, and never write such code again. Luckily, someone wrote Facade.
Hopefully this helps.
guest3456 wrote: ↑
23 Jan 2020, 21:48
[Shambles] wrote: ↑
23 Jan 2020, 14:35
So when you want something to change its value under complex circumstances, instead of literally detecting the circumstances and mutating some variables, you could construct some code, at run-time if necessary (e.g. when the 'complex' part is that the desired behavior can only be known based on user input), that accepts the 'circumstances' as arguments and calculates the result. What was once a bunch of tests in branches that mutated some variables now becomes a reference to a function that computes a value.
This sounds important, but I'm not sure I understand it. Can you come up with a quick example of the before and after?
Your program is an example.
There are simple examples in Facade's documentation for Func_Applicable and Func_CIf.
Func_Default's code could be considered an example. Some parts of Facade are 'written in Facade', and it is one such part.
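To give the flavor of the before and after in plain AutoHotkey (not Facade; Mode, Value, UserFactor, and the function names are all invented for the illustration):

Code: Select all
; Before: detect the circumstances and mutate a variable in branches.
if (Mode = "double")
    Result := Value * 2
else if (Mode = "negate")
    Result := -Value
else
    Result := Value

; After: the circumstances select (or construct) a function once, and the
; branches become a single call that computes a value.
Transforms := {double: Func("Double"), negate: Func("Negate"), keep: Func("Keep")}
Transform  := Transforms.HasKey(Mode) ? Transforms[Mode] : Transforms["keep"]
Result     := Transform.Call(Value)

; When the behavior is only known at run time (say, a factor the user
; typed in), construct the function then:
Transform := Func("Scale").Bind(UserFactor)
Result    := Transform.Call(Value)

Double(X)
{
    return X * 2
}
Negate(X)
{
    return -X
}
Keep(X)
{
    return X
}
Scale(Factor, X)
{
    return Factor * X
}

Building those 'after'-style function values without hand-writing helpers each time is roughly the job the Facade functions mentioned above are meant to do for you.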