Wow, Gio, thanks for all that information. What I'm realising is that this is such a complex subject that I'll need to do more learning - I haven't even really studied this thread yet.

I understand the principle of adjusting the weights, and also the use of the gradient (although my calculus is a bit rusty these days); my problem is following the complex formula by which all the data gets weighted, combined and crunched, and it shows when I actually try to construct code to do something new! I didn't do matrix stuff in school, so I'm trying to catch up, but I've been a hobby programmer for 30 years so I'm used to complex data structures. I think it's just about getting familiar enough until it clicks and you kind of see it. Maybe.

I did a little experiment in the last couple of days translating your code to deal with a more complex problem, but it's highlighting how little I know. It's not giving any meaningful results, either because the problem needs a different approach, or because I've stripped the second dimension out and this is when I need to use it. Please don't feel obliged to keep schooling me through this, though, I'm just sharing the journey. I've also switched language, because I find AHK a struggle and I can make quicker progress in BASIC.

The idea was to find a simple mathematical function that I could plug in instead of the three bits and an output bit. I used the highest common factor (HCF) of two integers, which is pretty easy to calculate, and then, if successful, I'd hope the net would "infer" a correct answer for a new pair of integers. Obviously this is different because the HCF result is an integer >= 1, not just a binary value as in the original. Anyway, my final results all approach 0.99999999, which would be nice if that represented anything.
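For reference, the HCF itself is just Euclid's algorithm; here's a minimal sketch (in Python rather than BASIC or AHK, purely for illustration):

```python
def hcf(a, b):
    # Euclid's algorithm: replace (a, b) with (b, a mod b)
    # until the remainder hits zero; the survivor is the HCF.
    while b:
        a, b = b, a % b
    return a

print(hcf(12, 18))   # 6
print(hcf(7, 5))     # 1 (coprime)
print(hcf(100, 75))  # 25
```

One possible clue about those results: if the net's output neuron is a sigmoid, it can only emit values strictly between 0 and 1, so raw HCF targets of 1 or more would all push it toward saturation, which might be why every answer approaches 0.99999999.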

I'm thinking now about other ways to approach the question, for instance, having just two input "neurons" representing the integers, and normalizing these with A / Range so they're both floats between 0 and 1 - this is a little more like the inputs in the handwritten digits example 3blue1brown uses, where the brightness of each pixel is normalized first. That would obviously be the way to go if the whole shebang requires 0-1 data. But it might also require Range number of output neurons, as that example uses 10 for the different numerals.
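That scaling idea can be sketched directly (Range here is an assumed upper bound on the input integers, just for illustration):

```python
RANGE = 20  # assumed upper bound on the input integers

def normalize(n, rng=RANGE):
    """Map an integer 1..rng onto a float in (0, 1]."""
    return n / rng

def one_hot(hcf_value, rng=RANGE):
    """Encode the target HCF as rng output neurons, like the
    10 digit-neurons in the MNIST example: neuron number
    (hcf_value - 1) is 1, all the others are 0."""
    return [1 if i == hcf_value - 1 else 0 for i in range(rng)]

print(normalize(5))    # 0.25
print(one_hot(3)[:5])  # [0, 0, 1, 0, 0]
```

The one-hot targets keep everything in 0-1 territory, at the cost of needing Range output neurons, exactly the trade-off described above.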

As I say, don't go to any trouble over this, but if you want to chip in with suggestions, that's very kind of you. I like messing about with stuff I don't understand until I either understand it or get bored and give up! I'll go back to reading for a while. Cheers.

## Neural Network basics - Artificial Intelligence using AutoHotkey!

### Re: Neural Network basics - Artificial Intelligence using AutoHotkey!

Hello Ahketype,

I see that you are coming across some difficulties, but it also seems like you are doing it just right: the path of experimentation, inquiring into the specific parts of what constitutes an ANN, the idea of approaching a problem using these bases, and the attempt to have the theory sink in, that is indeed mostly what the study of ANNs consists of.

To be perfectly honest, when dealing with ANNs it is not always easy to get a first working code for solving a problem. The theory is a basis of study precisely because everything in the code can change a lot between successful implementations. One example: in the example code of this tutorial, we set the initial weights to random numbers between -4 and +4. When I changed this to a broader range (e.g., between -10000 and +10000), I found the code had a much harder time approximating the correct answers. Similarly, when I ran a Python program for recognizing handwritten digits from the MNIST database, it quickly achieved over 90% accuracy, but when I ran it again it took a while and still could not exceed 85% accuracy. After studying the code for a while, I think this is closely related to the initial weights, as if a certain set of initial weights had a huge impact on how many iterations are required to approximate the results; the truth is, though, that I have yet to find the reasoning that would tell me the best initial random range for, say, the MNIST database.

I like to think this is somewhat like a classroom full of students: some of them will naturally understand the teacher's words, while others will require many more hours of studying to reach the same level of understanding, yet none of them is really better than the others. It may just be that some of them somehow lucked out in having their brains more prepared for that specific type of problem when the class started (because of somehow unrelated life experiences?).
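The effect of the initial weight range can be seen even on a single sigmoid neuron. Here is a hypothetical sketch (not the tutorial's code; the task, weights and iteration count are made up) that trains the same toy problem from a small and from a huge initialization:

```python
import math

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clip to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

# Toy task in the tutorial's style: output should equal the first input bit.
X = [[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]]
Y = [0, 1, 1, 0]

def train(weights, iterations=10000):
    w = list(weights)
    for _ in range(iterations):
        for x, y in zip(X, Y):
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            err = y - out
            grad = out * (1.0 - out)       # sigmoid derivative
            for i in range(3):
                w[i] += x[i] * err * grad  # inputs * error * gradient
    # return the mean absolute error after training
    return sum(abs(y - sigmoid(sum(wi * xi for wi, xi in zip(w, x))))
               for x, y in zip(X, Y)) / len(X)

small = train([-0.5, 0.3, -0.2])          # magnitudes like the -4..+4 range
huge = train([9000.0, -8000.0, 7000.0])   # magnitudes like -10000..+10000

print(small, huge)  # the small initialization converges; the huge one stays stuck
```

With the huge initialization the sigmoid saturates, its derivative is effectively zero, and the weights barely move, which is one plausible mechanism behind what the broader random range does to the tutorial's code.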

Anyway, that is just my attempt at an analogy.

Everything in the code of the example can be changed: the initial weights, the number of neurons, the number of iterations, the way data is fed into the net, and even the formula that updates the weights (adding the products of inputs, error and gradient is not the only way to go for every problem). Changing these will produce attempts that may or may not make the network more successful at solving the problem at hand. Experimentation is likely the only way we can get better at creating networks (or develop a more natural feel for what to try first). Or so I think. Big companies (Tesla, Google, etc.) have been trying for 10+ years to build nets that allow fully automated car driving, and they have actually been very successful, but none of them is getting exactly the same results. The way things stand, it may well be that some small team of people gets there first (or not); that is why ANNs are such an interesting field of research for programmers, in my opinion: I think one can still make a name for oneself with both dedication and luck.
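For concreteness, the update formula mentioned above (inputs times error times gradient) is just one choice. A sketch of a single update step, plus a learning-rate variant as one example of something that can be changed (all numbers here are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x, target = [1, 0, 1], 1
w = [0.0, 0.0, 0.0]

out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))  # 0.5 with zero weights
error = target - out                                 # 0.5
grad = out * (1.0 - out)                             # 0.25, sigmoid derivative

# Plain rule as described in the tutorial: inputs * error * gradient
w_plain = [wi + xi * error * grad for wi, xi in zip(w, x)]

# One common variation: scale the very same step by a learning rate
lr = 0.1
w_lr = [wi + lr * xi * error * grad for wi, xi in zip(w, x)]

print(w_plain)  # [0.125, 0.0, 0.125]
print(w_lr)     # [0.0125, 0.0, 0.0125]
```

A smaller step converges more slowly but often more stably; tuning knobs like this are exactly the kind of thing that differs between successful implementations.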

My suggestion for a beginner would be to study some working code first (before attempting to write a network from scratch for a new task). Studying a variety of working code can ease the difficulties of beginning a dive into this subject.

Also, have you taken a look at part II of the tutorial? The code in part I is somewhat limited; you will need at the very least a multi-layer vanilla network to solve most real problems.
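For a picture of what "multi-layer" adds, here is a hypothetical sketch of a forward pass through one hidden layer (all weights are made up, purely to show the shape of the computation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Made-up weights: 3 inputs -> 2 hidden neurons -> 1 output.
W_hidden = [[0.5, -0.4, 0.1],   # weights into hidden neuron 0
            [-0.3, 0.8, 0.2]]   # weights into hidden neuron 1
W_out = [0.7, -0.6]             # weights from hidden layer to the output

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
              for row in W_hidden]
    return sigmoid(sum(w * h for w, h in zip(W_out, hidden)))

print(forward([1, 0, 1]))  # a single value in (0, 1)
```

The hidden layer is what lets the net represent non-linearly-separable mappings that a single layer cannot, which is why part II matters for real problems.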

"What is suitable automation? Whatever saves your day for the greater matters."

Barcoder - Create QR Codes and other Barcodes using only Autohotkey !!

Archmage Gray - A fantasy shooter game fully coded in AutoHotkey

### Re: Neural Network basics - Artificial Intelligence using AutoHotkey!

Thanks again Gio. Yikes, it's too hard for me! I think I'll leave artificial intelligence and continue with my current project, natural stupidity. But it was interesting and I've learned a few things. I think I get why there were other dimensions in the arrays now, having looked at the second example.

All the best.

### Re: Neural Network basics - Artificial Intelligence using AutoHotkey!

Wooo, what a nice post.

I will check it out.

Thanks !!!

### Re: Neural Network basics - Artificial Intelligence using AutoHotkey!

Nice tutorial. Don't know why I didn't see it before.
