Articles
The People Have Voted: Ghost of a Tale First Look
rwurl=https://i.imgur.com/iOXTsTn.png
Ghost of a Tale is a real gem. Based on the preview images and videos, the art design and graphics looked very promising, but I didn't expect it to pull me in this much. The game was made with Indiegogo crowdfunding and was released in March. Since then, the bigger bugs have been nicely patched. Its creator is Lionel Gallat, who worked as an animator at DreamWorks on The Prince of Egypt, among others, and was animation director on Despicable Me at Universal Studios. This shows in the game: the protagonist's movement feels very natural.
rwurl=http://www.ghostofatale.com/wp-content/uploads/2018/02/46.jpg
Briefly about the story: we play Tilo, a minstrel mouse who wakes up in a prison, separated from his wife, Merra. Finding her is the goal of the game. The task is set: we have to figure out where Merra is being held, and we have to escape from the prison.
rwurl=https://i.imgur.com/uyscj0t.png
When we manage to sneak out, it turns out that we are not in a simple prison but in a huge coastal fortress where there is plenty to do. The fortress and its surroundings are under the control of a military organization of rats called the Red Paw, who don't take kindly to escaped mice; at first, they are the ones making our life difficult.
rwurl=http://www.ghostofatale.com/wp-content/uploads/2018/02/63.jpg
There is plenty to do: we have to track down NPCs, investigate a murder case, help expose a smuggling ring, and there is a pile of collectibles and so on. The play area is not particularly large, but measured on a mouse's scale it can be huge. Imagine that every object, the chairs, tables, the well and so on, is rat-sized, while we are the size of an 8-year-old child next to an adult. Maybe that's exactly why the game has such a nostalgic atmosphere. It brings back memories of childhood trips, when forests and castles seemed much bigger than they do to an adult.
During our adventures we can explore the aforementioned fortress and a larger forested area, and we make our way down to the shore and into the sewer and catacomb network beneath the fortress. Getting to know and mapping out the locations works very well. At the beginning, when everything is new, we sneak around slowly, taking it all in; later, once we know what is where, we dash along the familiar corridors and paths at full speed. Especially once we discover the shortcuts that make getting around much faster; until we find them, there is a lot of backtracking. A few of the areas feel a bit empty and unfinished, the harbor for example: considering how enormous it is, there is little to do there.
rwurl=http://www.ghostofatale.com/wp-content/uploads/2018/02/Screen-Shot-02-15...
Until we have proper armor, we have to do a lot of hiding, and there are suitable props for that: crates, wardrobes and so on. While sneaking, we can carefully pilfer keys or anything else we need, knock guards out for a short time with empty bottles, distract them, or set traps for them. But once we have the full rat armor set, we can stroll among them like a cheeky little mouse. From then on the situation changes: we can come and go freely, talk to them, and even do quests for them.
Since we spend a lot of time wandering around dark cellars, and at night in the forest you can barely see anything, we can carry a candle, which of course burns down rather quickly, but luckily we can also use our oil-filled lantern. Besides these, we can find a cap with an everlasting candle burning on it, but it has a negative effect on our stamina, so while wearing it we can't run, which is a disadvantage when fleeing. We can also sleep (if we find a bed): this restores a bit of our health, and if we feel like it, we can sleep all the way to morning, when it gets light.
There is no real combat in the game, apart from throwing the aforementioned sticks, bottles and slug-slime vials.
rwurl=http://www.ghostofatale.com/wp-content/uploads/2018/02/10.jpg
The characters are generally quite memorable, their dialogue is well written, and the story itself manages to stay interesting throughout. The game's world feels so cohesive that I looked up whether it was an adaptation of a book or a fairy tale, but no, it's an entirely original fantasy world. Of course, by Gallat's own admission he drew a lot from the films The Secret of NIMH and The Dark Crystal, from the Redwall books, and from the drawings of Alan Lee and John Howe, but games like The Legend of Zelda, Dark Souls and ICO also influenced him.
rwurl=http://www.ghostofatale.com/wp-content/uploads/2018/02/22.jpg
The sound and music turned out great as well. Both the songs and the themes of the individual locations are spot on, oozing atmosphere. Several times during the game, being a wandering minstrel, we have to perform something in the middle of a conversation. In those moments we have to pick the appropriate song from our songbook and Tilo performs it nicely, thankfully not in the form of a QTE.
rwurl=https://youtu.be/m3zIWWr34IA
While I was very satisfied with everything else, the endgame is the one thing I found a bit clumsy and rushed; it's the only part that can somewhat snap you out of the idyll.
All things considered, Ghost of a Tale was a wonderful adventure. It took us about 20 hours to reach the end, which I think is a very ideal game length these days. I hope the game will be a financial success and that Tilo's heartwarming story will continue.
rwurl=https://youtu.be/tigso6GJtsY
Link to the original comment: http://www.rewired.hu/comment/164383#comment-164383
Neural Networks III: How would I implement one?
rwurl=https://imgur.com/FC1QvBY
In this third article in the series, I will attempt to keep everything fairly detailed and explain everything I do as I dive deep into the actual implementation of a Feed-Forward Neural Network. To make the implementation process easier to comprehend, the article is divided into 5 sub-segments:
- Making a simple, Feed-Forward Neural Network structure
- Fixing the Neural Network’s bias/weight initial values
- Adding a learning algorithm to the Neural Network
- Multiple Input and Output Sets for our Neural Network
- Training for handwriting recognition with MNIST data set
Let’s jump right in!
I will use Java in this case, but any other programming language would follow the exact same route of ideas. I'm not going to use any exotic, language-specific solutions, and will try to keep everything as generic as possible.
Making a simple, Feed-Forward Neural Network structure:
The structure of our entire Neural Network is supposed to be very simple, just like the theoretical example in the previous article. I expect the entire codebase to cap out at around 150-200 lines of code, plus the helper utility classes.
First, let's create a Network.java class, which represents our NN object.
We will need a few integer constant attributes here, which define our network and don't need to change over the program's lifetime.
"NETWORK_LAYER_SIZES" contains the number of neurons in each of our layers.
"NETWORK_SIZE" contains the number of layers in our NN. We derive this number from the NETWORK_LAYER_SIZES array's length.
"INPUT_SIZE" contains the number of input neurons. The input layer is the first in the network, so the first number in NETWORK_LAYER_SIZES represents this value.
"OUTPUT_SIZE" contains the number of output neurons in our NN. The output layer is the last one, so it is represented by the last index of the NETWORK_LAYER_SIZES array.
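A minimal sketch of how these declarations could look in Network.java (the exact modifiers and formatting are my own choice, not the original listing):

public class Network {

    public final int[] NETWORK_LAYER_SIZES; // number of neurons in each layer
    public final int   NETWORK_SIZE;        // number of layers in the network
    public final int   INPUT_SIZE;          // neurons in the first (input) layer
    public final int   OUTPUT_SIZE;         // neurons in the last (output) layer

    // ... the rest of the class follows below
}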
Now we will declare a couple of variables to work with:
"output" contains the calculated output value of every neuron in our entire network. This value needs to be as precise as possible to give us accurate results, so we use double as the datatype. A two-dimensional array is sufficient here to store both the layer and the neuron position.
"weights" stores all the weight data of the network. Note that this needs to be a three-dimensional array to store all the necessary positions. The first index is the given layer, the second is the given neuron, and the third is the previous neuron the weight is connected to. We need this previous-neuron index because, as we learned in the previous article, a single neuron in a given layer is connected to every neuron in the adjacent previous layer.
"bias" is a two-dimensional array, similar to the output, because every neuron has one bias variable as well.
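Again as a sketch, these could be declared like this (the array names follow the descriptions above):

// inside Network.java
private double[][]   output;  // output[layer][neuron]
private double[][][] weights; // weights[layer][neuron][previous neuron]
private double[][]   bias;    // bias[layer][neuron]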
Our constructor will receive the NETWORK_LAYER_SIZES value from the caller, and all the remaining constant and variable data can be calculated from it.
When initializing the output, weight and bias values, we assign NETWORK_SIZE as the first dimension's size. We also need a FOR loop to initialize the rest of the elements along the second dimension. Note that while every neuron has an output and a bias, the very first layer doesn't have weights (being the input layer), so we only start assigning weight values from the second layer in this loop.
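A possible constructor along these lines (a sketch, assuming the plain zero-filled initialization described above; the randomized version comes later):

// inside Network.java
public Network(int... NETWORK_LAYER_SIZES) {
    this.NETWORK_LAYER_SIZES = NETWORK_LAYER_SIZES;
    this.NETWORK_SIZE = NETWORK_LAYER_SIZES.length;
    this.INPUT_SIZE   = NETWORK_LAYER_SIZES[0];
    this.OUTPUT_SIZE  = NETWORK_LAYER_SIZES[NETWORK_SIZE - 1];

    this.output  = new double[NETWORK_SIZE][];
    this.weights = new double[NETWORK_SIZE][][];
    this.bias    = new double[NETWORK_SIZE][];

    for (int layer = 0; layer < NETWORK_SIZE; layer++) {
        this.output[layer] = new double[NETWORK_LAYER_SIZES[layer]];
        this.bias[layer]   = new double[NETWORK_LAYER_SIZES[layer]];
        if (layer > 0) { // the input layer has no incoming weights
            this.weights[layer] =
                new double[NETWORK_LAYER_SIZES[layer]][NETWORK_LAYER_SIZES[layer - 1]];
        }
    }
}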
We now have a basic initialization constructor, but we also need a method that will calculate the FEED-FORWARD values. Let's call it "calculate". This method takes an array of doubles as an input parameter and returns an array of doubles as its output.
The very first IF check just makes sure that the input array's size matches our network's previously set INPUT_SIZE constant. If it doesn't, we cannot do any calculations.
The next line simply passes these input values to the output array's first element, since the input layer doesn't need to do any calculations.
After that, we have a nested FOR loop that iterates through all the remaining layers and through every neuron in the given layer. Each neuron sums the bias with the weighted outputs of all the neurons in the previous layer (iterating through yet another FOR loop), and finally we apply the sigmoid function to this sum.
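The calculate method could look something like this (a sketch matching the description above; sigmoid is defined right below):

// inside Network.java - one feed-forward pass through the network
public double[] calculate(double... input) {
    if (input.length != this.INPUT_SIZE) {
        return null; // the input must match the input layer's size
    }
    this.output[0] = input; // the input layer simply stores the input values

    for (int layer = 1; layer < NETWORK_SIZE; layer++) {
        for (int neuron = 0; neuron < NETWORK_LAYER_SIZES[layer]; neuron++) {
            double sum = bias[layer][neuron];
            for (int prev = 0; prev < NETWORK_LAYER_SIZES[layer - 1]; prev++) {
                sum += output[layer - 1][prev] * weights[layer][neuron][prev];
            }
            output[layer][neuron] = sigmoid(sum);
        }
    }
    return output[NETWORK_SIZE - 1];
}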
The math for the sigmoid function can be represented by this in Java:
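One way to write it:

// inside Network.java - squashes any value into the (0, 1) range
private double sigmoid(double x) {
    return 1d / (1 + Math.exp(-x));
}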
We can quickly make a main method to test the current version of our network with some random values. Let's instantiate our network with input and output layers containing 5 neurons each, and two hidden layers containing 4 and 3 neurons. We can feed in some arbitrary values as inputs, such as 0.2, 0.3, 0.1, 0.2, 0.5. Java has a good amount of built-in helper methods to make our lives easier as programmers, and "Arrays.toString" can print out all the values of a given array as a nicely formatted, comma-separated list.
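A quick test main could look like this (the concrete numbers are just the arbitrary ones mentioned above):

// inside Network.java
public static void main(String[] args) {
    Network net = new Network(5, 4, 3, 5); // input: 5, hidden: 4 and 3, output: 5
    double[] result = net.calculate(0.2, 0.3, 0.1, 0.2, 0.5);
    System.out.println(java.util.Arrays.toString(result));
}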
When we run this program, we notice an issue right away. No matter what input values we enter, all five of the output values are always exactly 0.5. This happens because all the weight and bias values are initialized to 0 by default, which nullifies all of our summed values, so the sigmoid function returns 0.5 every time.
Fixing the Neural Network’s bias/weight initial values:
We could change the weight/bias initialization lines in our constructor to start with 1, but I will go one step further and make those values randomized within a certain range. This gives us more flexible control over the network's behavior right from the start.
I'm creating a helper class called "NetworkTools" to store the array-creating, randomizing and related utility methods. These methods will come in handy as we go on and are very straightforward to understand; I've commented each one at its header:
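A sketch of what such a helper class could contain (the method names and signatures here are my assumptions, kept to the bare minimum we need):

import java.util.Arrays;

public class NetworkTools {

    // Creates an array of the given size, every element set to init_value.
    public static double[] createArray(int size, double init_value) {
        double[] ar = new double[size];
        Arrays.fill(ar, init_value);
        return ar;
    }

    // Creates an array of the given size, filled with random values
    // between lower_bound and upper_bound.
    public static double[] createRandomArray(int size, double lower_bound, double upper_bound) {
        double[] ar = new double[size];
        for (int i = 0; i < size; i++) {
            ar[i] = randomValue(lower_bound, upper_bound);
        }
        return ar;
    }

    // Creates a sizeX * sizeY two-dimensional array filled with random values.
    public static double[][] createRandomArray(int sizeX, int sizeY,
                                               double lower_bound, double upper_bound) {
        double[][] ar = new double[sizeX][];
        for (int i = 0; i < sizeX; i++) {
            ar[i] = createRandomArray(sizeY, lower_bound, upper_bound);
        }
        return ar;
    }

    // Returns a single random value between lower_bound and upper_bound.
    public static double randomValue(double lower_bound, double upper_bound) {
        return Math.random() * (upper_bound - lower_bound) + lower_bound;
    }
}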
Then we go back to the Network constructor and change the weight and bias initialization lines to produce some random values. The exact values don't matter at all right now; they can be positive or negative too:
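For example (the ranges below are arbitrary choices of mine):

// inside the constructor's layer loop, replacing the plain allocations
this.bias[layer] = NetworkTools.createRandomArray(NETWORK_LAYER_SIZES[layer], -0.5, 0.7);
if (layer > 0) {
    this.weights[layer] = NetworkTools.createRandomArray(
            NETWORK_LAYER_SIZES[layer], NETWORK_LAYER_SIZES[layer - 1], -1, 1);
}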
Now, every time we run the program and make one feed-forward pass, it gives us random output values, proving that the network works as intended. Without a learning algorithm, however, the network is fairly useless in this state, so let's tackle that issue as well.
Adding a learning algorithm to the Neural Network:
For every given input value combination, we need a "targeted" output value combination as well. With these values we can measure how far or close the network's currently calculated outputs are from where they should be. Let's say that for the previously declared input values (0.2, 0.3, 0.1, 0.2, 0.5) we ideally want the network to output (1, 0, 0, 0, 0) instead of some other seemingly random numbers.
As explained in the previous article, this is where Backpropagation, our learning mechanism, comes in: it tries to work out, for each of the previous adjacent layers, the weight and bias values that would produce this final desired output. It starts from the last layer and tries to figure out what weight and bias combination could produce values closer to the desired output; once that is done, it jumps to the previous adjacent layer, does these modifications again, and so on, until it reaches the first layer. We start from the last layer because its related weight and bias values have the greatest influence over the final output. Note that by the rules of the network, we can only change these bias and weight values if we want to influence the output values; we cannot change any other values directly.
The differences between the desired output and the current output are called the "error signal". Before making any changes to the weight/bias values, we finish the backpropagation completely and store this error signal for each layer (except the very first, input layer).
Intuitively this seems like a very easy task. Just subtract the targeted value from the current value, try nudging the weights/biases by some positive amount, and measure whether we got closer to or further from the desired output. If we got closer, keep adding positive values until we reach the desired output; if we got further away, start applying negative values instead and keep doing so. This would be exactly true for a simple input range, representing a straightforward curve:
rwurl=https://imgur.com/W77WOyG
But unfortunately, as we get more and more input values, the whole Neural Network function becomes significantly more complex as well, and predicting the exact "right" position and the path towards it becomes less and less obvious:
rwurl=https://imgur.com/3LgjejG
As you can see, having multiple local minimums can easily "fool" the algorithm into thinking it is going the right way, when in reality it may just be chasing a local minimum that will never produce the desired output. You can think of the algorithm as a "heavy ball" for the weights. This ball rolls down the slope from wherever it starts and stops at whatever bottom it happens to find. For instance, if the initial weight value were somewhere between 0 and 0.5, then no matter where it started to adjust, a naive "heavy ball" approach would slide down to around ~0.65 and stop there, even though we can clearly see that this would always produce a wrong result. This is the primary reason we use randomized values each time we start the training process, instead of setting them to 1: every run gives these weights a fresh chance to settle in the proper global minimum.
Furthermore, to successfully tackle backpropagation, each neuron needs an "error signal" and an "output derivative" value, besides its regular output value. Our backpropagation error function, which tells us how close we are to the target output values, looks like this:
E = ½ (target-output)^2.
I am not going to go into the details of all the related math in this article, because it's a fairly large subject of its own. But for anyone interested, I can refer you to Ryan Harris on YouTube. He has an excellent tutorial series on backpropagation algorithms that should help you comprehend the whole concept more easily:
rwurl=https://www.youtube.com/watch?v=aVId8KMsdUU
There are many well-written articles on the net about it too; Wikipedia is a great, detailed source as well:
https://en.wikipedia.org/wiki/Backpropagation
Alright, back to coding. We need to declare these two additional variables and initialize them in the constructor before we can use them:
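For instance (the variable names are my own; the commented lines belong in the constructor, next to the other allocations):

// inside Network.java - extra per-neuron data needed by backpropagation
private double[][] errorSignal;      // errorSignal[layer][neuron]
private double[][] outputDerivative; // outputDerivative[layer][neuron]

// in the constructor:
// this.errorSignal      = new double[NETWORK_SIZE][];
// this.outputDerivative = new double[NETWORK_SIZE][];
// ...and inside the layer loop:
// this.errorSignal[layer]      = new double[NETWORK_LAYER_SIZES[layer]];
// this.outputDerivative[layer] = new double[NETWORK_LAYER_SIZES[layer]];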
The feed-forwarding calculation needs to be updated with these variables as well:
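A sketch of the change, right after the sigmoid call in calculate (using the handy fact that the sigmoid's derivative can be expressed as output * (1 - output)):

output[layer][neuron] = sigmoid(sum);
outputDerivative[layer][neuron] =
        output[layer][neuron] * (1 - output[layer][neuron]);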
Calculating the error needs another method; we will call it "backpropError". It receives the target output array and does the error calculations for each layer, starting from the last one:
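A possible version of it (a sketch of standard backpropagation for the error function above, not the article's original listing):

// inside Network.java
public void backpropError(double[] target) {
    // output layer: compare against the desired target values
    for (int neuron = 0; neuron < NETWORK_LAYER_SIZES[NETWORK_SIZE - 1]; neuron++) {
        errorSignal[NETWORK_SIZE - 1][neuron] =
                (output[NETWORK_SIZE - 1][neuron] - target[neuron])
                * outputDerivative[NETWORK_SIZE - 1][neuron];
    }
    // hidden layers: weighted sum of the next layer's error signals
    for (int layer = NETWORK_SIZE - 2; layer > 0; layer--) {
        for (int neuron = 0; neuron < NETWORK_LAYER_SIZES[layer]; neuron++) {
            double sum = 0;
            for (int next = 0; next < NETWORK_LAYER_SIZES[layer + 1]; next++) {
                sum += weights[layer + 1][next][neuron] * errorSignal[layer + 1][next];
            }
            errorSignal[layer][neuron] = sum * outputDerivative[layer][neuron];
        }
    }
}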
Once we have these values, we can finally update the weights and biases of our network. We need another method for this; let's call it "updateWeightsAndBiases". It receives one parameter, the "learning rate". The learning rate is just a ratio, indicating how boldly the learning algorithm should nudge those values in the positive or negative direction. Setting this number too small produces much slower learning, while setting it too high may produce errors or anomalies in the calculations, making the whole learning process slower again.
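Sketched out, it could be as simple as this:

// inside Network.java - nudge every weight and bias against its error signal
public void updateWeightsAndBiases(double learningRate) {
    for (int layer = 1; layer < NETWORK_SIZE; layer++) {
        for (int neuron = 0; neuron < NETWORK_LAYER_SIZES[layer]; neuron++) {
            double delta = -learningRate * errorSignal[layer][neuron];
            bias[layer][neuron] += delta;
            for (int prev = 0; prev < NETWORK_LAYER_SIZES[layer - 1]; prev++) {
                weights[layer][neuron][prev] += delta * output[layer - 1][prev];
            }
        }
    }
}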
Next, let's have a method that makes our lives easier and connects all these learning functionalities together. Let's call it "train". It receives the input array, the target output array and a learning rate, and goes through all of the previously mentioned calculations:
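Roughly:

// inside Network.java - one full learning step
public void train(double[] input, double[] target, double learningRate) {
    if (input.length != INPUT_SIZE || target.length != OUTPUT_SIZE) return;
    calculate(input);
    backpropError(target);
    updateWeightsAndBiases(learningRate);
}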
We are now set to use our learning algorithm! Let's change the main method to do so. We can use a similar network setup, having, for instance, an input layer of 3 neurons with the values 0.1, 0.5, 0.2, and an output layer of 5 neurons, where we expect the output for this input combination to be 0, 1, 0, 0, 0. The FOR loop represents the number of times we run the learning algorithm and apply the changes to the weights/biases.
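Something like this (the hidden layer sizes and the 0.3 learning rate are arbitrary picks of mine):

// inside Network.java
public static void main(String[] args) {
    Network net = new Network(3, 3, 3, 5); // 3 inputs, two hidden layers, 5 outputs
    double[] input  = new double[]{0.1, 0.5, 0.2};
    double[] target = new double[]{0, 1, 0, 0, 0};

    for (int i = 0; i < 1; i++) { // raise this loop count to train more
        net.train(input, target, 0.3);
    }
    System.out.println(java.util.Arrays.toString(net.calculate(input)));
}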
Running the program gives us a result seemingly far away from the desired one.
This is reasonable again: we ran the learning algorithm only once, and as we already know, it's virtually impossible to "guess" the right weight/bias combination. The network needs to try and measure many times, over and over, until it can get closer and closer. Let's try running the learning algorithm 10 times, for instance, by changing the FOR loop value.
We can see that the output is now actually getting closer to the desired values. The ones that should be zero are around ~0.2, and the one that should be 1 is almost ~0.8. OK, let's try running the learning algorithm, say, 10,000 times.
We can see that the values are getting really close to the desired ones, and the more we train the network, the more accurate it actually becomes. It is up to us how close we want to get to the desired values before we can safely say that the network knows the right output for the right input, and how much processing power we want to trade in for the training. You can imagine that on a large network with a large amount of input data, a couple of million iterations can take hours or even days.
Multiple Input and Output Sets for our Neural Network:
In most cases, we have a large number of different input sets, and all of them need to produce a given targeted output set. We could use different variable names for each input and desired output array, but you can imagine that this would get tedious even for a couple of hundred values, not to mention millions.
To tackle this, we want to be as efficient as possible and create a new class that can contain and work with many, many inputs and their corresponding expected output values. Let's call it "TrainSet". I'm only going to talk briefly about a few of its methods, because most of them are straightforward to understand just by looking at them.
So we have a constructor that accepts the input and output sizes; these represent the number of neurons in the network's input and output layers.
"addData(input[], expected[])" expects two parameters, the first being the input array values currently inserted and the second the expected output array values for them. You can call this method as many times as you need and add as many input/expected array combinations to the set as you like, for instance with a simple FOR loop.
"getInput(index)" and "getOutput(index)" give you back these input/expected array values from the given index.
"extractBatch" gives us the ability to extract only a given range of the preloaded set, instead of all of it. This can be handy, for instance, if we have 7000 entries in the set but would like to work with only 20 for a given task.
The main method just generates some random input/expected values, stores them in the set with the help of a FOR loop, and prints them out at the end as a demonstration.
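A compact sketch of such a class (the internal representation and the exact signatures are my assumptions; the random-data demo main is omitted here):

import java.util.ArrayList;
import java.util.List;

public class TrainSet {

    public final int INPUT_SIZE;
    public final int OUTPUT_SIZE;

    // each entry is a double[2][]: [0] = input values, [1] = expected output values
    private final List<double[][]> data = new ArrayList<>();

    public TrainSet(int INPUT_SIZE, int OUTPUT_SIZE) {
        this.INPUT_SIZE = INPUT_SIZE;
        this.OUTPUT_SIZE = OUTPUT_SIZE;
    }

    public void addData(double[] input, double[] expected) {
        if (input.length != INPUT_SIZE || expected.length != OUTPUT_SIZE) return;
        data.add(new double[][]{input, expected});
    }

    public double[] getInput(int index)  { return data.get(index)[0]; }
    public double[] getOutput(int index) { return data.get(index)[1]; }

    public int size() { return data.size(); }

    // Returns a new TrainSet containing only the entries between the two indices.
    public TrainSet extractBatch(int from, int to) {
        TrainSet batch = new TrainSet(INPUT_SIZE, OUTPUT_SIZE);
        for (int i = from; i < Math.min(to, size()); i++) {
            batch.addData(getInput(i), getOutput(i));
        }
        return batch;
    }
}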
Going back to our Network class, let's create a method called "trainWithSet". This method accepts a whole TrainSet to work with, the number of training loops we would like to run over the whole set, and the batch size we would like to work with:
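A simple sketch of it (always taking the first batchSize entries here for simplicity; the 0.3 learning rate is again just my pick):

// inside Network.java
public void trainWithSet(TrainSet set, int loops, int batchSize) {
    if (set.INPUT_SIZE != INPUT_SIZE || set.OUTPUT_SIZE != OUTPUT_SIZE) return;
    for (int i = 0; i < loops; i++) {
        TrainSet batch = set.extractBatch(0, batchSize);
        for (int b = 0; b < batch.size(); b++) {
            this.train(batch.getInput(b), batch.getOutput(b), 0.3);
        }
    }
}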
We need a new main method to handle training sets, so let's make one:
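For instance (a sketch matching the description below; the random training data is only there for demonstration):

// inside Network.java
public static void main(String[] args) {
    Network net = new Network(5, 3, 3, 2); // 5 inputs, two hidden layers of 3, 2 outputs

    TrainSet set = new TrainSet(5, 2);
    for (int i = 0; i < 6; i++) {
        set.addData(
            NetworkTools.createRandomArray(5, 0, 1),  // input values
            NetworkTools.createRandomArray(2, 0, 1)); // expected output values
    }

    net.trainWithSet(set, 10, 6); // raise the loop count for better results

    for (int i = 0; i < set.size(); i++) {
        System.out.println(java.util.Arrays.toString(net.calculate(set.getInput(i)))
                + " expected: " + java.util.Arrays.toString(set.getOutput(i)));
    }
}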
I've made a network with 5 neurons in the input layer, 2 in the output layer and 3 in each of the two hidden layers. For this example this is sufficient, but this is the point where we need to think about the hidden layers' size. If we define too few neurons here, the network won't have enough space to "store" a very large number of data combinations, because new input values that set the weights and biases can override the already properly tuned ones, resulting in never-ending trial-and-error iterations that will never produce accurate results for all the desired values.
On the other hand, making the network too large makes it extremely slow to work with, and significantly slower to learn as well.
So we instantiated a new TrainSet in the main method, with the same number of input and output neurons as our network. We added 6 data entries, each containing an input set and the expected output set for it.
Finally, we loop through each input entry (in our example, that's 6 entries) and verify whether our network produces the expected output values after the training.
If we run our program, we can see that the values are random numbers all over the place.
This is fully expected now that we know how the training process works. Let's notch the training up to run 1000 times.
We can see that the numbers converge closer and closer to the expected output values the more training we do before testing. This is fully expected again. Let's try 100,000 training iterations anyway.
Yep, as expected, all the tested output values get extremely close to the expected values after this many training iterations.
You can see where we are going with this. Yes, referring back to the previous article again, we can use this to train the network with a large number of written single-digit numbers, so it can "guess" the uniquely handwritten sample that we provide to it.
Finally, training for handwriting recognition with MNIST data set:
MNIST is a large, open and free database of handwritten digit images and their corresponding output labels. It has a training set of 60,000 examples and a test set of 10,000 examples:
http://yann.lecun.com/exdb/mnist/
Let's download the training set of images and the training set of labels and store them in the /res folder. We can also create another file here, called number.png, a 28*28 pixel image that will eventually contain our personally handwritten, testable digit.
We will make several classes to work with the MNIST dataset values and connect them with our network. First the “MnistDbFile.java” to help us work with the database files:
Next is the “MnistImageFile.java” to work with the images in the database:
Next is the “MnistLabelFile.java” to help us work with the labels over the database:
And finally, the “Mnist.java” file, that will contain our main method to run the training algorithms, connect them with the training sets, and finally test our handwritten number and try to guess its value:
Let's take a look at the main method in the "Mnist.java" class. The input neuron array size is 784, each entry representing a single pixel value of a written number (from the 28*28 images). The output neuron array size is 10, each entry representing a single digit from 0-9. The two hidden layers' neuron counts are set to 70 and 35 in this case.
The "createTrainSet" method gives us the ability to choose only a range of the 60,000+ values. Setting this to a reasonably small number reduces the time needed to load the desired number of training values. In this example, we are using the values from 0 to 5000 out of the 60,000.
The "trainData" method does all the training steps. We pass it the created neural network, then the created TrainSet. After that, we pass the number of "epochs" we want to loop through, then the number of "iterations" we would like to run. Finally, we pass the batch size we would like to work with. We preloaded 5000 images, so we might as well pass all of those, but we can always choose a smaller or bigger number.
By "iteration" we mean the number of times we loop through the training methods, as we did in the previous examples. "Epoch", on the other hand, means the number of times we rerun all the iteration loops with all the working datasets. You can think of the two terms as nested loops: "iterations" being the inner loop, while "epoch" is the outer loop. The final results become more accurate the more training iterations and epochs we run, with more and more unique datasets. Naturally, the larger these numbers get, the slower our whole training process becomes as well.
The "testMyImage" method loads our custom handwritten image and tries to guess its digit value, based on the network's trained knowledge. Let's draw any number with the mouse, with a white brush over a completely black background. These pixels will represent input values ranging from 0 (completely black) to 1 (completely white). In my case, I wrote the number 3:
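I won't reproduce the MNIST reader classes here, but just to give an idea, the image-loading part of such a testMyImage helper could be sketched like this (ImageIO-based; the file path and method shape are my assumptions, not the original code):

// inside Mnist.java (needs: import javax.imageio.ImageIO;
//                            import java.awt.image.BufferedImage;
//                            import java.io.File;)
public static void testMyImage(Network net) throws Exception {
    BufferedImage img = ImageIO.read(new File("res/number.png"));
    double[] input = new double[28 * 28];
    for (int y = 0; y < 28; y++) {
        for (int x = 0; x < 28; x++) {
            int red = (img.getRGB(x, y) >> 16) & 0xFF; // white brush on black background,
            input[y * 28 + x] = red / 255.0;           // so brightness maps to 0..1
        }
    }
    double[] out = net.calculate(input);
    int guess = 0;
    for (int i = 1; i < out.length; i++) {
        if (out[i] > out[guess]) guess = i; // the most activated output neuron wins
    }
    System.out.println("The network's guess: " + guess);
}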
rwurl=https://imgur.com/00CJKGH
Let's run the program. As you can see, this takes significantly longer than our previous small examples; we are working with a larger amount of data over a larger network. This is a good time to mention that the training normally only needs to happen once, even if it takes hours or days to complete. Once we have properly trained our network, all the weight and bias values can be serialized and saved to a file, for instance, so whenever we want to read a new handwritten image and ask the network for its guessed digit value, it can process it almost instantly. I will not go into the details of the serialization process in this article, but I will assume that a reader at this level knows what I'm talking about, or can figure it out very easily.
Did the network produce an accurate result? If yes, excellent! If not, don't give up! Keep fiddling with the parameter values, give the network some more storage in the form of neurons, more training data, or more iteration/epoch loops, and retry until you manage to get it right.
Got any ideas where else you could use this technology?
Into the Breach First Look
rwurl=https://www.youtube.com/watch?v=tnURf37cXdQ
A puzzle game. You are given three mechs, each equipped with different tools, and with them you have to defend the cities from the attacks of the bug-like aliens (the Vek) and, if possible, wipe them out. The Vek come in many varieties too, with many different abilities; they move first and then prepare to attack, and we can see on the battlefield how and where. Then it's our mechs' turn: we can start averting or redirecting the blows aimed at the cities, culling the enemy, or even goading them into wounding each other. Every move has to be thought through carefully; often only a very convoluted solution can prevent the city, and ourselves, from suffering irreversible damage, and sometimes we can only reduce the harm. If we really must, we can jump back in time to the start of our turn, but only once per battle.
First and foremost we have to protect the city blocks, because if they are damaged, the population dies (which affects our score), but more importantly, the Power Grid weakens too, and if it drops to zero, the game is over. A high value is also good because it increases the percentage chance that buildings will occasionally shrug off an attack, and it wouldn't hurt to have plenty in reserve for the endgame either... Secondly, we worry about our mechs: if their health runs out, they are only good for scrap for the rest of that battle, and their pilot dies too. And pilots gain very useful skills over the course of the game (losing even a first-level crew member is no joy, let alone a fully trained one).
On top of that, every battlefield has some extra objective that makes our life harder but offers new possibilities, and rewards us if we manage to pull it off. One example is a dam to be demolished: if it breaks, the water sweeps away the nearby Vek. Another is a power plant that, sitting in a hard-to-defend position, is a constant target for the bugs, but if we protect it, it increases the Power Grid value...
rwurl=https://www.youtube.com/watch?v=oaiFvuWsfy8
Time capsules sometimes fall onto the battlefield, and by rescuing them we can obtain valuable upgrades for the mechs. If we clear a whole island, we can also buy such upgrades with the points accumulated during the battles. The mechs can get upgraded reactors, which let them bring new systems online or unlock extra abilities of the existing ones. With the leftover points we can patch up the Power Grid that the Vek have battered.
Then we can take on the next island, where considerably stronger bugs await us. After clearing at least two islands, we can attempt the final mission; the more islands we have grown stronger on, the more effectively we can fight there, but the fiercer the bugs are as well.
Whether we win there or fail at any point of the war, we can send one of our pilots back in time, so we can start the next playthrough with a small advantage. In addition, by completing various achievements we can unlock new mech squads that fight completely differently from the others, further increasing replayability.
Once again a simple but brilliant little game has been born at Subset Games; I'm glad it exists and I'm looking forward to the next one!
rwurl=https://www.youtube.com/watch?v=awHepFu-YBk
Neural Networks II: How do they work, where can I use them?
In the second article in the series, I am attempting to:
- Very briefly mention a few examples of all the Neural Network types and branches, since there are many.
- Focus on the oldest and simplest one, the "Fully Connected, Feed Forward Neural Network".
- Explain in great detail how it works, using intuition and graphs rather than math, to make it as easy as possible to understand.
- Explain the commonly used related terminology.
- Show a real life example where and how you could use it.
The first steps toward artificial neural networks were made some 75 years ago, and they have become one of the hottest emerging technologies of recent years. The original idea was to produce a working mathematical abstraction of how a biological brain might function in theory, as I mentioned in the previous article.
You don't have to be a neuroscientist to have at least a very basic understanding of how a biological brain works. There is a large number of brain cells called "neurons" that can form connections called "synapses" between each other, based on the various signals they receive from our body over our lifetime. If you go through a similar experience, similar neurons will fire up along those connections, so you will remember the given situation more easily and react to it faster and more accurately.
There are many, many types of Neural Network branches and sub-branches nowadays, all of them trying to get as close as possible to the "perfect" solution for the original idea. The search is still ongoing: we still don't know exactly how the biological brain works, and we don't even know whether that is the best way to achieve intelligence at all. We may well come up with an even more efficient approach than our current biological solution, as we have in many other areas of the modern industrial world.
Some of the main ANN branch examples include the "Feed Forward Neural Networks", sometimes referred to as "conventional" neural networks. This is the earliest and oldest solution, based on the idea that neuron connections are "fed forward" between neurons, so the information can travel through them in a simple, intuitive way, usually starting from the leftmost positions and ending up in the rightmost ones.
The most well-known sub-branches here include the "Convolutional Neural Networks", where the connections are filtered and grouped between neurons to simplify and scale down a large amount of information into abstracted representations. This is generally used for image recognition nowadays. Another well-known sub-branch is the "Fully Connected Neural Networks", where each neuron in a given layer is connected with every single neuron in the previous layer.
More modern main branch examples are the "Recurrent Neural Networks", where connections can form cycles or similarly non-conventional links between each other. Some sub-branch examples include the "Bi-directional NN" or the "Long Short-Term Memory NN". The latter is generally used for speech recognition.
"Spiking Neural Networks" are sometimes referred to as the third generation of NNs; they activate neuron connections in a seemingly random, "spiking" way, and are probably the closest real representations of the biological brain we have nowadays.
In this article we are going to deal with (you guessed it), the oldest and simplest one to tackle: the Fully Connected, Feed Forward Neural Networks.
Let's first understand, step by step, what they consist of and how they work; later on we can talk about how we can use them.
What is a Fully Connected, Feed Forward Neural Network?
From the highest level, think of it as a calculating box where on one side you can feed in some information, and on the other side you can receive the calculated results:
rwurl=https://imgur.com/A0LWkLq
You can have more than one input and output values, technically any number of input or output values you would require, even very large ones:
rwurl=https://imgur.com/subBMJW
If you open the box, you will see all the neurons and the layers separating them. The very first layer is the "input layer", and each neuron there stores an input value. Similarly, the very last layer is the "output layer", and each neuron there stores a final output value:
rwurl=https://imgur.com/s5iHctX
The layers in between are referred to as "hidden layers". They are called "hidden" because we never see (nor do we really care) what happens in them; we just want them to help figure out the right results for our final "output layer". There can be several of these hidden layers, but usually a few are enough, since the larger this number gets, the slower all the calculations become.
As I've said before, in an FCNN each neuron in a given layer is connected to all the neurons in the adjacent previous layer. A single connection must be between adjacent layers, we cannot skip over a layer, so one connection between two neurons would be represented like this:
rwurl=https://imgur.com/x6Wk5VI
Connecting one neuron to all from the previous layer can be represented like this:
rwurl=https://imgur.com/qzTJiqO
After finishing populating all the rest of the connections, the network will look like this, hence the name “Fully connected”:
rwurl=https://imgur.com/0RfKlUy
Let's break this down some more. Probably the most interesting component here is the "Neuron". What would that be, and how does it work?
This can get fairly “mathy”, but I will try to spare you by avoiding referring to math, and just giving the intuitive explanation whenever I can.
If we focus on one neuron, we can see that it can receive many values from one side, apply a summary function that adds these values up, and lastly it will apply a “Sigmoid” function to this sum, before releasing the neuron’s calculated output.
rwurl=https://imgur.com/IKKutPg
The sigmoid is an "S"-shaped function, as you can see on this graph, and its purpose is to transform the sum into a value between 0 and 1. Even if the sum turns out to be a crazily large or crazily small number, it will always be "corrected" back to somewhere between 0 and 1 by this function. We do this to make working with the data in the network simpler. It's much easier to understand numbers close to 1 as "perhaps yes" and numbers close to 0 as "perhaps no".
rwurl=https://imgur.com/Lz82eVY
What do I mean by “perhaps”? As I’ve said in the first article, neural networks by design are not meant for super precise calculations like we would expect from normal computers, but to do good approximations, and they will do better and better approximations as they train more.
Going back to our example, let’s assume we have 3 neurons with output values between 0 and 1 somewhere: 0.8, 0.3, 0.5:
rwurl=https://imgur.com/HpiYEUE
The sum function will add all the received values up.
sum(0.8, 0.3, 0.5) = 0.8 + 0.3 + 0.5 = 1.6
After that, the neuron applies the Sigmoid function to this value, squeezing the result back to somewhere between 0 and 1, which gives 0.832 as the output value of this neuron:
sigmoid(1.6) = 0.832
This would be the formula for the Sigmoid function, for those who would like to see the math as well:
rwurl=https://imgur.com/p3Su53a
If we keep doing this for every neuron until we reach the final output layer, we get our final calculated values. But as you have perhaps realized, we would get the same output results every time for the same given input values. In many practical cases we cannot modify the input values, since we receive them from some other source, and the behavior of the sum and the sigmoid function is fixed as well, yet we would still like to influence and shape the output values somehow. Out of this need came the idea of "Weights", which are basically custom numbers stored on the connections between the neurons. People usually refer to the connections between neurons simply as "Weights".
So how do “Weights” come in play?
Weights get multiplied with the neuron outputs before those values are added up in the sum function, so, for example, if all the weights were 1, nothing would change:
rwurl=https://imgur.com/yPchuhO
sum (0.8, 0.3, 0.5) = 0.8*1 + 0.3*1 + 0.5*1 = 1.6
But if we turn these weight values up or down somewhere, the outputted value can be very different:
rwurl=https://imgur.com/idwKgeJ
sum (0.8, 0.3, 0.5) = 0.8*-0.5 + 0.3*2.2 + 0.5*0.4 = -0.4 + 0.66 + 0.2 = 0.46
Now, this solution would be almost perfect, but people found out over time that there may still be cases where, even after applying heavy weight modifications all around the network, the final output values are still not close to the desired numbers, because of the Sigmoid function's design. This is where the concept of "Bias" was born.
"Bias" is very similar to a Weight in that it is a single modifiable, arbitrary number, but the difference is that it applies to each neuron only once, in the Sigmoid function, translating it left or right.
Imagine a situation where your final values, after applying the sum function with Weights, converge to near 0. After applying the Sigmoid function, the output value gets bumped back to around 0.5, while you would rather keep that value indicating 0. This is where a Bias can be applied: it basically translates the whole sigmoid function in one direction, modifying the output greatly. Let's see the difference with a bias of -5 or +5:
rwurl=https://imgur.com/Lj0Rk3N
As we can see on the graph, shifting the sigmoid with the bias changes the result dramatically: adding a bias of -5 to the near-zero sum before applying the Sigmoid function pushes the neuron's output very close to 0, while a bias of +5 pushes it very close to 1.
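To tie the whole neuron calculation together, here is a tiny sketch using the numbers from the weighted-sum example above, assuming the common convention where the bias is simply added to the sum before applying the sigmoid:

public class NeuronDemo {

    static double sigmoid(double x) { return 1d / (1 + Math.exp(-x)); }

    public static void main(String[] args) {
        double[] inputs  = {0.8, 0.3, 0.5};
        double[] weights = {-0.5, 2.2, 0.4};
        double bias = 0; // try -5 or +5 to see the output pushed towards 0 or 1

        double sum = bias;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        System.out.println(sigmoid(sum)); // ~0.61 for sum = 0.46 with bias 0
    }
}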
So we are happy now: with all this flexibility we really could achieve any desired final output values!
The basic concept of “Fully Connected, Feed Forward Neural Network” is established. How or where could we use it?
Let's have a nice practical example: we want it to read written numbers from 0 to 9. How can we approach this problem with our newly set up Neural Network?
First of all, let’s clear our objective: to turn any of these written “three” number images, or any similar ones, to result “3”:
rwurl=https://imgur.com/lUsf7X9
That includes all these written “four” number images, to “4”:
rwurl=https://imgur.com/iecL0HO
… and so on, so on.
We would need to turn all these images to input values first.
Let’s take a closer look at one of them. We can see that it’s been made of individual pixels. 28 rows * 28 columns of them:
rwurl=https://imgur.com/zAKEqpT
Each of these pixels has a brightness value; some of them are very bright and some are darker. Let's represent the brightest, "unused" pixels with 0.0 and, as they get darker, with numbers closer and closer to 1.0, indicating that there is some sort of "activated" value there:
rwurl=https://imgur.com/CeYu7a6
If we convert all the remaining pixels to number representations as well, and write these values down in one long row, we have all the input neuron values ready to be processed by our NN, all 784 (28*28) of them!
As for the output neurons, the most straightforward is to have one for each desired number (0-9). So 10 neurons in total.
rwurl=https://imgur.com/KkJUhGQ
If we plug the digitized values of the image representing the written number three into the input layer, we would like to receive 0.0 on all of the output neurons except the fourth one, which ideally would be 1.0, to clearly represent the number "3". (Implying the first neuron represents "0", the second "1", and so on until the 10th neuron, representing "9".)
rwurl=https://imgur.com/CyWDBrz
But if we do that, we will find out that the output layer’s neuron values are nowhere near this but show some utter garbage:
rwurl=https://imgur.com/30oMUWC
That's because the network hasn't been "Trained" yet.
"Training" the network means (re)adjusting all the Weights and Biases of the network to certain positions, so that if we plug in the said input values, the network produces a calculated output as close as possible to the desired ideal output.
We could try to manually adjust every Weight or Bias to some positive or negative value, but we would quickly realize that even with a fair number of neurons, there are just so many combinations that it's humanly impossible to do so.
This is where the concept of “Backpropagation” comes extremely handy.
Backpropagation is one of the key features of neural networks and is a form of learning algorithm. It's probably also one of their most confusing concepts. Simplifying it as much as possible, the basic idea is to take that utter garbage output from the neural network, compare it to our initially desired output, and see how far each of those output values is from the desired one.
This number is called the "Error Bias", and once we have it, the algorithm tries to adjust the weights and biases accordingly, starting from the rightmost layers and working its way back until it reaches the input layer. We start from the back because the final output is at the back, and the connected Weights and Biases that directly affect that layer are in the previous layer; we then apply this idea to each layer in turn.
After the Backpropagation is finished, we redo the Feed-Forward step and see whether we got closer to the given value, by comparing the actual and the desired numbers again. Proper training can take hundreds or even millions of Feed-Forward and Backpropagation steps, until the network is conditioned to give us the numbers closest to the desired ones. We then need to do this training process for every new input value while making sure that the network keeps giving valid results for the previous input values as well. You can begin to understand that properly training a network over a large amount of input values, so that it always outputs values accurately close to the desired ones, is extremely hard to achieve and usually takes a very large number of training steps. But this is where the whole industry is working hard, discovering creative and different ways to approach this difficult problem.
The People Have Voted: Monster Hunter: World First Look
To elaborate: there is something about this type of game that attracts me strongly, inexplicably. Giant monsters, spectacular weapons: my mouth starts watering when I see something like that. I watched the MHW beta videos and really liked what I saw. Then the reviews started rolling in and everyone loved it: critics and players alike. I had just sold a few board games, so I figured what the heck, I'd buy the Xbox One version. That was the day after release, so about a week and a half ago. Since then, according to the game, I've put in 79 hours, and I'm still only at the 16th mission of the 25-step story quest.
Let me start right away with the negative: the game doesn't run nicely on a plain Xbox One S. It's not even full HD to begin with, but a somewhat lower resolution (around 850p, thankfully not 720p), and "thanks" to some adaptive technique, the game's graphics get heavily blurred in motion, almost eye-straining in places. On the other hand, this keeps 25+ FPS stable, so it's a trade-off, but it was still a bit of a disappointment. The look is nothing like on the newer-generation consoles. Of course it's pretty enough in itself, and your eyes get used to it after 2 minutes, but I was still a bit sad about it.
The game itself more than made up for it, though. In short, you go into one of the 5-6 available areas, find the monster you are currently hunting, then beat it to death alone or together with 3 other players. And that's it. It's not overcomplicated, there's no convoluted justification, just a simple request: "We'd like to set up camp in the jungle, but this bastard is lurking around, go smack it down!"
But the preparation for the hunt, now that deserves a chapter of its own. First of all, there are the weapons. The thing is, there are 14 different weapon types in the game, and all 14 require a completely different playstyle. And I don't mean something like there being four firearms that all do roughly the same thing with different animations, one being AoE and the other single-target... Each weapon has completely different animations, combos and buttons. The gunlance is effective at melee range, you can hit with it too, and instead of dodging you block with a shield. The bow is a mid-range weapon, you run around a few meters from the monster, jumping out of its way, while of the two bowguns the heavy one delivers raw damage and the light one all sorts of status effects. As if that weren't enough, with the firearms you can use ammo with all kinds of elemental and special effects, and obviously the weapons need upgrading too. There are several trees available for upgrades... OK, so it's a longsword, but steel or bone? Bone does more damage but dulls faster (sharpness plays an important role in the game, you have to sharpen your weapon regularly), while steel holds up better. And both have countless upgrade branches: elemental damage (fire, water, thunder), critical damage, status effects (poison, exhaust, stun); the possibilities are endless. And then there are the countless armor sets, each piece with one or more special skills that can only be obtained through armor or charms...
rwurl=https://www.youtube.com/watch?v=DW2SAnxOBCM
Then there are all the craftable aids. A hundred kinds of potions, traps, ointments, binoculars. There's even an arm-mounted slinger; you can craft a thousand kinds of ammo for that too. And of course you gather; gathering is important! Before setting off, it also doesn't hurt to eat at the canteen, where the cats (yes, it's a Japanese game, the cooks are cats, and in solo play you have a Palico, i.e. a cat companion, whose gear you also have crafted and who helps you in fights in return) prepare whatever you need from the countless dishes (they give all sorts of buffs).
Then you set off, and sometimes your jaw drops at the things happening in the game. A given map usually has 3-4 kinds of main boss, plus countless smaller critters as well; you don't have to beat up the latter, most of them can simply be avoided. You run around, searching for tracks: every track found gives you points, and once you collect enough, your knowledge of the given monster expands, of which there are 3 levels. In your notes you can then look up the monster's data: its general behavior and tactics, its weak points, and which elements it is vulnerable or immune to.
If your monster knowledge is high enough, then after you enter the map and find the first track, the game shows the shortest route to the monster. You get there, and the fun begins. The enemies are remarkably varied and often very hard; not quite as complex as a Dark Souls boss, but not that far off either. They have great abilities and animations; seriously, every fight is an experience. It's important to learn the tells before their bigger attacks, especially if you play with a slower weapon. It doesn't hurt to watch the environment either, there are usable elements: jumping from high up you can find yourself on the monster's back, or quickly shooting a certain toad stuns the enemy; there are lots of things like this. Plus, as mentioned above, the monsters have weak points: for one it's the head, for another the wings, legs, belly... Some parts (tail, wings) can even be severed, which for the wyverns, for instance, removes one of their most unpleasant poisonous attacks from the fight. Often, after taking a certain amount of damage, monsters flee and go off to rest; if you don't follow quickly enough, they slowly regenerate. Oh, and most quests also have a 50-minute time limit...
rwurl=https://www.youtube.com/watch?v=lmwxyM3sPwc
What is, however, enormous fun (at least the first time): turf wars! There is no cooler sight than when you're killing some skulking lizard and suddenly a huge dino bursts in, grabs it, shakes it and hurls it aside; then of course you run, because you're the next target! You see, the monsters aren't static, they roam around the map. And it easily happens that while you're pummeling one main boss, the map's other main boss shows up and the two monsters go at each other (of course it would be too easy if you could just wait it out, so they randomly attack you too, sometimes even both at once, which is not a happy occasion). It's quite astonishing; all of these pairings have their own animations, and such a fight can be an incredible sight.
That's really all there is to the game. You accept a quest, prepare, hunt, defeat the monster, collect the reward, upgrade, and the all-too-familiar loop starts again. Yet somehow they managed to mix it so that it doesn't get boring at all, not even for me, and I really can't stand these farming-type games otherwise. 79 hours, and I've only played with 5 of the many weapons, I've hunted 13 of the current 32 main monsters so far, and even those 13 were only the basic, weaker versions. I still see many, many hours of play in this.
From me: 5/7!
Link to the original comment: http://www.rewired.hu/comment/157194#comment-157194
Neural Networks: Why do we care and what are they?
Neural Networks, among other similarly high-tech, sci-fi-sounding terms, are showing up more and more commonly in articles around the Internet.
In this article I am attempting to:
- Give a few examples of why we would care about this technology at all.
- Demystify terminologies like Neural Networks, Artificial Intelligence, Machine Learning and Deep Learning.
- Classify them in simple terms: where they belong and how they relate to each other.
Let's have a quick overview about the current state of the technology:
Amazon Go
rwurl=https://www.youtube.com/watch?v=vorkmWa7He8
Last Monday, Amazon opened Amazon Go, a convenience store in Seattle. Its selling point is a cashier-less and checkout-line-less experience, to greatly speed up the whole shopping process. You enter the store by launching their app and scanning the displayed QR code at the gate. When you walk out of the store, all the bought items are charged to your Amazon account after a few moments.
The magic of this technology is in the store itself. They've installed hundreds of cameras on the ceiling, so they can track and process every item's position whenever you pick one up or put it back. Behind this technology sit heavy processing power and a machine learning algorithm that can track and understand what happens in the store at any moment.
Amazon has used similar machine learning technologies to suggest relevant products to potential customers, based on their previous buying or browsing behavior. This approach made Amazon the number 1 e-commerce retailer in the world.
rwurl=https://www.youtube.com/watch?v=64gTjdUrDFQ
Twitter
Project Veritas, an undercover journalist activist group, presented to the public that Twitter is perhaps using machine learning algorithms that can suppress articles, stories and tweets with certain political views and promote ones with different political views. Along a similar idea, Facebook announced that it will battle so-called "fake news" stories and suppress them from our feeds, preventing them from spreading around.
YouTube
rwurl=https://www.youtube.com/watch?v=9g2U12SsRns
YouTube is using its own machine learning implementation, called Content ID, to scan the content of every uploaded video and find the ones that break its Terms of Service or copyright law. By the way, Google uses machine learning in almost all of its services: search results, speech recognition, translation, maps, etc., with great success.
Self-Driving Cars
rwurl=https://www.youtube.com/watch?v=aaOB-ErYq6Y
Self-driving cars are another emerging market for Artificial Intelligence; a large number of companies are pushing out their own versions of self-driving algorithms so they can save time and money for many people and companies around the world. Tesla, BMW, Volvo, GM, Ford, Nissan, Toyota, Google, even Apple are working on their own solutions, and most of them aim to be street-ready around 2020-2021.
Targeting ads using ultrasound + microphone
Ad targeting in general is a huge field nowadays, and every ad company is trying to introduce more and more creative approaches to get ahead of the competition. One lesser-known idea builds on the fact that an installed application can access most of a mobile phone's hardware, so theoretically it can easily listen to microphone input signals. Retail stores can emit ultrasound signals from certain products, and if that signal gets picked up by the app (for instance, because the person spends more than a few seconds in front of a certain item), it can automatically report to ad companies that the user was interested in the product, so a little extra push, in the form of a carefully targeted ad, may make the person decide to buy it.
Blizzard
Blizzard announced that it may ban Overwatch players for "toxic comments" on social media, like YouTube, Facebook and similar places. Gathering and processing data of this size, and making the required connections within it, certainly needs its own machine learning strategies and processing power.
Facebook
Facebook patented a technology that tracks dust or fingerprint smudges on camera lenses, so that image recognition algorithms can tell whether any of the presented pictures were taken with the same camera or not. They claimed that they never put this patented technology to use, but nevertheless it's a great idea, with many different application possibilities from a development perspective.
Boston Dynamics
rwurl=https://www.youtube.com/watch?v=rVlhMGQgDkY
Boston Dynamics is one of the leaders in robotics, building some of the most advanced robots on earth. They are using efficient machine learning technologies to teach their robots to perform certain tasks and overcome certain problems.
OK... Artificial Intelligence, Machine Learning, and Neural Networks... what exactly do they mean, and how do these terms relate to each other?
We have seen that these technologies are popping up almost everywhere and are becoming more and more relevant to our everyday lives, aiding or controlling them in one way or another. Reading all these "buzzwords" in technical articles around the Internet, you have probably noticed that many of these terms are used interchangeably, or without any explanatory context. So let's demystify their meaning and properly categorize them for future reference.
First of all, let’s clear their meaning:
Artificial Intelligence, or AI, has the broadest meaning of the three terms mentioned.
It usually attempts to mimic "cognitive" functions in humans and other beings, for example learning, adapting, judgment, evaluation, logic, problem solving.
Generally speaking, an AI usually does:
- Learn - by observing, sensing, or any ways that it can gather data.
- Process - by logic, adapting, evaluating, or judging the data.
- Apply - by solving the given problem.
AI can be a chess player that tries to outsmart a human player.
AI can also be a search engine that gives you more relevant results to any of your search terms than any human could ever do, given the amount of constantly changing data and human behavior around the whole Internet.
Machine Learning, or ML, again has many implementations and a fairly broad meaning.
We can usually generalize the ideas behind it by stating: Machine Learning is a subset of Computer Science, and its objective is to create systems that are programmed and controlled by the processed data, rather than specifically instructed by human programmers. In other words, Machine Learning algorithms attempt to program themselves, rather than relying on human programmers to do so.
Neural Networks, or more accurately Artificial Neural Networks, are a subset of Computer Science whose objective is to create systems that resemble natural neural networks, like our human brains, so they can produce similar cognitive capabilities. Again, there are many implementations of this idea, but it is generally based on a model of artificial neurons spread across three or more layers.
We will get into the details of the "how exactly" in the next article.
Neural networks are a great approach for identifying non-linear patterns (for linear patterns, classical computing is better): patterns where there is no clear one-to-one relation between the output and the input values. Neural networks are also excellent for approximations.
We also hear a lot about Deep Learning, and that is just one more complex implementation of the idea of Neural Networks, involving many more layers. That allows a much greater level of abstraction than we would normally use for simpler tasks. Think of the complexity required for image recognition, search engines, or translation.
We learned now the general meanings behind these few terms, but how do they relate to each other then?
Artificial Intelligence has been around for quite some time now, and some implementations of Machine Learning are used to create far more efficient Artificial Intelligences than were possible before. Following this combining idea, Machine Learning can use the technology of Neural Networks to implement its learning algorithms.
So, as we can see, all of these technologies can function and work by themselves, but they can also be combined with each other to create more efficient solutions to certain problems. Most of the time nowadays, the latter is the case: all three of the mentioned technologies are combined and used together as the currently most efficient and effective solution to the given problems. Our most advanced Artificial Intelligences are created with Machine Learning algorithms that use Neural Networks as their learning and data-processing mechanism.
rwurl=https://imgur.com/oIVNOqB
In summary:
- We were given a few examples of why we would care about this technology at all.
- We demystified terminologies like Neural Networks, Artificial Intelligence, Machine Learning and Deep Learning.
- We classified them in simple terms and explained where they belong and how they relate to each other.
In the next articles I will explain, in simplified steps, how Neural Networks work, and will provide a programming example that any reader could implement and try out themselves. Furthermore, I will talk about the relations and differences between Artificial Neural Networks and natural neural networks (our human brain, for example). I will also talk about the concept of consciousness, as a natural question that typically follows these ideas.