2018.02.27 - 02:50, Tue Articles Neural Networks II: How do they work, where can I use them?

rwurl=https://imgur.com/FC1QvBY
In the second article of the series, I am attempting to:
  • Very briefly mention a few examples of the many Neural Network types and branches.
  • Focus on the oldest and simplest one, the “Fully Connected, Feed Forward Neural Network”.
  • Explain in detail how it works, using intuition and graphs rather than math, to make it as easy as possible to understand.
  • Explain the commonly used related terminology.
  • Show a real-life example of where and how you could use it.
 
The first steps toward artificial neural networks were taken 75 years ago, and in recent years they have become one of the hottest emerging technologies. As I mentioned in the previous article, the original idea was to produce a working mathematical abstraction of how a biological brain might function.
 
You don't have to be a neuroscientist to have at least a very basic understanding of how a biological brain works. It contains a large number of brain cells called "neurons", which can form connections called "synapses" with each other, based on the various signals they receive from our body over our lifetime. When you have a similar experience, similar neurons fire along those connections, so you remember the given situation more easily and react to it faster and more accurately.
 
There are many types of Neural Network branches and sub-branches nowadays, all of them trying to get as close as possible to a "perfect" solution for the given idea. The search is still ongoing: we still don't know exactly how the biological brain works, and we don't even know whether that is the best way to achieve intelligence at all. We may yet come up with an even more efficient approach than our current biological solution, as we have in many other areas of the modern industrial world.
 
Some main ANN branch examples include the "Feed Forward Neural Networks", sometimes referred to as "conventional" neural networks. This is the earliest and oldest solution, based on the idea that neuron connections are "fed forward" between neurons, so information travels through them in a simple, intuitive way, usually starting at the leftmost positions and ending up at the rightmost ones.
 
The most well-known sub-branches here include the "Convolutional Neural Networks", where the connections between neurons are filtered and grouped to simplify and scale down large amounts of information into abstracted representations. These are generally used for image recognition nowadays. Another well-known sub-branch is the "Fully Connected Neural Networks", where each neuron in a given layer is connected to every single neuron in the previous layer.
 
More modern main branch examples are the "Recurrent Neural Networks", where connections can form cycles or similar non-conventional links between neurons. Sub-branch examples include the "Bi-directional NN" and the "Long Short-Term Memory NN". The latter is generally used for speech recognition.
 
"Spiking Neural Networks" are sometimes referred to as the third generation of NNs. They activate neuron connections in a seemingly random, "spiking" way, and are probably the closest representations of the biological brain available today.
 
In this article we are going to deal with (you guessed it) the oldest and simplest one to tackle: the Fully Connected, Feed Forward Neural Network.
 
Let’s first understand, step by step, what they consist of and how they work; later on we can talk about how to use them.
 
What is a Fully Connected, Feed Forward Neural Network?
 
From the highest level, think of it as a calculating box: on one side you feed in some information, and on the other side you receive the calculated results:
 
rwurl=https://imgur.com/A0LWkLq
 
You can have more than one input and output value; technically, any number of input or output values you require, even very large ones:
 
rwurl=https://imgur.com/subBMJW
 
If you open the box, you will see all the neurons, with some layers separating them. The very first layer is the “input layer”, and each neuron there stores an input value. Similarly, the very last layer is the “output layer”, and each neuron there stores a final output value:
 
rwurl=https://imgur.com/s5iHctX
 
The layers in between are referred to as “hidden layers”. They are called "hidden" because we never see (nor do we really care) what happens inside them; we just want them to help figure out the right results for our final “output layer”. There can be several hidden layers, but usually a few are enough, as the more of them there are, the slower all the calculations become.
 
As I said before, in an FCNN every neuron in a given layer is connected to all the neurons in the previous, adjacent layer. A single connection has to be between adjacent layers; we cannot skip over a layer. So one connection between two neurons would be represented like this:
 
rwurl=https://imgur.com/x6Wk5VI
 
Connecting one neuron to all the neurons of the previous layer can be represented like this:
 
rwurl=https://imgur.com/qzTJiqO
 
After populating all the remaining connections, the network will look like this, hence the name “Fully Connected”:
 
rwurl=https://imgur.com/0RfKlUy
 
Let’s break this down some more. Probably the most interesting component here is the “Neuron”. What is it, and how does it work?
 
This can get fairly “mathy”, but I will try to spare you by avoiding the math and giving an intuitive explanation whenever I can.
 
If we focus on one neuron, we can see that it receives many values on one side, applies a summary function that adds these values up, and finally applies a “Sigmoid” function to this sum before releasing the neuron’s calculated output.
 
rwurl=https://imgur.com/IKKutPg
 
The sigmoid is an “S”-shaped function, as you can see in this graph, and its purpose is to transform the sum to a value between 0 and 1. Even if the sum turns out to be an extremely large or extremely small number, it will always be “corrected” back to somewhere between 0 and 1 by this function. We do this to simplify working with the data in the network: it is much easier to read numbers close to 1 as “perhaps yes” and numbers close to 0 as “perhaps no”.
 
rwurl=https://imgur.com/Lz82eVY
 
What do I mean by “perhaps”? As I said in the first article, neural networks by design are not meant for super-precise calculations like we would expect from normal computers, but for good approximations, and their approximations get better and better as they train more.
 
Going back to our example, let’s assume we have 3 neurons with output values somewhere between 0 and 1: 0.8, 0.3 and 0.5:
 
rwurl=https://imgur.com/HpiYEUE
 
The sum function will add all the received values up.
 
sum(0.8, 0.3, 0.5) = 0.8 + 0.3 + 0.5 = 1.6
 
After that, the neuron applies the Sigmoid function to this value, squeezing any result back to somewhere between 0 and 1, giving 0.832 as the output value of this neuron:
 
sigmoid(1.6) = 0.832
 
This is the formula of the Sigmoid function, for those who would like to see the math as well:
 
rwurl=https://imgur.com/p3Su53a
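The walkthrough above can be sketched in a few lines of Python. This is a minimal illustration of my own, not code from the article; the function names are made up:

```python
import math

def sigmoid(x):
    """Squash any real number into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-x))

# The three incoming values from the example above.
inputs = [0.8, 0.3, 0.5]
total = sum(inputs)      # 0.8 + 0.3 + 0.5 = 1.6
output = sigmoid(total)  # the neuron's calculated output

print(round(total, 1), round(output, 3))  # 1.6 0.832
```

Note how even an input sum of 100 or -100 would still come out between 0 and 1, which is exactly the "correcting" behavior described above.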
 
 
If we continue doing this over every neuron until we reach the final output layer, we get our final calculated values. But you may have realized: we would get the same output results every time for the same input values. In many practical cases we cannot modify the input values, since we receive them from some other source, and the behavior of the summary and sigmoid functions is fixed as well, yet we would still like to influence and shape the output values somehow. Out of this need, the idea of “Weights” was born: custom numbers stored on the connections between the neurons. People often refer to the connections between neurons simply as “Weights”.
 
So how do “Weights” come into play?
Each weight is multiplied by the corresponding neuron output before that value is added to the rest in the summary function. So, for example, if all the weights are 1, nothing changes:
 
rwurl=https://imgur.com/yPchuhO
 
sum (0.8, 0.3, 0.5) = 0.8*1 + 0.3*1 + 0.5*1 = 1.6
 
But if we turn these weight values up or down, the output value can be very different:
 
rwurl=https://imgur.com/idwKgeJ
 
sum (0.8, 0.3, 0.5) = 0.8*-0.5 + 0.3*2.2 + 0.5*0.4 = -0.4 + 0.66 + 0.2 = 0.46
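The two weighted sums above can be reproduced with a short sketch (my own illustration; the helper name `weighted_sum` is made up):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def weighted_sum(values, weights):
    """Multiply each incoming value by its connection's weight, then add up."""
    return sum(v * w for v, w in zip(values, weights))

values = [0.8, 0.3, 0.5]

# All weights set to 1: nothing changes, the sum is still 1.6.
print(round(weighted_sum(values, [1, 1, 1]), 1))   # 1.6

# The weights from the second example: the sum becomes 0.46.
s = weighted_sum(values, [-0.5, 2.2, 0.4])
print(round(s, 2))                                  # 0.46
print(round(sigmoid(s), 3))                         # the neuron's final output
```

The same three input values thus produce very different neuron outputs purely by changing the weights, which is the whole point of the mechanism.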
 
Now, this solution would be almost perfect, but people found out over time that there may still be cases where, even after applying heavy weight modifications all around the network, the final output values would not come close to the desired numbers, because of the design of the Sigmoid function. This is where the concept of “Bias” was born.
 
“Bias” is similar to a Weight in that it is a single modifiable, arbitrary number, but the difference is that it is applied only once per neuron, in the Sigmoid function, to translate the curve left or right.
 
Imagine a situation where your final values after applying the summary function with Weights converge to near 0, but after applying the Sigmoid function the output bumps back to around 0.5, while you would rather keep that value indicating 0. This is where a Bias can be applied: it translates the whole sigmoid curve in one direction, modifying the output greatly. Let’s see the difference with a bias of -5 or +5:
 
rwurl=https://imgur.com/Lj0Rk3N
 
As we can see, adding a Bias of -5 (red graph) to the sum before applying the Sigmoid function results in a neuron output very close to 1, while with a bias of +5 (blue graph) the output would be very close to 0.
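One way to model this "translate the curve" idea in code is to subtract the bias inside the sigmoid; the sign convention here is my assumption, chosen so that a bias of -5 shifts the curve left and pushes a near-zero sum toward 1, matching the graphs described above:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(s, bias):
    """Translate the sigmoid curve horizontally by `bias`, then squash the sum."""
    return sigmoid(s - bias)

s = 0.0  # a sum that converged to near zero, as in the example
print(round(neuron_output(s, -5), 3))  # very close to 1 (red graph)
print(round(neuron_output(s, 5), 3))   # very close to 0 (blue graph)
```

With no bias at all, the same sum of 0 would land at exactly 0.5, which is the problematic middle value the bias lets us escape.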
 
So we’re happy now: with all this flexibility we really could achieve any desired final output values!
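Putting the pieces together, a full feed-forward pass of a tiny fully connected network can be sketched like this. This is my own minimal illustration: the layer sizes, weights, and bias numbers are made up, and the bias is applied by translating the sigmoid curve as an assumed sign convention:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """Compute one fully connected layer.

    weights[i] holds one weight per input for neuron i of this layer,
    so every neuron is connected to every neuron of the previous layer.
    """
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        s = sum(v * w for v, w in zip(inputs, neuron_weights))
        outputs.append(sigmoid(s - bias))  # bias translates the sigmoid curve
    return outputs

# A made-up 3-input, 2-hidden-neuron, 1-output network.
inputs = [0.8, 0.3, 0.5]
hidden = layer_forward(inputs,
                       weights=[[-0.5, 2.2, 0.4], [1.0, -1.0, 0.3]],
                       biases=[0.0, 0.5])
final = layer_forward(hidden,
                      weights=[[0.7, -1.2]],
                      biases=[-0.2])
print(final)  # one output value between 0 and 1
```

Feeding each layer's outputs into the next layer, left to right, is exactly the "feed forward" behavior the name refers to.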
 
The basic concept of the “Fully Connected, Feed Forward Neural Network” is established. How and where could we use it?
 
Let’s take a nice practical example: we want it to read handwritten digits from 0 to 9. How can we approach this problem with our newly built Neural Network?
 
First of all, let’s state our objective: to turn any of these handwritten “three” images, or any similar ones, into the result “3”:
 
rwurl=https://imgur.com/lUsf7X9
 
 
The same goes for all these handwritten “four” images, which should result in “4”:
 
rwurl=https://imgur.com/iecL0HO
 
… and so on.
 
We need to turn all these images into input values first.
Let’s take a closer look at one of them. We can see that it is made of individual pixels: 28 rows * 28 columns of them:
 
rwurl=https://imgur.com/zAKEqpT
 
Each of these pixels has a brightness value; some are very bright and some are darker. We represent the brightest, “unused” pixels with 0.0, and as they get darker we use numbers closer and closer to 1.0, indicating that they carry some sort of “activated” value:
 
rwurl=https://imgur.com/CeYu7a6
 
If we convert all the remaining pixels to number representations as well, and write these values down in one long row, we have all the input neuron values ready to be processed by our NN, all 784 (28*28) of them!
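The digitization step can be sketched like this. The 4x4 "image" and the brightness-to-activation mapping (darker pixel means a value nearer 1.0) are illustrative assumptions of mine; a real input would be the full 28x28 grid:

```python
# Toy grayscale "image": 0 = brightest (unused), 255 = darkest ink.
# A real input would be 28x28 = 784 pixels; 4x4 keeps the idea visible.
image = [
    [  0,  30, 200,   0],
    [  0, 180, 255,  20],
    [  0, 210, 240,   0],
    [  0,  60, 190,   0],
]

# Flatten row by row into one long list of 0.0..1.0 activations.
input_neurons = [pixel / 255 for row in image for pixel in row]

print(len(input_neurons))          # 16 here; 784 for a 28x28 image
print(round(input_neurons[2], 2))  # 200/255, a fairly dark pixel
```

The resulting flat list is exactly what gets plugged into the input layer, one value per input neuron.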
 
As for the output neurons, the most straightforward choice is to have one for each desired digit (0-9), so 10 neurons in total.
 
rwurl=https://imgur.com/KkJUhGQ
 
If we plug the digitized values of the image representing the handwritten number three into the input layer, we would like to receive 0.0 on all of the output neurons except the fourth one, which should ideally be 1.0, clearly representing the number “3”. (Implying the first neuron represents “0”, the second “1”, and so on until the 10th neuron, representing “9”.)
 
rwurl=https://imgur.com/CyWDBrz
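That desired output pattern can be built like this (a small sketch of mine; the helper name `desired_output` is made up):

```python
def desired_output(digit, num_classes=10):
    """Build the ideal output layer: 1.0 at the digit's neuron, 0.0 elsewhere."""
    target = [0.0] * num_classes
    target[digit] = 1.0  # the first neuron represents 0, the second 1, ...
    return target

print(desired_output(3))
# [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Comparing the network's actual 10 output values against this ideal pattern is what tells us how wrong the network currently is.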
 
But if we do that, we will find that the output layer’s neuron values are nowhere near this and show utter garbage:
 
rwurl=https://imgur.com/30oMUWC
 
That’s because the network hasn’t been “Trained” yet.
 
“Training” the network means (re)adjusting all the Weights and Biases across the network so that, when we plug in the said input values, the network produces calculated outputs as close as possible to the desired ideal outputs.
 
We could try to manually adjust every Weight or Bias to some positive or negative value, but we would quickly realize that even with a fair number of neurons there are just so many combinations that it is humanly impossible to do so.
 
This is where the concept of “Backpropagation” comes in extremely handy.
 
Backpropagation is one of the key features of neural networks and is a form of learning algorithm. It is probably one of the most confusing concepts, however. Simplified as much as possible, the basic idea is to take that utter garbage output from the neural network, compare it to our initially desired output, and see how far each of the output values is from the desired ones.
 
This difference is called the “Error”, and once we have it, the algorithm adjusts the weights and biases accordingly, starting from the rightmost layer and working back until it reaches the input layer. We start from the back because the final output is at the back, and the Weights and Biases that directly affect that layer are in the previous layer; we then apply the same idea to each layer in turn.
 
After the Backpropagation is finished, we re-do the Feed-Forward step and see whether we got closer to the desired value, by comparing the actual and the desired numbers again. Proper training can take hundreds of thousands, or millions, of Feed-Forward and Backpropagation steps until the network is conditioned to give us numbers closest to the desired ones. We also need to do this training process for every new input value while making sure that the network keeps giving valid results for the previous input values as well. You can begin to understand that properly training a network over a large amount of input values, so that it always outputs values accurately close to the desired outputs, is extremely hard to achieve and usually takes a very large number of training steps. But this is where the whole industry is working hard, discovering creative and different ways to approach this difficult issue.
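To make the feed-forward / backpropagation loop concrete, here is a deliberately tiny sketch of mine: a single sigmoid neuron nudged by gradient descent toward a desired output. The starting weights, learning rate, and step count are made up, and a real network repeats this chain-rule step backward through every layer; this only shows the shape of one training loop:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One neuron, three inputs, squared-error loss.
inputs  = [0.8, 0.3, 0.5]
weights = [0.1, -0.2, 0.05]  # arbitrary starting weights
bias    = 0.0
desired = 1.0                # the ideal output
rate    = 0.5                # made-up learning rate

for step in range(200):
    # Feed forward.
    s = sum(v * w for v, w in zip(inputs, weights)) + bias
    out = sigmoid(s)
    # Backpropagate: chain rule for d(error)/d(weight).
    error = out - desired
    grad_s = error * out * (1.0 - out)  # d(0.5*error^2)/ds
    weights = [w - rate * grad_s * v for w, v in zip(weights, inputs)]
    bias -= rate * grad_s

final_out = sigmoid(sum(v * w for v, w in zip(inputs, weights)) + bias)
print(round(final_out, 2))  # after training, close to the desired 1.0
```

Each pass nudges the weights and bias a little in the direction that shrinks the error, which is exactly the repeated feed-forward-then-backpropagate cycle described above.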
 

 
2018.01.29 - 15:38, Mon Articles Neural Networks: Why do we care and what are they?

rwurl=https://imgur.com/FC1QvBY
Neural Networks, among similarly high-tech, sci-fi-sounding terms, are used more and more commonly in articles around the Internet.

In this article I am attempting to:
  • Give a few examples of why we should care about this technology at all.
  • Demystify terminologies like Neural Networks, Artificial Intelligence, Machine Learning and Deep Learning.
  • Classify them in simple terms: where they belong and how they relate to each other.


Let's have a quick overview of the current state of the technology:

Amazon Go
rwurl=https://www.youtube.com/watch?v=vorkmWa7He8
Last Monday, Amazon opened Amazon Go, a convenience store in Seattle. Its selling point is a cashier-less, checkout-line-less experience that greatly speeds up the whole shopping process. You enter the store by launching their app and scanning the displayed QR code at the gate. When you walk out of the store, all the bought items are charged to your Amazon account after a few moments.

The magic of this technology is inside the store. They've installed hundreds of cameras on the ceiling, so they can track and process every item's position, whenever you pick one up or put it back. Behind this technology are heavy processing power and a machine learning algorithm that can track and understand what happens in the store at any moment.

Amazon used similar machine learning technologies to suggest relevant products to potential customers, based on their previous buying or browsing behavior. This approach made Amazon the number 1 e-commerce retailer in the world.

Twitter
rwurl=https://www.youtube.com/watch?v=64gTjdUrDFQ
Project Veritas, an undercover journalist activist group, presented to the public that Twitter is perhaps using machine learning algorithms that can suppress articles, stories and tweets with certain political views and promote ones with different political views. Along a similar idea, Facebook announced that it will battle so-called "fake news" stories and suppress them from our feeds, preventing them from spreading around.

YouTube
rwurl=https://www.youtube.com/watch?v=9g2U12SsRns
YouTube is using its own machine learning implementation, called Content ID, to scan the content of every uploaded video and find the ones that break its Terms of Service or copyright law. By the way, Google is using machine learning for almost all of its services: search results, speech recognition, translation, maps, etc., with great success.

Self-Driving Cars
rwurl=https://www.youtube.com/watch?v=aaOB-ErYq6Y
Self-driving cars are another emerging market for Artificial Intelligence; a large number of companies are pushing out their own versions of self-driving algorithms, so they can save time and money for many people and companies around the world. Tesla, BMW, Volvo, GM, Ford, Nissan, Toyota, Google and even Apple are working on their solutions, and most of them aim to be street-ready around 2020-2021.

Targeted ads using ultrasound + a microphone
Ad targeting in general is a huge field nowadays, and every ad company is trying to introduce more and more creative approaches to get ahead of the competition. One lesser-known idea builds on the fact that an installed application can access most of the mobile phone's hardware, so theoretically it can easily listen to microphone input. Retail stores can emit ultrasound signals from certain products, and if that signal gets picked up by the app (for instance, the person spends more than a few seconds in front of a certain item), it can automatically report to ad companies that the user was interested in the product; a little extra push, in the form of a carefully targeted ad, may make the person decide to buy it.

Blizzard
Blizzard announced that it may ban Overwatch players for "toxic comments" on social media, like YouTube, Facebook and similar places. Gathering and processing data of this size, and making the required connections within it, certainly needs its own machine learning strategies and processing power.

Facebook
Facebook patented a technology that tracks dust and fingerprint smudges on camera lenses, so that image recognition algorithms can tell whether any of the presented pictures were taken with the same camera. They claimed they have never put this patented technology to use, but it is nevertheless a great idea, with many different application possibilities from a development perspective.

Boston Dynamics
rwurl=https://www.youtube.com/watch?v=rVlhMGQgDkY
Boston Dynamics is one of the leaders in robotics, building some of the most advanced robots on Earth. They use efficient machine learning technologies to teach their robots to perform certain tasks and overcome certain problems.

Ok… Artificial Intelligence, Machine Learning, and Neural Networks… what exactly do they mean, and how do these terms relate to each other?

We have seen that these technologies are popping up almost everywhere and becoming more and more relevant to our everyday lives, aiding or controlling them in one way or another. Reading all these “buzzwords” in technical articles around the Internet, you have probably noticed that many of these terms are used interchangeably, or without any explanatory context. So let’s demystify their meaning and properly categorize them for future reference.

First of all, let’s clarify their meanings:

Artificial Intelligence, or AI, has the broadest meaning of the three.

It usually attempts to mimic "cognitive" functions of humans and other beings, for example learning, adapting, judgment, evaluation, logic and problem solving.

Generally speaking, an AI usually does:
  • Learn - by observing, sensing, or any other way it can gather data.
  • Process - by applying logic to, adapting to, evaluating, or judging the data.
  • Apply - by solving the given problem.
     
AI can be as simple as the ghosts in Pacman, for instance. Each has its own objective, and each tries to accomplish it by observing the player's behavior and position, so it can process that data and react to it.

AI can be a chess player that tries to outsmart a human player.

AI can also be a search engine that gives you more relevant results for any of your search terms than any human ever could, given the amount of constantly changing data and human behavior across the whole Internet.

Machine Learning, or ML, again has many implementations and a fairly broad meaning.

Usually we can generalize the ideas behind it by stating: Machine Learning is a subset of Computer Science whose objective is to create systems that are programmed and controlled by the processed data, rather than specifically instructed by human programmers. In other words, Machine Learning algorithms attempt to program themselves, rather than relying on human programmers to do so.

Neural Networks, or more accurately Artificial Neural Networks, are a subset of Computer Science whose objective is to create systems that resemble natural neural networks, like our human brains, so they can produce similar cognitive capabilities. Again, there are many implementations of this idea, but generally it is based on a model of artificial neurons spread across at least three layers.

We will get into the details of the "how exactly" in the next article.

Neural networks are a great approach for identifying non-linear patterns (for linear patterns, classical computing is better): patterns where there is no clear one-to-one relation between the output and the input values. Neural networks are also excellent for approximations.

We also hear a lot about Deep Learning, and that is just one, more complex implementation of the idea of Neural Networks, involving many more layers. This creates a much greater level of abstraction than we would normally use for simpler tasks. Think of the complexity required for image recognition, search engines or translation.

We now know the general meanings behind these terms, but how do they relate to each other?

Artificial Intelligence has been around for quite some time now, and some implementations of Machine Learning are used to create much more efficient Artificial Intelligences than was possible before. Following this combining idea, Machine Learning uses the technologies of Neural Networks to implement its learning algorithms.

So, as we can see, all of these technologies can function by themselves, but they can also be combined with each other to create more efficient solutions to certain problems. Nowadays the latter is most often the case: all three technologies are combined and used together as the currently most efficient and effective solution to the given problems. Our most advanced Artificial Intelligences are created with Machine Learning algorithms that use Neural Networks as their learning and data processing mechanism.
 
rwurl=https://imgur.com/oIVNOqB
 
In summary:
  • We looked at a few examples of why we should care about this technology at all.
  • We demystified terminologies like Neural Networks, Artificial Intelligence, Machine Learning and Deep Learning.
  • We classified them in simple terms, and explained where they belong and how they relate to each other.

In the next articles I will explain in simplified steps how Neural Networks work, and will provide a programming example that any reader could implement and try out themselves. Furthermore, I will talk about the relations and differences between Artificial Neural Networks and natural neural networks (our human brain, for example). I will also talk about the concept of consciousness, as a natural question that typically follows these ideas.
 
2017.05.13 - 02:27, Sat Bugs bug: comment moving

When moving comments, the comment order and the reply references sometimes get mixed up. Whether the users use replies or not plays no role in these two issues; this has been tested as well: http://www.rewired.hu/comment/121233#comment-121233

Info that might help get to the bottom of it:
- our forum uses Drupal 7 as its base engine. All the problems related to comment moving are backend-only questions, so anyone who wants to help only needs to poke around in PHP.

- how the comment system works "under the hood" in Drupal: http://shutterfreak.net/blogs/olivier-biot/2010-06-24/rearranging-commen... (the article describes things in Drupal 6 terms, but it works very similarly in 7)

- we have two modules that I think may play a part in the comment situation, so if anyone wants to take a look at it at home in a vanilla setup, these are worth installing after a Drupal 7 install:
https://www.drupal.org/project/advanced_forum - extends the forum features, but it may have something to do with the comment ordering display, etc.
https://www.drupal.org/project/comment_mover - facilitates the comment moving functionality itself.

2017.05.13 - 01:24, Sat Test move test target - psi
2017.05.13 - 01:20, Sat Test move test - psi

h1

2017.04.29 - 09:02, Sat News Happy Easter Bunny / Happy RW Birthday / Happy Sprinkling!

rwurl=https://i.imgur.com/EuOTlTT.jpg

My days are awfully busy and short right now, I'm behind on basically everything, end-of-year exams are in progress (yes, the pictures reflect my state of mind), but better late than never. A happy Easter bunny to everyone, and a humane amount of alcohol next to the cologne; likewise, a happy RW birthday, also a week and a half late!!

For this special occasion I'm also proposing another round of passing the hat after the past long years, since we're starting to run a bit low on server funds by now. The procedure is the usual PayPal one; those who can help will find the necessary info in the Donations topic. We thank you for every contribution in the name of the Rewired team and the closely related community!

Off I go to grind through the mass of assignments due, while there's still life in me. An Easter logo will also come soon, if I can just get to it. :D

bump update:

School is finally done, along with the end-of-year rush, and I've gotten around to dealing with everything else.

I've published the detailed results of our fundraiser in the updated Google doc as well (you'll find it in the Donations topic, in the usual place). I'd also like to thank here every selfless and enthusiastic contributor - Lefty, Dulakh, eLeM, Mastodon, Neowindir and Chiller bros - for keeping RW alive. The fund reserved for costs is round again, and from the looks of it our servers will surely keep humming for a few more years! I've also handed out donor badges to anyone who didn't have one yet.

2017.01.16 - 23:13, Mon Bugs (bug) bug: image link embedding trouble

This bug comes from the media embedding feature.

If I turn off the classic image embedding (the old "img" tag, which the link-based image embedding also uses), then the "img" tag cannot be abused, i.e. animated gifs cannot be dropped in automatically alongside the media embedding, but unfortunately our image-link feature doesn't work either.

By the image-link feature I mean this trick, in case it isn't clear to someone:
rwurl=http://i.imgur.com/3LN7Rf9.png

If I leave the classic image embedding switched on, the "img" tag can easily be abused, but the link-based image embedding feature doesn't break. The "media" tag is much better code in any case, since it can work around direct-link protection and the like, and it also handles playback of a bunch of other formats, but naturally it isn't universally compatible with all other code out of the box.

The solution is to somehow make the link feature compatible with the media tag too, so I could use the "media" tag with linking instead of the old "img". As a final step, maybe I could even merge the link button with the media insert someday, to make the interface even simpler.

For now I've chosen to re-enable the classic image embedding in the background so the image-link feature doesn't break, but please use exclusively the media tag for embedding any normal image/media format, so we avoid the abuse clusterfuck.

2016.06.22 - 01:29, Wed News REWiRED was unreachable for a few days

I guess some of you noticed that Rewired was completely unreachable for a few days. The reason is that our web hosting provider was (supposedly) hit by a DDOS attack somewhere, and their tech team started switching off the hosted sites one by one to figure out where the trouble was. I assume they managed to solve the problem after a while, but they haven't said a word since about what the problem was, nor left even a minimal "sorry for the downtime, guys" message by email. In a situation like this we unfortunately can't even tell the rest of you what's going on, since along with the site all the developer tools were unreachable too, so we couldn't even put up a warning message telling people not to stress, to have a little patience, and that people are working on the problem. In the end it turned out that if Vajk hadn't gotten fed up and called them, we might still be offline now, since they quite simply "forgot" to switch us back on. :D

We've already started talking about a potential migration away from the current web hosting company. Their big negative is that, as we've experienced several times, we're basically in a "tolerated/ignored" status with them; we get absolutely no priority when we ask for help. With the recent slowdown problems, their typical attitude was to suggest we move to a private server with them at 10x the price, rather than spend a little energy getting to the root of the problem. But of course I understand there's probably an entirely business-driven reason for it: thanks to Strato bro we get the service at a very friendly price from one of his friend's companies, so they feel no urge to stress about anything for our sake alongside the big clients. Of course it's questionable how professional that attitude is, since we don't live there for free; we may pay less, but we still pay every year all the same.

For similar troubles and other (expansion) cases we also planned to open a Facebook page where we can notify people about what's going on even if the main site is offline; it would also give our forum a stronger social network presence, but how that turns out is a tale for the future. For now we have two other contact platforms, so if a problem this big hits us again, we can keep organizing what to do next.

Most actively we use the closed Rewired Facebook group for this purpose; whoever wants to join, just sign up and one of the many admins will add you: https://www.facebook.com/groups/215567891805226/

Then there's also the HW/RW Steam group: http://steamcommunity.com/groups/HW_raksz

I get that many of you don't want to sign up for either, but would still like to know what the comments about the potential move have been so far; for you, I've saved them into images, in order:
- http://i.imgur.com/V6M4n0g.png
- http://i.imgur.com/fQUYTzb.png
- http://i.imgur.com/bbBLYJC.png
- http://i.imgur.com/io3PdSf.png

As you can see, we're currently at the stage of weighing both options. The advantage of staying is that it remains very cheap, but we may have to face similar troubles in the future too. That said, I and a few others share the opinion that a few days of trouble every 1-1.5 years still fits within the tolerance threshold of staying, in exchange for the low price. All this assuming, of course, that such things will happen regularly from time to time, and not that the host just hit a one-off snag and that's that.

The other option is to move because we're fed up with them, but then obviously we have to find another place and chip in for the missing price difference.

This topic exists so we can talk through every related thought together.

2016.07.05 - 20:35, Tue Tech HQ DRM discussions

About anything related, even theoretical or philosophical standpoints.

2021.01.24 - 04:02, Sun Site matters List of embed-supported sites and related matters

I thought it would be good to have a topic like this, so people don't have to guess or keep asking what can and cannot be embedded with the engine. Any other bugs and related topics can come here too; at least it will all be tracked in one place.

- Coub: video
- Youtube: video
- Streamable: video
- SoundCloud: track
- Bandcamp: track, album
- Facebook:
-- video: Only videos under 10 minutes are supported, but in exchange they remain permanently available embedded on RW in the long run, even if they get deleted from the source link, because the engine saves them over to Streamable.
-- image: The image, similarly to video, remains available long-term even if it gets deleted from the source, because the engine saves it over to Imgur.
-- album: Only the first image will be shown.
- Vimeo: video
- Imgur: video, image, gif, album, gallery, tag
- Gfycat: video, gif
- Redgifs: video, gif
- Reddit: video, image, gif, gallery
- Giphy: video, gif
- Instagram: video, image, album
- 9gag: video, image
- Twitter: video (same situation as with Facebook above), image, album
- Spotify: album, track (for both, only a 30-second preview can be played)
- Any random site with a direct link to an image (jpg, png, gif, etc.).
- Any random site with a direct link to a video (MP4, WebM).
- Any random site with a direct link to audio (mp3, ogg, wav).
- "Yo-dawg" embeds are supported, i.e. site1 embedded on RW, where site1 itself embeds site2, and so on. More here: https://rewired.hu/comment/268045#comment-268045
