
AI


Lil


@hazard

 

Watch it to the end - or just the very end:

 

 

One more thing: I don't share Kurzweil's (and Minsky's) optimism at all, nor many of their views. Faith in saving the world through the development of intelligent machines is foreign to me. However, labeling people who disagree with tags like "charlatan" is just as foreign to me, if those labels aren't backed by arguments, or at least by some authority from the field in question. I disagree with Kurzweil, but I know the field incomparably less well than he does, and, more importantly, I'm aware of its incredible pace of development.

 

As for the technological prerequisites for Kurzweil's fantasies, there we're on really slippery ground. I've heard the story that Moore's law is just about to stop holding too many times to take the continued acceleration of technological progress for granted. Also, nobody (except perhaps Kurzweil) dares to predict the start of the era of quantum computers. That could happen in 3 years, in 10, in 50, ... (or never, if something serious has been overlooked in our understanding of physical laws). However, when it does happen, all the rules of the game change from the ground up.

 

I used the religious angle in this discussion in the same sense Minsky used it here - if our reason, consciousness, feelings, ... are not a product of our physical body, the only remaining option is that they are based on the spiritual.


I meant the stories about the singularity. That's the topic Kurzweil is best known for in public, and as far as I know, Google doesn't pay him to work on that (there was hype when they hired him, "Google is investing in the singularity" and so on). Between deep learning that "learns" to recognize patterns and Kurzweil's stories about merging humans and computers into some immortal superintelligence there is a chasm several light-years wide, I'd say.

Sure, of course - I mean, I wasn't even commenting on his stories or your take on them; in other words, I agree. :)

 

You surely use Mathematica or some similar program for symbolic "computation"; those are classic examples of heuristic search.

No, I don't use Mathematica, nor any similar program. :huh:

Though I don't have a GPU cluster in my apartment either (an upgrade is in the works :happy:).

 

In other areas both kinds of methods are used, often in symbiosis. For image recognition (people, objects, ...), neural networks (ML) currently rule. It's quite a different matter if you need to build a model of the physical world from an image in order to, say, plan a robot's movement based on it. A good example of the symbiosis is the program that beat the world champion at Go; chess, on the other hand, is pure non-ML AI.
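A minimal sketch of the kind of classic, non-ML game-tree search a chess engine is built on (plain minimax; the move generator and evaluation function here are hypothetical placeholders, and real engines add alpha-beta pruning and much more):

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    # Classic game-tree search: no learning, just exhaustive look-ahead plus
    # a hand-written evaluation heuristic applied at the leaves.
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           legal_moves, apply_move, evaluate) for m in moves)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       legal_moves, apply_move, evaluate) for m in moves)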

 

I'm familiar with all the examples mentioned above. Maybe our definitions of ML differ; yours seems a bit narrower than mine, since you separate the mentioned symbiosis out of ML (the Go program uses deep neural networks: supervised, then reinforcement learning, i.e. it presumably counts as ML for us laypeople).

Edited by mei


 

I'll watch the clip later, thanks.

 

Kurzweil is a snake-oil salesman*. I formed this opinion by reading criticism from people who actually are experts in the fields Kurzweil invokes when he puts out his predictions.

 

Here a couple of people have collected a lot of criticism (with links) from people who should know what they're talking about. There's also an article here from the New Yorker, written by a professor of cognitive science at NYU. Google will give more examples.

 

And here you have a guy who actually claims that humans will partially "cyborgize", and who actually works on such things, but who says a computer cannot replace the brain:

 

 

But Nicolelis is in a camp that thinks that human consciousness (and if you believe in it, the soul) simply can’t be replicated in silicon. That’s because its most important features are the result of unpredictable, nonlinear interactions among billions of cells, Nicolelis says.
 
“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

 

Does Moore's law have anything to do with our ability to build "strong AI" or a "thinking machine"? I'm not sure it does. There's this idea that if we manage to cram a zillion transistors onto a chip, we'll get a thinking, conscious machine... on what basis is that claimed? On the idea that the brain is really, somehow, a big computer. But there's no proof of that. And what we do know about the brain tells us its structure is very different from a computer's. What does it matter, then, whether Moore's law keeps holding or not? It may be completely irrelevant.

 

Further, will quantum computers really change everything? As far as I understand, quantum computers (once they become practical) will be able to solve certain problems unbelievably fast (doing some crazy decryption and the like), but in other cases they will be slower than classical computers (or even unusable). For instance, see what's written here by a guy signed as

 

Allan Steinhardt, PhD, co-author "Radar in the Quantum Limit", formerly DARPA's chief scientist

 

 

Quantum computers will never replace classical computers. Never ever no way! They have awesome powers but also horrific theoretical limitations.
 
Among their limitations:
 
No ability to monitor computations directly or store results (see no cloning theorem)
No ability to interact with other computers or the internet
Limited to time reversible unitary operations making many tasks messy 

 

See also this article, for instance. There's also this paper (it is from 2005, true, but so far I haven't managed to find anything that refutes it).

 

A separate question again is whether quantum computers, if they exist, would even be useful for AI. Maybe they would, if Penrose is right and quantum phenomena crucial for consciousness and the like occur in the brain (though as far as I know, that hasn't been proven in any way). If he isn't, and the brain has nothing to do with quantum phenomena but is all "classical" physics, how then do quantum computers necessarily help?

 

* - when he talks about the singularity, that is - he has, as I said, plenty of real and tangible contributions to technology. I think he is a typical example of a very intelligent person, even a genius, with an extreme fixation. He's neither the first nor the last of that kind.


1) How does Minsky's position fundamentally differ from Kurzweil's? Minsky didn't dare put years on his predictions? Still, the Odyssey takes place in 2001, which, when the film was made, was a very optimistic prediction. The bit about the AI winter is completely beside the point, 40 years of hindsight. I recently watched Patrick Winston's AI course (MIT). The course was recorded in 2011, with the lectures on neural networks replaced by newer ones (from 2015, I think). Winston admits he simply underestimated the development of neural networks five years earlier, while Minsky gets blamed for underestimating them back in the seventies.

 

 

 

Minsky wasn't innocent regarding the AI winter, even though people try to smooth that over - he is a big name, after all. There were major attacks on Rosenblatt and on the whole concept of multi-layer neural networks, fights over research funding, uncollegial tripping-up of rivals...

 

A quote from Quora:

 

 

Minsky was right about the hype, irritated that no formal reasoning was supporting the claims, that models allowed weights to grow without limit, that no limitation was being considered, and that no cost was estimated to compare with non-connectionist methods. He said so countless times in interviews.

He wrote in Perceptrons:

“the appraisal of any particular scheme of parallel computation cannot be undertaken rationally without tools to determine the extent to which the problems to be solved can be analyzed into local and global components”

But not everything in Perceptrons is proven; one of the most important statements is not proved and is a belief. Minsky wrote it in section 13.2 of Perceptrons, in so many words:

“The perceptron has shown itself worthy of study despite (and even because of!) its severe limitations. It has many features to attract attention: its linearity; its intriguing learning theorem; its clear paradigmatic simplicity as a kind of parallel computation. There is no reason to suppose that any of these virtues carry over to the many-layered version. Nevertheless, we consider it to be an important research problem to elucidate (or reject) our intuitive judgement that the extension is sterile. Perhaps some powerful convergence theorem will be discovered, or some profound reason for the failure to produce an interesting “learning theorem” for multilayered machine will be found.”

While saying it was worthy, he never tried to give any formal proof for it.

He only briefly studied a very restricted class of multilayered perceptrons called Gamba machines, what he called “two layers of perceptrons”.

He discarded neural nets with loops as redundant, already covered by the theory of automata, and even proved in his book Computation: Finite and Infinite Machines that a net composed of McCulloch-Pitts neurons under certain conditions was equivalent to a finite automaton.

Let's remember that McCulloch-Pitts neurons are not suitable for the backpropagation algorithm, as their transfer function is not differentiable. In fact, using differentiable transfer functions is a kind of Kobayashi Maru cheat, an almost obvious one once you think of it.

While several of his opinions about the hype were justified AT THE TIME, and there was bogus research on perceptrons, we cannot ignore that there was a battle for research funds involved too (detailed in Margaret Boden's book, referenced below).

One thing is certain: the influence of the use of perceptrons made him uncomfortable. Again in Perceptrons:

“Appalled at the persistent influence of perceptrons (and similar ways of thinking) on practical pattern recognition, we determined to set our work as a book. Slightly ironically, the first results obtained in our new phase of interest were the pseudo-positive applications of stratification.”

It took months to explain why they had initially failed. This passage shows that he had the intent of attacking it even before beginning, and insisted on this position even after initially failing. This shows, as Margaret Boden says, “the hint of the passion that perfused the early drafts” (reference below).

Minsky had been circulating copies of their work for a while, but with a lot of vitriol against Rosenblatt. People convinced him to expunge the attacks and remove them from the final version. In fact, the book is dedicated to Rosenblatt, by then dead.

Let Margaret Boden explain it in her book Mind as a Machine:

“However that may be, and even though the later printings (after Rosenblatt’s death) were dedicated to Rosenblatt’s memory, it’s pretty clear that ‘the whole thing was intended, from the outset, as a book-length damnation of Rosenblatt’s work and… neural network research in general’ (Hecht-Nielsen again:ibid.). Eventually Minsky and Papert followed ‘the strong and wise advice of colleagues’, and expunged almost all of the vitriol.

Those wise colleagues weren’t trying to protect Rosenblatt. On the contrary, they wanted to ensure that the damning was all the more effective for being seen as objective”

While not the only thing that stopped perceptron research, it was always cited as justification, normally by people who never read or understood his book.

It was the very same non-rational behavior that had irritated him about the perceptron hype in the first place, but it never irritated him when it was used against connectionism.

Besides, research in neural nets did not end; it continued, with more discretion (Margaret Boden's book documents it).

He stood against connectionism to the end of his life, citing pathetic errors in perceptron research (like the one where a perceptron, trained on a biased training set, keyed on the overall brightness of pictures while trying to detect images of camouflaged tanks) while ignoring its successes.

Today connectionism is one of the main lines of research in machine learning and AI. And the “extension” proved its value, and the “learning theorem” was discovered.

If we are going to rewrite history and ignore that even big researchers can be partly wrong and are motivated not only by their rationality and knowledge but also by preconceptions, jealousy and greed, being just human, then it is better to erase all their books so we cannot prove it by citing them.

Polemics aside, Perceptrons is important, well written and a must-read. But just like Finnegans Wake and Ulysses, it is much cited but rarely actually read. Let's not pretend, though, that it was perfect, complete, comprehensive or without motivations beyond pure research. He admits in Perceptrons:

“We are not convinced that the time is ripe to attempt a very general theory broad enough to encompass the concepts we have mentioned and others like him”

Instead he chooses to study thoroughly a small set of specific situations.

They considered only locally connected attributes, with no good reason. This greatly reduced the generality of the neural nets, much like considering a neuron in isolation, especially when there is only one layer to do the converging of information. The argument that all biological neurons have a limited radius does not stand, since several local functions repeatedly applied layer after layer can compute global predicates. That cannot be done if one (almost maliciously) considers only one-layer systems.

In 1988, in the second edition of the book, they said that nothing had changed, even though it had been proved by theorems that a 3-layer network could solve any problem, the parity problem had been solved, and a “powerful convergence theorem” had been found.

Let’s not pretend that all research funding and research directions are led exclusively by rational reasons. I think the subject was extensively studied by Kuhn in his The Structure of Scientific Revolutions.

I cite two references to better answer your question:

Olazaran, Mikel. “A Sociological Study of the Official History of the Perceptrons Controversy.” Social Studies of Science, Vol. 26 (1996), 611–659 (pay-walled unfortunately, but it is good).

Boden, Margaret A. Mind as a Machine: A History of Cognitive Science. A great, comprehensive, fantastic and very long (more than 1600 pages) two-volume account of the history of cognitive science. It has an extensive discussion of this specific subject on page 911 of Volume 2. Very expensive, but worth it.

Edited by slow

I'm familiar with all the examples mentioned above. Maybe our definitions of ML differ; yours seems a bit narrower than mine, since you separate the mentioned symbiosis out of ML (the Go program uses deep neural networks: supervised, then reinforcement learning, i.e. it presumably counts as ML for us laypeople).

Yes, all of that, and quite a bit more, falls under ML. ML is anything where a program learns from data, with outside help or without it.
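A minimal sketch of that distinction, assuming scikit-learn is available (the dataset and models are purely illustrative, not anything discussed in the thread):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the program is told the right answer (y) for every example.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised predictions:", clf.predict(X[:3]))

# Unsupervised: no labels at all, the program looks for structure on its own.
km = KMeans(n_clusters=3, n_init=10).fit(X)
print("unsupervised cluster labels:", km.labels_[:3])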

 

This Go program is not based on deep neural networks, if that's what you mean. I haven't found the exact architecture of the program, but the analyses I read right after the victory said that alongside classical AI it also uses ML (maybe DNNs too).


@hazard

 

Read what I actually wrote. Nowhere did I write that there is no professional criticism of Kurzweil's views; there is, it is plentiful, and in my opinion justified. What bothers me is the use of the term "charlatan" in your post, as well as the tone in other posts ("don't put him and Minsky in the same sentence" - well, damn it, I have to when they hold similar views). The conversation I posted ends with:

 

Kurzweil: Do you think the singularity is near?

Minsky: It depends on what you mean by near, but it is possible within your lifetime.

 

As for Moore's law, I brought it up because one of the (serious) criticisms of Kurzweil is that he relies too much on predictions based on that law. That is mentioned even in the non-hit-piece article you posted.

 

As for quantum computers, it's true that the currently prevailing opinion is that they won't be suitable for all applications; however, it is fairly certain that they will be suitable for a good number of tasks in the AI domain. This is supported by the application (and, as far as is known, the design) of the commercial D-Wave machines. https://en.wikipedia.org/wiki/Quantum_annealing
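For illustration, the kind of problem a quantum annealer targets is QUBO: minimizing x^T Q x over binary vectors x. The toy instance and brute-force solver below are hypothetical and run on an ordinary computer; an annealer would instead sample low-energy assignments of x:

from itertools import product

# An arbitrary toy QUBO: diagonal entries are biases, off-diagonal are couplings.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0,
    (0, 1): 2.0, (1, 2): -1.5,
}

def energy(x):
    # x^T Q x for a binary vector x, written as a sum over the sparse Q.
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(product([0, 1], repeat=3), key=energy)
print("lowest-energy assignment:", best, "energy:", energy(best))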


@slow

 

I brought Minsky into the story only to defend Kurzweil against the charlatanism thesis. Only a few days ago did I hear (more likely, I had heard it before but completely forgotten) the story about his responsibility for the "AI winter".

 

EDIT: Personally, I find the story about the connection between Gödel's theorem and the strong AI hypothesis far more interesting than the story about Kurzweil.

Edited by Aion

 

 

Personally, I find the story about the connection between Gödel's theorem and the strong AI hypothesis far more interesting than the story about Kurzweil.

 

I agree. Of all the well-known authors who have dealt with that question, the most grounded, to me, has always been Gregory Chaitin. He's a delight to watch and to listen to. He brought it all together, both theory and practice: from work at the IBM Watson center, through theoretical contributions to the theory of algorithms and complexity, to writing books aimed at a broader computing audience.

 

For example, his essays on Gödel and Turing:

 

http://bookzz.org/book/503591/2982c5

 

What you mentioned as a problem, he sees as a random event:

 

 

Chaitin claims that algorithmic information theory is the key to solving problems in the field of biology (obtaining a formal definition of 'life', its origin and evolution) and neuroscience (the problem of consciousness and the study of the mind).

In recent writings, he defends a position known as digital philosophy. In the epistemology of mathematics, he claims that his findings in mathematical logic and algorithmic information theory show there are "mathematical facts that are true for no reason, they're true by accident. They are random mathematical facts".

Edited by slow
Randomness as the central phenomenon:
In Chaitin’s AIT, undecidability and uncomputability take centre stage. Most mathematical problems turn out to be uncomputable. Most mathematical questions are not, even in principle, decidable. “Incompleteness doesn’t just happen in very unusual, pathological circumstances, as many people believed,” says Chaitin. “My discovery is that its tendrils are everywhere.”

In mathematics, the usual assumption is that, if something is true, it is true for a reason. The reason something is true is called a proof, and the object of mathematics is to find proofs, to find the reason things are true. But the bits of Omega—AIT’s crowning jewel—are random. Omega cannot be reduced to anything smaller than itself. Its 0s and 1s are like mathematical theorems that cannot be reduced or compressed down to simpler axioms. They are like bits of scaffolding floating in mid-air high above the axiomatic bedrock. They are like theorems which are true for no reason, true entirely by accident. They are random truths. “I have shown that God not only plays dice in physics but even in pure mathematics!” says Chaitin.

(This is a reference to Einstein. Appalled by quantum theory, which maintained that the world of atoms was ruled by random chance, he said: “God does not play dice with the universe.” Unfortunately, he was wrong! As Stephen Hawking has wryly pointed out: “Not only does God play dice, he throws them where we cannot see them.”)

Chaitin has shown that Gödel and Turing’s results were just the tip of the iceberg. Most of mathematics is composed of random truths. “In a nutshell, Gödel discovered incompleteness, Turing discovered uncomputability, and I discovered randomness—that’s the amazing fact that some mathematical statements are true for no reason, they’re true by accident,” says Chaitin.

Randomness is the key new idea. “Randomness is where reason stops, it’s a statement that things are accidental, meaningless, unpredictable and happen for no reason,” says Chaitin.

Chaitin has even found places where randomness crops up in the very foundation of pure mathematics—“number theory”. “If randomness is even in something as basic as number theory, where else is it?” says Chaitin. “My hunch is it’s everywhere.”

Chaitin sees the mathematics which mathematicians have discovered so far as confined to a chain of small islands. On each of the islands are provable truths, the things which are true for a reason. For instance, on one island there are algebraic truths and arithmetic truths and calculus. And everything on each island is connected to everything else by threads of logic so it is possible to get from one thing to another simply by applying reason. However, the island chain is lost in an unimaginably vast ocean. The ocean is filled with random truths, theorems disconnected forever from everything else, tiny “atoms” of mathematical truth.

Chaitin thinks that the Goldbach conjecture, which has stubbornly defied all attempts to prove it true or false, may be just such a random truth. We just happened to have stumbled on it by accident. If he is right, it will never be proved right or wrong. There will be no way to deduce it from any conceivable set of axioms. Sooner or later, in fact, the Goldbach conjecture will have to be accepted as a shiny new axiom in its own right, a tiny atom plucked from the vast ocean of random truths.

In this context, Calude asks an intriguing question: “Is the existence of God an axiom or a theorem?”

Chaitin is saying that the mathematical universe has infinite complexity and is therefore not fully comprehensible to human beings. “There’s this vast world of mathematical truth out there—an infinite amount of information—but any given set of axioms only captures a tiny, finite amount of this information,” says Chaitin. “This, in a nutshell, is why Gödel’s incompleteness is natural and inevitable rather than mysterious and complicated.”

Not surprisingly, the idea that, in some areas of mathematics, mathematical truth is completely random, unstructured, patternless and incomprehensible, is deeply upsetting to mathematicians. Some might close their eyes, view randomness as a cancer eating away at mathematics which they would rather not look at, but Chaitin thinks that it is about time people opened their eyes. And rather than seeing it as bad, he sees it as good. “Randomness is the real foundation of mathematics,” he says.
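For reference, a sketch of the standard definition behind the "bits of Omega" discussed above (here U denotes a fixed prefix-free universal Turing machine, |p| the bit-length of program p, and K prefix-free Kolmogorov complexity):

% Chaitin's halting probability for a fixed prefix-free universal machine U:
\Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}

% "True for no reason" is made precise via incompressibility: there is a
% constant c such that, for all n, the first n bits of Omega satisfy
K(\Omega_1 \Omega_2 \cdots \Omega_n) \ge n - c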

 

 

Edited by slow

 

 

This Go program is not based on deep neural networks, if that's what you mean. I haven't found the exact architecture of the program, but the analyses I read right after the victory said that alongside classical AI it also uses ML (maybe DNNs too).

 

AlphaGo (if we mean the same program) uses deep NNs trained for move selection and position evaluation, then combines the NNs with a Monte Carlo tree search algorithm (symbiosis :)). The Google folks published a paper on it in the journal Nature (it can be found for free online on other sites).
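Roughly how those two pieces fit together, as a sketch rather than AlphaGo's actual code: "policy_value" below is a dummy stand-in for the trained network (it returns uniform move priors and a random value), the toy game rules are hypothetical, and a real two-player version would also flip the sign of the value between players:

import math
import random

# Dummy stand-ins: a real system would plug in the game's rules and a trained
# deep network here, so the search machinery can run end to end.
def legal_moves(state):
    return [] if len(state) >= 5 else ["a", "b", "c"]

def apply_move(state, move):
    return state + (move,)

def policy_value(state):
    moves = legal_moves(state)
    priors = {m: 1.0 / len(moves) for m in moves} if moves else {}
    return priors, random.uniform(-1, 1)  # (move priors, value of the position)

class Node:
    def __init__(self, prior):
        self.prior = prior        # prior probability from the policy output
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}        # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def mcts(root_state, simulations=200, c_puct=1.5):
    root = Node(prior=1.0)
    for _ in range(simulations):
        node, state, path = root, root_state, [root]
        # 1) Selection: descend the tree with a PUCT-style score that mixes
        #    the running value estimate with the network's prior.
        while node.children:
            total = sum(ch.visits for ch in node.children.values())
            move, node = max(
                node.children.items(),
                key=lambda kv: kv[1].value()
                + c_puct * kv[1].prior * math.sqrt(total) / (1 + kv[1].visits),
            )
            state = apply_move(state, move)
            path.append(node)
        # 2) Expansion + evaluation: ask the (stand-in) network instead of
        #    playing a long random rollout.
        priors, value = policy_value(state)
        node.children = {m: Node(prior=p) for m, p in priors.items()}
        # 3) Backup. NOTE: a real two-player version flips the sign of the
        #    value at alternate depths; this sketch skips that detail.
        for n in path:
            n.visits += 1
            n.value_sum += value
    # Play the most-visited root move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("chosen move:", mcts(()))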


What bothers me is the use of the term "charlatan" in your post, as well as the tone in other posts

 

Well, OK, a difference in perspective :) My opinion is that Kurzweil isn't worth serious discussion and that, when he starts theorizing about the brain and nanotechnology, for example, he's talking nonsense. That opinion is backed by reading criticism from people who are experts. For me, Kurzweil is on the same team as Deepak Chopra, von Däniken, and that Englishman who wrote books and made documentaries about a lost civilization that vanished when the whole Antarctic continent shifted from one place to another. I mean, I really have nothing more to say on the subject.


AlphaGo (if we mean the same program) uses deep NNs trained for move selection and position evaluation, then combines the NNs with a Monte Carlo tree search algorithm (symbiosis :)). The Google folks published a paper on it in the journal Nature (it can be found for free online on other sites).

We do mean the same program; I missed that paper, I only read some analyses right after the match. I'll take a look, it interests me, especially the MCTS part. Thanks.

  • 3 months later...

The CEO of Mercedes gave an interesting interview. It's here that I first read that the IBM Watson platform exists, that it is already operational, and that in the future it will replace the legal profession. I checked: only two forum members have mentioned this platform, Idemo and Slow. I really liked everything the Mercedes CEO announced.

 

http://www.blic.rs/vesti/ekonomija/direktor-mercedesa-predvidja-buducnost-necemo-imati-vozacke-zivecemo-100-godina-i/qkq093m
