I was recently thinking about the common failings of attempts at the Turing test: lack of spontaneity, nonsensical answers, specific and unchanging topics, things like that. It all boils down to two things: lack of content and, as a result, a lack of correct and proper answers. Let's face it, gathering the amount of data required to mimic the millions of possible outcomes from any of the 100 billion neurons in your brain is never going to happen. Or is it? I think Facebook could already have it.
First, a bit about what the Turing test is and involves, in case you're unfamiliar. The Turing test was devised by the British scientist Alan Turing (you may know him for his work cracking German codes in WW2) to test whether or not a computer has intelligence on a par with a human's. In other words, it determines whether we could class a computer as a person, under the assumption that the definition of a 'person' rests solely on the mind, and not on any physical qualities (from hair length to being human).
So how is this test carried out? The main and original test is the 'Imitation Game.' In it there are three players: A, B and C. The computer is player A, and humans fill positions B and C (player C is unaware of who holds which position). Player C has to try to work out which player is male and which is female. Player A has to try to trick player C into making the wrong decision, and player B has to try to help C make the right one. They are confined to text communication only. The success of the experiment (in proving person-hood in an artificial being) is determined by comparing the results when player A is a man with the results when player A is a computer. The diagram might help you if you're struggling.
It's worth noting that there is a variation in which player C has to guess which player is a human and which is a computer. This is perhaps more realistic because it allows any and all topics to be discussed, making the test more accurate as a whole.
That's enough on the Turing test, on with the show!
My theory goes a little something like this (if someone has already thought of this, my apologies):
Research suggests that if we were to attribute a digital capacity to our brains, it would vary between 1 and 1,000 terabytes (1 terabyte = 1,000 gigabytes) of data (source), which means there must be some sort of finite limit on all of the possible outcomes of input to a person, right? Granted, that might be in the hundreds of trillions, but then apply logic: people are predictable. Countless illusionists and psychiatrists have shown that it is possible to predict one's actions by accounting for other factors. Hometown, education, social trends, music: all of these and more point to our predictability in a specific situation.
So what's this got to do with Facebook? If you're asking that right now, then I'm slightly worried.
In one single day, Facebook records over 500 terabytes of data on its 950 million users. Data on what you click and when. Data on what you click when you're talking about a certain thing in a supposedly private chat. Data on what you say when you're viewing a certain thing. All sorts.
As you can see, Facebook records enough data to be able to say with near certainty what a given person is going to do in a given situation. Hell, Facebook even knows what music you like and what TV shows you watch (and probably when, where and with whom you watch them), even who you went on holiday with in 2005!
The general idea is, Facebook knows a lot about 950 million people, and in my opinion, they know enough to be able to recreate users as machines quite accurately.
But what if they pooled all of the data from every user? Or better still, from the most active 100 million users who share at least one specific common interest. Suddenly they have hundreds of thousands of terabytes of data that could be combined to make one unique personality, in a computer.
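To make the pooling idea concrete, here's a minimal sketch, assuming each user's data could be boiled down to a tally of interests. Everything here is hypothetical: the users, the interest counts, and the chosen common interest are made up for illustration.

```python
from collections import Counter

# Hypothetical per-user interest tallies, standing in for real profile data.
users = [
    Counter({"music": 5, "films": 2}),
    Counter({"music": 3, "holidays": 4}),
    Counter({"films": 1, "music": 2}),
]

# Pool every user who shares one specific common interest ("music")
# into a single composite profile: one combined "personality".
pooled = sum((u for u in users if "music" in u), Counter())

print(pooled.most_common(1))  # the composite personality's strongest trait
```

The composite profile is just the element-wise sum of the individual tallies, so dropping a user really does shrink the pool directly, which is the trade-off volunteering would create.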
And there you go: there's all the data you'll ever need. All you have to do now is write a few algorithms to determine what to say and when to say it based on the data collected (for example, looking at how people replied to certain messages in private conversations), apply some sentiment analysis for accuracy, and apply some sort of ranking algorithm to ensure the correct response is given. Then I (personally) think you'd have a pretty strong AI that might just give the test a run for its money.
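As a rough sketch of how such a reply-ranking step might look: the toy corpus below stands in for harvested (message, reply) pairs, and a plain bag-of-words cosine similarity stands in for the real ranking and sentiment analysis. All of the data and function names are hypothetical.

```python
import math
from collections import Counter

# Toy (message, reply) pairs, standing in for harvested chat data.
CORPUS = [
    ("do you like this band", "yeah I love them, saw them live last year"),
    ("what are you watching tonight", "probably that new crime drama again"),
    ("where did you go on holiday", "we went to Spain in 2005, it was great"),
]

def bag_of_words(text):
    # Crude tokeniser: lowercase, split on whitespace, count words.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_reply(incoming):
    # Rank stored replies by how similar their original prompt is to the
    # incoming message, and reuse the reply to the closest match.
    scored = [(cosine(bag_of_words(incoming), bag_of_words(msg)), reply)
              for msg, reply in CORPUS]
    return max(scored)[1]

print(best_reply("do you like that band"))
# → yeah I love them, saw them live last year
```

With hundreds of thousands of terabytes of real conversations instead of three toy pairs, the same retrieve-and-rank shape is what would let the machine produce plausible, in-character answers.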
Of course this'll only ever be theoretical. Facebook already tramples on its users' privacy without harvesting it for a computer project, and they'd have lawsuits coming out of every orifice. The only way would be to ask users to volunteer, but then you're losing masses of data, the random factor, all sorts; you'd cut the chances tenfold for every user lost.
Anyway, that's what I've been thinking about this evening. I've given myself a minor headache. Happy days.