Google 'Incepts' Dreams Into Its AI with Stunning Results
The lead engineers behind Google's A.I. are using "inception" to test out their artificial neural networks - a strategy that has led to some very beautiful, and slightly disturbing, artwork.
Google's artificial neural networks work much like the ones in your brain; they're trained to sort and classify different kinds of information. Google uses these networks in software such as Google Photos; for instance, if you search the word "dog," the network will pull up every image it can find that contains the characteristics it has learned to associate with "dog" - four legs, tail, and so on.
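To make the classification idea concrete, here is a minimal sketch, not Google's actual system: a toy scorer that checks an image's detected characteristics against weights it has supposedly learned for "dog." The feature names, weights, and threshold are all hypothetical, chosen purely for illustration.

```python
import numpy as np

# Hypothetical learned weights for the characteristics the article mentions.
# Feature order (made up): [has_four_legs, has_tail, has_fur]
LEARNED_DOG_WEIGHTS = np.array([0.9, 0.8, 0.7])

def dog_score(features: np.ndarray) -> float:
    """Weighted sum of detected features; higher means more dog-like."""
    return float(features @ LEARNED_DOG_WEIGHTS)

def is_dog(features: np.ndarray, threshold: float = 1.5) -> bool:
    """Classify as 'dog' when the score clears an (arbitrary) threshold."""
    return dog_score(features) > threshold

# A photo where all three characteristics were detected...
print(is_dog(np.array([1.0, 1.0, 1.0])))  # True
# ...versus one where none of them were.
print(is_dog(np.array([0.0, 0.0, 0.0])))  # False
```

A real network learns these weights from data rather than having them written in by hand, which is exactly what the training process described next accomplishes.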
Google explained in a blog post, "we train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want." So, in the aforementioned example, that network would have been shown millions of pictures of dogs, and thus come to understand which properties they all have in common. The artificial neural network would then use these parameters to search Google's massive database of photos in order to come up with accurate results.
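The "gradually adjusting the network parameters" step can be sketched with a tiny, hypothetical stand-in for a real network: a single neuron trained by gradient descent on made-up two-feature data. Google's actual networks are deep convolutional models trained on millions of images, but the adjustment loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training examples": two well-separated clusters of 2-feature points,
# labeled 1 ("dog") or 0 ("not dog"). Entirely synthetic.
X = np.vstack([rng.normal(size=(200, 2)) + 2.0,
               rng.normal(size=(200, 2)) - 2.0])
y = np.concatenate([np.ones(200), np.zeros(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single-neuron "network": sigmoid(w.x + b), parameters start at zero.
w, b = np.zeros(2), 0.0

# Gradually adjust the parameters: each pass, nudge w and b downhill
# on the cross-entropy loss between predictions and the desired labels.
for _ in range(500):
    p = sigmoid(X @ w + b)          # current classifications
    grad_w = X.T @ (p - y) / len(y)  # gradient of the loss w.r.t. w
    grad_b = float(np.mean(p - y))   # gradient of the loss w.r.t. b
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
print(f"training accuracy: {accuracy:.2f}")
```

After enough passes, the parameters "give the classifications we want" on this toy data, which is the stopping condition Google's post describes.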
One particularly visually stunning way to then test the network's understanding is to ask it to discover a certain object in white noise, such as a banana.
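The trick here inverts the training loop: instead of adjusting the network's parameters, you hold them fixed and adjust the *image*, starting from noise and nudging it uphill on the network's score for the target class. Below is a minimal sketch of that idea with a hypothetical linear scorer standing in for the real deep network; the template, sizes, and step count are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "network": a fixed linear scorer for the class "banana".
# (Hypothetical weights; Google's version is a deep convolutional model.)
banana_template = rng.normal(size=(8, 8))

def banana_score(image: np.ndarray) -> float:
    """How strongly the frozen 'network' responds with 'banana'."""
    return float(np.sum(image * banana_template))

# Start from white noise...
image = rng.normal(scale=0.1, size=(8, 8))
start = banana_score(image)

# ...then repeatedly nudge the image (not the weights) in the direction
# that increases the class score. For a linear scorer, the gradient of
# the score with respect to the input is just the template itself.
for _ in range(100):
    image += 0.1 * banana_template

end = banana_score(image)
print(end > start)  # the score rises as a "banana" emerges from the noise
```

With a real multi-layer network, the gradient is computed by backpropagation through the frozen weights, and the features the network has learned for "banana" gradually appear in the image.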