
Revisiting the Deep Dream Generator

Floral shot edited with Deep Dream generator

I last blogged about Google’s Deep Dream in 2015, so the AI has had five years to learn. With the recent AI blog experiment, it seemed like the right time to take a look back. And the Deep Dream Generator does have a lot more options now.

I started with this image:

And generated these renditions:

I got the best results from Thin Style, with similar results from Deep Style, where you select an art style to merge with your uploaded image. The standard Deep Dream results look quite similar to those from 2015, although considerably less trippy (as they called the filter back then). Thin Style doesn’t change the original image as immensely, so it really depends on what look you’re going for. Check the trending page and you’ll see people have created really amazing images with flowers and other organic shapes. And I think that’s truly the best way to use this tool, if it suits the style you’re going for.

I really like this floral result, from one of my images run through the Deep oil painting style.

One really interesting thing about the Deep Dream results: while the generator was very focused on dogs in 2015, it seems to prefer ducks, and maybe spiders, here in 2020. I ‘went deeper’ 10 times to try to emulate the results I got last time around, when I sent my image through the generator the same number of times. I selected the extra-high enhance and deep inception options on each pass, and used the same image both years.

Miniature clay mushrooms image

Is this useful for my toy photography? Eh, well, no, probably not. But regardless, it’s time for a deep-ish dive into neural networks. How do they actually work? Well, I’m not smart enough for this, but here’s what I’ve gathered (a ton more information can be found on Google’s AI blog here, but Pathmind has an even more understandable guide here).

“Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated.”

– Pathmind
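To make that “translated into vectors” idea concrete, here’s a minimal Python sketch (using the Pillow and NumPy libraries) of how a photo becomes the kind of numeric input a network can actually read. The filename is just a placeholder:

```python
from PIL import Image
import numpy as np

# Load a photo and shrink it to a fixed size so every input
# has the same number of values. "floral.jpg" is a placeholder.
img = Image.open("floral.jpg").convert("RGB").resize((64, 64))

# Each pixel becomes three numbers (red, green, blue), scaled to 0-1.
pixels = np.asarray(img, dtype=np.float32) / 255.0

# Flatten the 64x64x3 grid into one long vector of 12,288 numbers --
# this is the "numerical pattern" the network actually sees.
vector = pixels.reshape(-1)
print(vector.shape)  # (12288,)
```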

Deep learning, i.e. neural networks, finds patterns in available data. The networks keep learning as more data becomes available to pull from.
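To make the “learning” part less abstract, here’s a minimal sketch of a single artificial neuron finding a pattern in made-up toy data. The real networks behind Deep Dream are vastly larger, but the basic loop is the same idea: guess, measure the error, nudge the weights, repeat:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 two-number "inputs"; the pattern to learn is
# whether the first number is bigger than the second.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > X[:, 1]).astype(np.float32)

# One tiny "neuron": two weights and a bias, adjusted repeatedly.
w, b = np.zeros(2), 0.0
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid activation
    grad = pred - y                         # how wrong each guess is
    w -= 0.1 * (X.T @ grad) / len(X)        # nudge weights toward the pattern
    b -= 0.1 * grad.mean()

accuracy = ((pred > 0.5) == y).mean()
print(f"learned the pattern with {accuracy:.0%} accuracy")
```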

You can see this in action when Facebook recognizes you in pictures. It has learned the patterns of your face and can use what it’s learned to find those same facial patterns in other images. The photo app on your phone may do something similar.
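From what I’ve gathered, systems like this typically turn each face into a vector (an “embedding”) and then compare vectors; faces of the same person end up pointing in nearly the same direction. Here’s a hedged sketch of that matching step, with made-up numbers standing in for what a trained network would produce:

```python
import numpy as np

def cosine_similarity(a, b):
    """How closely two pattern vectors point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings: in a real system a trained network turns each
# face photo into a vector like these (these numbers are invented).
my_face_photo_1 = np.array([0.91, 0.10, 0.40, 0.02])
my_face_photo_2 = np.array([0.88, 0.14, 0.35, 0.05])
someone_else    = np.array([0.05, 0.80, 0.10, 0.55])

print(cosine_similarity(my_face_photo_1, my_face_photo_2))  # ~1.0 -> likely same person
print(cosine_similarity(my_face_photo_1, someone_else))     # much lower -> different person
```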

Data companies also hope to use consumer data in a similar way to assist with things like shopping. A plan laid out by a company I used to work for went something like this: you have an app, or maybe something like Google Glass; you go into a grocery store, and the technology shows you the path to the aisle you need and points out the item you want on the shelf, based on either an entered shopping list or learned shopping behavior. It can also tell you about the ingredients of the item you’ve picked up and help you avoid certain ones. This would require a huge product and store database, but it’s on its way in some form.


What do you think? Will you use the Deep Dream Generator to edit your photos? What AI do you find useful? What do you think the impact of this kind of technology will be on the world as it progresses?

 