Revisiting the Deep Dream Generator

I last blogged about Google’s Deep Dream in 2015, so the AI has had five years to learn. With the recent AI blog experiment, it seemed like it was time to take a look back. And the Deep Dream Generator does have a lot more options now.

I started with this image:

And generated these renditions:

I got the best results from Thin Style, with similar results from Deep Style, where you select an art style to merge with your uploaded image. The standard Deep Dream results look quite similar to those from 2015, although considerably less trippy (as they called the filter back then). Thin Style doesn’t change the original image as immensely, so it really depends on what look you’re going for. Check the trending page and you’ll see people have created really amazing images with flowers and other organic shapes. I think that’s truly the best way to use this tool, if it suits the sort of style you’re going for.

I really like this floral result, created from one of my images through the deep oil painting style.

The one really interesting thing about the Deep Dream results is that while the generator was very focused on dogs in 2015, it seems to prefer ducks and maybe spiders here in 2020. I ‘went deeper’ 10 times to try to emulate the results I got last time around, when I sent my image through the generator that same number of times. I selected the extra high enhance and deep inception options each time, and I used the same image both years.

Miniature clay mushrooms image

Is this useful for my toy photography? Eh, well, no, probably not. But regardless, it’s time to do a deep-ish dive into neural networks. How do they actually work? Well, I’m not smart enough for this, but here’s what I’ve gathered (a ton more information can be found on Google’s AI blog here, but Pathmind has an even more understandable guide here).

“Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated.”

– Pathmind

Deep learning, i.e., neural networks, creates patterns from available data. The networks learn as more data becomes available to pull from.
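
To make that a little more concrete, here’s a minimal sketch of a neural network “learning” a pattern. It uses PyTorch and completely made-up data, and it’s nothing like the network behind the Deep Dream Generator – just the general idea of weights getting nudged until the predictions match the labels.

```python
# A minimal sketch (not what the Deep Dream Generator actually runs):
# a tiny neural network that learns to recognize a pattern in example data.
# Assumes PyTorch is installed; the layer sizes and data are invented.
import torch
import torch.nn as nn

# Fake "sensory data": 100 samples, each a vector of 16 numbers,
# labeled 1 or 0 depending on whether the pattern (a positive sum) is present.
inputs = torch.randn(100, 16)
labels = (inputs.sum(dim=1) > 0).float().unsqueeze(1)

# Two layers of weighted connections, loosely inspired by connected neurons.
model = nn.Sequential(
    nn.Linear(16, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
    nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# "Learning" is just nudging the weights so predictions match the labels.
for epoch in range(200):
    optimizer.zero_grad()
    predictions = model(inputs)
    loss = loss_fn(predictions, labels)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.3f}")  # lower means the pattern was learned
```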

You can see uses of this in Facebook being able to recognize you in pictures. It has learned the patterns of your face and can use what it has learned to find those same facial patterns in other images. The photo app on your phone may do something similar.
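
You can picture the matching step as comparing numeric “fingerprints” of faces. Below is a toy sketch with invented numbers and a made-up threshold; real systems use much larger embedding vectors produced by a trained model, but the comparison idea is the same.

```python
# A hand-wavy sketch of face matching, assuming some model has already turned
# each face into a numeric "embedding" vector. The vectors and the threshold
# here are invented purely for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two face embeddings point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

your_face = np.array([0.9, 0.1, 0.4, 0.7])            # embedding from a tagged photo
face_in_new_photo = np.array([0.85, 0.15, 0.38, 0.72])
someone_else = np.array([0.1, 0.9, 0.6, 0.05])

THRESHOLD = 0.95  # made-up cutoff for "probably the same person"
print(cosine_similarity(your_face, face_in_new_photo) > THRESHOLD)  # True for these numbers
print(cosine_similarity(your_face, someone_else) > THRESHOLD)       # False for these numbers
```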

Data companies also hope to use consumer data in a similar way to assist with things like shopping. A plan laid out by a company I used to work for went something like this – you’ll have an app, or maybe something like Google Glass; you go into a grocery store, and the technology shows you the path to the aisle you need and points out the item you want on the shelf, whether through an entered shopping list or learned shopping behavior. It can also tell you about the ingredients of the item you’ve picked up and help you avoid certain ones. This would involve a huge product and store database, but it’s on its way in some form.


What do you think? Will you use the Deep Dream Generator to edit your photos? What AI do you find useful? What do you think the impact of this kind of technology will be on the world as it progresses?

12 thoughts on “Revisiting the Deep Dream Generator”


    1. Thank you so much! You could totally use the Deep Dream Generator too if you want to. Just upload your image at the top of the page, choose the art style below, then click generate and it does the rest for you!

  1. I really don’t like AI, or see a reason why it’s even desirable in many cases. It feels like it takes two things away.
    One is intentionality. I actually like to read labels and think about what I am buying, eating, wearing, and producing. Yes, it is slower, but I’m not sure faster is always better… And sometimes I change my mind.
    The second, the one I most miss now that the Google-monster, Amazon, Facebook, and whoever else are controlling what I see with algorithms of what they think I want (or what they think I’ll buy), is serendipity. In earlier internet days I used to go off and search on some topic or other, then wind up all over the place learning about things I never thought of, some more interesting than the original topic. Like when I pick up a physical encyclopedia to look up something and wind up finding something on the same page, connected only by a shared letter of the alphabet, that intrigues me.

    1. I completely understand. I’m on a bit of an AI research kick at the moment, but my tastes ebb and flow. Things like the possibilities of deep fakes terrify me, but I try to look for the usefulness in technology as it advances.

    1. Oh I’m so gonna try and learn this, it is fascinating. I just watched a tutorial about it. I’m gonna dig “deeper” 😉😁 thank you for the introduction
