---
schema_type: DigitalDocument
title:       On AI Art -- Artists Weren't Happy When Photography Was Invented, Too
date:        2022-09-30
last_update: 2022-09-30

references: 
  - label: Original Post
    url:   https://merveilles.town/@jbauer/109088036845654325
  - label: Original Reply
    url:   https://mk.vulpes.one/notes/95s8h9ajtp
---

::: alert
I posted the title of this page on fedi. Someone replied and I elaborated on my views with the following post.
:::

The primary reason AI art is getting banned so widely is that a lot of people are wowed by the novelty of this technology and post their shitty results. But there are also people who spend a lot of time tuning their prompts, running the results through img2img several times, and touching things up manually in Photoshop. They put real effort into getting good results.

AI is making good art a lot more accessible for everyone, including artists. I could run a drawing of mine through it to see what lighting or background works well, without having to spend hours doing it myself and still not getting close to the quality of "real" artists.
People who have amazing ideas but lack the skills to draw them themselves finally have a tool that works for them.

Also, regarding sentience: this topic is so complex I don't even know where to begin, but I've made an interesting observation. The way this AI works isn't actually much different from the way we humans work.

Artists often use references, look up to other artists, and adapt qualities from them in their own art. The way artist A draws scenery, the shading techniques from B and C, the character designs from D -- of course not copied outright or with clear boundaries between influences; it all flows together as they see more art and feel inspired to adopt some qualities from it. This is true for me as well.

Stable Diffusion was trained by looking at art as well. It doesn't have a database of every single picture. Instead, it recognizes various concepts and places them in some N-dimensional space -- its memories. One point in this space captures a concept it has seen in many different pictures. The prompt simply determines which of these concepts the AI will try to use (up to 75 tokens with SD).
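To make the "points in an N-dimensional space" idea concrete, here's a toy sketch. The vectors below are completely made up (and only 4-dimensional; real models use hundreds of dimensions), so this is just an illustration of the geometry, not how Stable Diffusion actually stores anything:

```python
import numpy as np

# Hypothetical "concepts" as points in a small embedding space.
# These numbers are invented for illustration only.
concepts = {
    "soft_shading":  np.array([0.9, 0.1, 0.0, 0.2]),
    "hard_shading":  np.array([0.1, 0.9, 0.1, 0.0]),
    "warm_lighting": np.array([0.2, 0.0, 0.9, 0.3]),
}

def cosine_similarity(a, b):
    """How close two concepts sit in the space (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A "prompt" selects and blends concepts, here by averaging their vectors.
prompt_vec = (concepts["soft_shading"] + concepts["warm_lighting"]) / 2

# The blend sits closer to its ingredients than to an unrelated concept.
print(cosine_similarity(prompt_vec, concepts["soft_shading"]))  # high
print(cosine_similarity(prompt_vec, concepts["hard_shading"]))  # low
```

The point of the sketch: nearby points represent related concepts, and a prompt picks out a region of the space rather than looking up any stored image.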

Do you see the parallels? At least from my limited understanding, this doesn't seem much different from humans looking at pictures, recognizing shading, lighting, and painting techniques, and selectively using them in their own art.