Lately, I’ve been playing with the AI art/image generator Stable Diffusion (the webUI version), along with models/checkpoints from Hugging Face and Civitai. To my astonishment, this stuff is free and open source, and relatively easy to run on cheap consumer video cards; I’m primarily using an Nvidia GeForce RTX 2060 and an AMD Radeon RX 6600 XT. However, documentation is a bit sparse given the complexity of the tool and its models, at least for people who are not specialists in machine learning or generative AI. As of this writing, I still can’t find any official documentation explaining what exactly the sampling methods (e.g. Euler, Heun, PLMS, DPM2, etc.) are that can be chosen to generate images.