EN - AI Artist Spotlight & Tutorial: Citrus (@AI_Illust_000)

Greetings from Japan!
Nice to meet you all, my name is Citrus (@AI_Illust_000). I live in Japan, and I frequently use AI image generation to create various AI art illustrations!
It was around October 2022 that NovelAI became quite the hot topic in Japan, with a booming community creating illustrations using NovelAI's newly released, Stable Diffusion-based image generation model, NAIDiffusion.
I was one of them, and I remember that, at the time, there was no other image-generating AI that could produce illustrations of the same quality as NovelAI. A lot of people were fascinated by the previously unheard-of experience of easily generating aesthetic images via AI.
In the midst of this major movement, the 1st AI Image Contest (https://blog.novelai.net/ai-contest-pre-exhibit-d7448e46e96f) was held, in which participants created illustrations using image-generating AI. The top winning entries, selected via community vote, were exhibited in a physical gallery in Yokohama, Japan. With over 2,000 participants, it was a very large-scale contest!
My work was unexpectedly well received, and I was delighted to win first place. The AI Image contest was also featured in an NHK (Japan Broadcasting Corporation) program, live on TV, and it attracted a great deal of positive attention!
Up until then, I had not led a life that focused on art or creativity in any way. I love to watch anime and play video games, which are a part of Japanese culture; however, I had little experience drawing or creating anything myself.
One of the reasons for this is my disability.
When I was 19 years old, I was diagnosed with schizophrenia, and when I turned 21, I was also diagnosed with a heart condition. Fortunately, I was able to receive treatment, and I can control my symptoms and lead a normal life. However, this treatment can cause mild hand tremors.
I was convinced that there was no way that I could ever do delicate work, like holding a paintbrush or drawing lines.
I had reached the depths of despair and was completely losing confidence, not knowing what to do to help myself…
That is when I came across image-generating AI art. It is a new method of expression, completely different from conventional methods of art, that gives shape to images by describing them in text and tags. Using this method requires no special training and can be done by anyone, including myself. For the first time ever, I was able to give form to the images in my head through the use of generative AI.
This tool, image-generating AI, made possible what I had previously believed was impossible. No special skills or capabilities are required; just an idea, a picture in your head, is all you need to create an illustration!
In the not-too-distant future, a new generation of artists will emerge, defined not by conventional wisdom but by how they utilize AI image generators. I believe that new art will come from people who have never been involved in the artistic world, or who, for various reasons, have lost their purpose and goals.
People who are handicapped or those who lack special talents can now make it to the starting line.
It is my personal belief that generative AI will continue to be a tool that assists humans and expands their capabilities.
Well… that was a long introduction!
For now, I would like to share with you a part of my workflow that I use while creating some of my artwork!
Let’s get started!
Workflow with Text to Image
I create most of my illustrations using Text to image (txt2img).
At present, this is also the most common method of visual generative AI: it creates illustrations from written commands, and here we also make use of NovelAI's tags in combination with regular text.
These are illustrations created using txt2img, during the NovelAI Diffusion V3 private beta test phase.
Here is a description of what I do when creating txt2img illustrations, along with the considerations I consciously keep in mind:
When creating an illustration with txt2img, it is important to figure out the characteristics of the illustration you want to create.
It is also important to organize the characteristics of the image into tags that form a “prompt”: a linguistic command written so that the AI can understand it and generate an image for you.
This is an example of how I organize the aspects of the image I want to create:
- An android girl who looks like an angel is being destroyed.
- A place that seems abandoned, to evoke the feeling that something destructive, such as a fierce battle, took place there.
I also desired that some of her mechanized parts imitate the human body, in order to bring out an element of contrast: part machine, but nearly human.
These are the words and definitions that I further decomposed and classified into tags, which are easily processed by NovelAI Diffusion V3:

After we’ve organized the information into NovelAI tag form, we make minor adjustments by repeatedly sorting the tags and adjusting their weights.
Since the effectiveness of tags can be weakened or enhanced depending on their combination, position, and order in the text prompt, it is better to place the elements with the highest degree of relevance first.
That’s how this illustration was completed.
Here is the text prompt I actually used:
Prompt:
1girl, Android, mechanical parts, very aesthetic, best quality, {transparent body}, {{circuits through the transparent skin}}, {{{amputee, circuit board}}}, cyborg, ruin, {X-Ray}, halo, detached wings
Undesired Content:
worst quality, very displeasing, bad image, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality
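A quick note on the curly braces used in the prompt above: in NovelAI, wrapping a tag in curly braces strengthens its influence, and each additional pair is commonly described as scaling that tag's emphasis by a factor of roughly 1.05. The short Python sketch below only illustrates that arithmetic; treat the exact factor as an approximation rather than a specification.

# Rough illustration of how nested curly braces scale a tag's emphasis.
# The ~1.05-per-level factor is an approximation of how the feature is
# commonly described, not a value taken from this tutorial.
BRACE_FACTOR = 1.05

def emphasis(levels: int) -> float:
    """Approximate emphasis multiplier for a tag wrapped in `levels` curly braces."""
    return BRACE_FACTOR ** levels

for tag, levels in [
    ("transparent body", 1),                       # {transparent body}
    ("circuits through the transparent skin", 2),  # {{...}}
    ("amputee, circuit board", 3),                 # {{{...}}}
]:
    print(f"{tag}: ~{emphasis(levels):.3f}x emphasis")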

NovelAI Diffusion V3 has very high performance; txt2img alone already allows for a very wide range of artistic expression.
From cool robots to cute girls and abstract paintings, you can make all kinds of AI illustrations depending on your visual goal!
Still, there are limits to what txt2img alone can do, depending on the content of your prompt and how you express yourself through tags and text.
So next, I will explain the workflow I use for AI illustrations using NovelAI’s Image to Image (img2img) feature!
Workflow using Image to Image
“What is Image2Image?
When you hit Upload Image, you have the option to provide the AI an image to work off of. After selecting an image of your choice, you can customize how much your uploaded image vs. your written prompt will affect your next generation, via the Settings; or, you can just hit Generate and see what happens!”
I know this is out of the blue, but this is a kanji character called “彩”, which means “to look colorful” or “to color” in a variety of ways.
The reading is “Aya” or “Sai”.
While familiar to Japanese people, this symbol may look strange to those who are not. Many Japanese kanji characters are pictographs: symbols that represent the shape of things.
The kanji for “彩” consists of two elements: “采 + 彡”.
“采”, on the left, depicts the harvesting of fruits and nuts from trees.
“彡” on the right, is a depiction of long, flowing hair.
From these, the kanji “彩” literally represents a woman with long hair selecting and picking colorful nuts and fruits; figuratively, this is turned around to mean “having a bright color.”
Although this symbol is only one glyph, there is a lot of history behind it; the shape of kanji itself also carries layers of meaning.
So, what if you wanted to create an illustration that, like a kanji, is full of subtle messaging?
First of all, it is nearly impossible to achieve such a result with the text-to-image prompting method described earlier.
The method I will introduce in the following section is based on the Image to Image (I2I) generation process.
A ‘noisy’ textured original file is required for this purpose.
To create it, we take an illustration and bring it close to decomposing into mere visual noise using I2I; that output becomes our base image.
Since it has multiple colors superimposed on each other, this base image has the characteristic that its whole area can readily be transformed into virtually any color by the AI.
There are three advantages to utilizing noisy illustrations as output material:
First, the AI has the freedom to traverse difficult aspects of image composition while generating ideas.
Second, it is easy to make additions and edits, whether in the built-in NovelAI image editor or in external tools.
Third, although not used in this instance, it allows you to intentionally leave abstract elements, such as fractures or a sense of digital noise, in the final illustration.
At this time, I would like to share an idea of mine for creating artwork using noise textures created with NovelAI.
I will then show you how to create an “illustration that looks like a letter” from that point onwards.
Creating the Noise Texture
As a preliminary step, we will create a noise texture file to be used for the image to image process.
Prepare an appropriate illustration and run it through img2img with the following settings to turn the illustration into visual noise.
Strength: 0.01
Noise: 0.99
By repeating this process several times over, the illustration turns into complete visual noise; this becomes the base image we'll be using for the final img2img step of this tutorial.
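If you would like an offline feel for what these repeated passes do to the picture, here is a minimal Python sketch that simply blends an image toward random noise over several iterations. It is only an approximation of the idea, not NovelAI's img2img process, and the blend factor and pass count are arbitrary assumptions.

# Minimal sketch: progressively blend an illustration toward random RGB noise.
# This only approximates the "repeat img2img at Strength 0.01 / Noise 0.99" idea;
# the blend amount and number of passes are arbitrary assumptions.
import numpy as np
from PIL import Image

def noisify(path_in: str, path_out: str, passes: int = 8, blend: float = 0.4) -> None:
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    rng = np.random.default_rng()
    for _ in range(passes):
        noise = rng.uniform(0, 255, size=img.shape).astype(np.float32)
        img = (1.0 - blend) * img + blend * noise  # mix the current image with fresh noise
    Image.fromarray(img.clip(0, 255).astype(np.uint8)).save(path_out)

# noisify("illustration.png", "noise_texture.png")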
Processing the Noise Texture
Clip the created noise texture into the shape of whichever grapheme you desire. This process can be done with external paint tools, or you could try to ‘draw’ the shape by erasing areas in the NovelAI image editor.
The key concept in processing noise textures is:
Noisy areas → Areas which easily change into various colors
Filled areas → Areas which have difficulty changing color
Our aim is to control the output by intentionally narrowing the areas in which the AI can freely change colors. By using img2img on a glyph-shaped noise texture, you can easily create illustrations that resemble that glyph.
Here is a quick example of creating blocked areas in NovelAI's built-in Canvas. If you do not have external image editing software available, you can also make use of its layers to further refine your shapes.
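If you would rather script this step, here is a rough Python sketch of the same idea using Pillow: render a glyph as a mask, keep the noise texture inside the glyph, and fill everything outside with a flat color. The font path is a placeholder you would need to point at a font on your own machine (for kanji, one with CJK glyphs).

# Rough sketch: clip a noise texture to a glyph shape.
# Inside the glyph  -> noise      (area the AI can repaint freely)
# Outside the glyph -> flat fill  (area that resists change)
from PIL import Image, ImageDraw, ImageFont

def glyph_noise(noise_path: str, glyph: str, font_path: str, out_path: str,
                size: int = 1024, fill=(255, 255, 255)) -> None:
    noise = Image.open(noise_path).convert("RGB").resize((size, size))
    background = Image.new("RGB", (size, size), fill)

    # Draw the glyph into a single-channel mask (white = keep the noise).
    mask = Image.new("L", (size, size), 0)
    font = ImageFont.truetype(font_path, int(size * 0.9))
    ImageDraw.Draw(mask).text((size // 2, size // 2), glyph, fill=255, font=font, anchor="mm")

    Image.composite(noise, background, mask).save(out_path)

# Example call (the font path is a placeholder):
# glyph_noise("noise_texture.png", "彩", "/path/to/NotoSansCJK-Regular.ttc", "glyph_noise.png")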

Output with I2I
Set your glyph-shaped noise texture as your base image for I2I generation, and control the output results with fine adjustments to the Strength and Noise values of your generation settings.
Your own Noise value should be set to 0; it is the Strength value that is particularly important here!
Lower Strength settings will result in outputs that preserve the shape of the original glyph, while higher settings will result in more stable outputs resembling illustrations.
When creating images using this technique, I often set Strength to values between 0.8 and 0.9.

The output will be very unstable, because the area in which the AI can draw has intentionally been narrowed. Repeat the process of adjusting the settings and observing the outputs patiently, until you achieve the vision you intend to create.
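Because the results swing so much, I find it helps to treat this stage as a small parameter sweep: fix Noise at 0, step Strength through the 0.8 to 0.9 range, and compare the outputs. The sketch below is purely illustrative; the img2img() helper is a hypothetical stand-in for whatever generation step you use, not a real NovelAI function, and the prompt is only an example.

# Hypothetical sketch of the "adjust, generate, observe" loop from this step.
# img2img() is a stand-in for your own generation step (manual or scripted);
# it is NOT a real NovelAI function.
def img2img(base_image: str, prompt: str, strength: float, noise: float) -> str:
    # Placeholder: in practice, run the generation in NovelAI with these settings
    # and save the result; here we only report what would be generated.
    out = f"result_strength_{strength:.2f}.png"
    print(f"generate from {base_image!r} with strength={strength}, noise={noise} -> {out}")
    return out

base = "glyph_noise.png"                                    # the glyph-shaped noise texture
prompt = "1girl, long hair, very aesthetic, best quality"   # example prompt only

# Sweep the Strength range explored above, keeping Noise at 0, then pick the
# output that keeps the glyph silhouette while still reading as an illustration.
for strength in (0.80, 0.83, 0.86, 0.90):
    img2img(base, prompt, strength=strength, noise=0.0)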

This is how the following picture was created.
In addition to glyphs, there are many other uses for this img2img process, using noise textures blocked out according to your own preferences.
This technique can be applied to any idea you might have; so please, give it a try!
You can follow Citrus on X (@AI_Illust_000) or his Note Blog (https://note.com/aiillust000) to stay up to date with his latest NovelAI and general AI Art findings! He periodically posts amazing tips and tricks just like the tutorial above!