Just released: FoldPix screen art piece sets

General / 09 December 2025

More news on the digital product publication front: a series of 5 screen art piece sets called FoldPix has been released. Sets in the FoldPix series feature images in a 5:6 aspect ratio, designed for viewing on the inner screen of foldable phones from a number of manufacturers such as Google and Samsung. The master images in the series are high resolution (4K and up), each built around a unique visual idea and exquisitely rich detail. Creating these images involved both cutting-edge AI technology and laborious manual editing (also known as inpainting). In preparing this digital product, each image in the set was individually curated for the quality of its composition and for fine detail not normally found in typical AI-generated output.

Some of the images released as FoldPix have been recast into this new format from artworks previously published on ArtStation; others are completely new. The 5 sets include:

Mucha meets Klimt: celebrating the Eros

A newborn in the care of an alien

Frosted glass relief erotica

Fractal cameos

The involute canvas

Some examples of FoldPix images (scaled down):

Born in space

The snow globe infinity 

Fractal sniffing I 



Just released: ultra-wide wallpaper sets, the first series of digital art products by Encore Art AI

News / 06 December 2025

Exciting news: the first series of digital art products by Encore Art AI has just been published. The series includes one set of 5K WUXGA 32:9 widescreen wallpapers and three sets of 8K UWUHD 32:9 widescreen wallpapers, each set comprising 5 items (plus a bonus one). These sets are:

The hidden world of Callisto

In faraway waters

Water lilies and frogs

Vietnamese fishing boats in the mist

Many more still to come!

Pixel morsels, what are they?

Article / 27 November 2025

A pixel morsel is what I call a fragment of a (usually very large) digital artwork, published as a separate creation and designed to serve as a visual sample before the artwork itself is released. Such a fragment usually features a particularly striking part of the artwork; in other words, it is a kind of teaser. A pixel morsel can be creatively cut out of the master image, acquiring along its edge small embellishments that weren't part of the original picture. I quite enjoy making pixel morsels and consider them an art form of their own.

I came up with this idea while trying to figure out how to attract attention to my works amidst the voluminous ArtStation image feed (or that of any other digital art-hosting platform, for that matter). To be effective for this purpose, I thought, a pixel morsel should have an eye-catching, intriguing shape set against a flat black or white background. If it is a teaser for an upcoming release, it would include the image title and/or the planned date. It can also serve as a reminder to visit a previously published artwork that the viewer might have missed.

Originally, pixel morsels were devised to represent images of the HyperPixel standard, which I define as a large creation of at least 8K pixels in each dimension. Since such images carry no particular visual indicator of their radical size when published, they can easily be overlooked in the image feed, especially when they are of non-horizontal orientation or lack a distinct, eye-catching subject or pattern. Later on, I realised there was no reason why pixel morsels couldn't be created for smaller images as well, including non-AI-generated ones. (Although, to be fair, not every image has fragments that would make sense as a pixel morsel, and the process of making one requires substantial Photoshop experience and can be laborious.)

But we shall see if these pretty creations can make any difference in grabbing the attention of the perpetually overwhelmed AS site visitor.

Below are some examples of pixel morsels I created for my upcoming HyperPixel images on AS:

The alien artefact (1% of the actual image)

The grand spectacle (1% of the actual image)

Meditation (3% of the actual image)


Introducing HyperPixel, a new standard in AI imaging

Article / 18 November 2025

AI-assisted imaging can be an inventive and very engaging process, capable of producing stunning visual creations; we all know that. Sadly, it remains far from accepted as a proper art form by the traditional artistic community. I myself come from such a community (speaking of digital art), but had no reservations about switching to the new AI imaging tools. After more than two years of deep involvement with AI-assisted art making, I came to the conclusion that a new imaging standard might be necessary: one that could help AI-assisted creations and their authors gain more attention, and hopefully respect, from digital artists of the traditional kind.

Since making high and ultra-high resolution images has long been my specialty and area of expertise, I would like to introduce such a standard first for this category of digital works. To keep this blog post from growing too long, I will just list the two main criteria that, in my mind, should define it. An AI-assisted image can be regarded as HyperPixel if:

  • it has at least 8K pixels in each dimension, or a minimum of 64 MP in total

  • it features fine, meaningful detail across the entire image, most of it the result of manual editing (also known as ‘inpainting’) or another form of curated refinement

The first criterion is a simple (but high) technical threshold that defines the Hyper category. The second is meant to prevent routinely upscaled images, including those upscaled with the most advanced and expensive upscaling platforms such as magnific.ai and Topaz Labs, from qualifying automatically. It should also help such artworks gain recognition from the traditional artistic community.
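For what it's worth, the purely technical first criterion is easy to check in code. The sketch below is my own illustration, not part of any tool, and it treats "8K" as 8,000 pixels, which is an assumption on my part:

```python
def qualifies_as_hyperpixel(width: int, height: int) -> bool:
    """Check the technical HyperPixel threshold: at least 8K pixels
    in each dimension, or at least 64 MP in total.
    (The second criterion, curated detail, cannot be checked by code.)
    """
    EIGHT_K = 8000          # treating "8K" as 8,000 px: an assumption
    MIN_TOTAL = 64_000_000  # 64 megapixels
    return min(width, height) >= EIGHT_K or width * height >= MIN_TOTAL

# A 16K-by-10K master easily passes; a 4K square does not:
print(qualifies_as_hyperpixel(16000, 10300))  # True
print(qualifies_as_hyperpixel(4096, 4096))    # False
```

Note that the two conditions are not redundant: a wide panorama of, say, 10000 x 6500 pixels falls short of 8K in its shorter dimension but still clears the 64 MP mark.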

To illustrate what kind of creations can be defined as HyperPixel, I will share a few examples from my AI-assisted image library (most of them still awaiting publication here). The first is The stage angel, the demo artwork featured in the previous journal post on quality upscaling. It is the result of a multi-stage process of upscaling, refining and manual inpainting which I use to demonstrate the techniques from my kitchen, so to speak.

The full 16K (165 MP) version is too large to be shown efficiently on ArtStation, so I'm inviting you to see it on EasyZoom (no login required):

https://www.easyzoom.com/embed/0192621b1d5f4a5fac9e51727760e457?show-annotations=false 

Another one is Flowers and butterflies: 


the full 16K (144 MP) version of which is also available on EasyZoom:

https://www.easyzoom.com/embed/64c502581bc140ffb37a810c5bb687f4?show-annotations=false

In case you can’t be bothered clicking away to EasyZoom, below are a few 1:1 scale fragments from the master image; they should give you an idea of the level of manually inpainted detail I am talking about.

Flowers and butterflies, fragment 1 (0.7% of the main image's pixel space):

Flowers and butterflies, fragment 2 (0.5% of the main image's pixel space):

Flowers and butterflies, fragment 3 (1.2% of the main image's pixel space):

Many more are still to come, so stay tuned! And of course, your reactions and comments will be much appreciated.



Quality image upscaling: what it is and how to do it

Article / 13 November 2025

When you generate an image with a web-based AI image generator, the result is usually a 1 MP picture: in the most common case of a square aspect ratio, it will have 1K pixels in each dimension. By choosing a non-square ratio, you can get an image larger than 1K in one dimension, but at the price of the other being proportionally smaller. Some platforms allow you to choose a resolution substantially higher than 1K in one or even both dimensions, perhaps up to 2.5 MP in total, but this usually leads to distortions or artifacts in the output, because most AI imaging models, to this day, are trained on images of around 1 MP. To have your artwork at a substantially larger resolution, you need to upscale it.
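The arithmetic behind that megapixel budget can be sketched in a few lines of Python; the helper name is mine, purely for illustration:

```python
import math

def dims_for_budget(aspect_w: int, aspect_h: int,
                    megapixels: float = 1.0) -> tuple[int, int]:
    """Approximate pixel dimensions for a given aspect ratio
    within a fixed megapixel budget."""
    total = megapixels * 1_000_000
    # Scale the aspect units so their product hits the budget.
    unit = math.sqrt(total / (aspect_w * aspect_h))
    return round(aspect_w * unit), round(aspect_h * unit)

print(dims_for_budget(1, 1))    # (1000, 1000): the common square case
print(dims_for_budget(16, 9))   # (1333, 750): wider, but proportionally shorter
```

This is exactly the trade-off described above: a 16:9 image within the same 1 MP budget gains width only by giving up height.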

Why would you need to upscale it, and what sizes should you target? For starters, an art print demands really high resolution, like 6-8K and up, depending on the print format. You might also want to produce that gorgeous wallpaper for your super-ultrawide gaming screen, which may require upscaling to 4K at the very least. And when you sell your creations on a platform like ArtStation, many buyers will want an upscaled version along with the regular one; the upscaled size may then vary from 4K up to 16K, depending on wishes and circumstances.
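As a rough illustration of those print numbers: a common rule of thumb is about 300 DPI for a sharp print (the exact figure depends on the print shop and viewing distance, so treat it as an assumption), which makes the pixel requirements easy to estimate:

```python
def pixels_for_print(width_in: float, height_in: float,
                     dpi: int = 300) -> tuple[int, int]:
    """Pixels needed for a print of the given size in inches,
    at a given DPI (300 is a common rule of thumb, not a hard standard)."""
    return round(width_in * dpi), round(height_in * dpi)

# A 24x18 inch print at 300 DPI already calls for a ~7K master:
print(pixels_for_print(24, 18))  # (7200, 5400)
```

A freshly generated 1 MP image is an order of magnitude short of that, which is the whole point of upscaling.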

Before the dawn of the AI imaging era, upscaling was only possible in a program like Photoshop, using a built-in resizing method such as Bicubic, Lanczos or another of the standard variety. With these methods, though, the quality of the result is inherently limited: the image becomes visibly blurred when enlarged, and the larger the target size, the more degraded the output.

Not so with AI, luckily. Since AI imaging became mainstream, many specialized models have been created for algorithmic image upscaling. These are much cleverer than any non-AI method and can produce a 2x or even 4x larger image from the just-generated base picture, with output that appears significantly sharper and more detailed than anything from the older (standard) methods when compared side by side. AI-assisted image upscaling is now pretty ubiquitous, with just about every AI image generation platform offering a range of excellent upscaling options or models at reasonably affordable prices. Problem solved? Not so fast.

As someone who has been professionally involved in image processing for many years, including image compression and, more recently, upscaling, I feel qualified to clear up some common misunderstandings about the subject.

Firstly, whether you use a platform like TensorArt or SeaArt that offers its users a variety of AI upscaling models, or generate locally, no single model will work equally well for all image types, however wide the selection. There are specialized models for most kinds of images, some better than others at upscaling the material they were trained on (AnimeSharp for anime, UltraSharp for digital art, FFHQDAT for faces, UltraMixRestore for photo restoration, and so on), and there are good generalist models like the popular foolhardy_Remacri. For the particular class of images you are working with, try various dedicated models until you find the one whose output suits you best; never assume a universal upscaling solution for all images exists. It doesn't, trust me, no matter what you hear on Discord, Reddit or wherever. If there is a single big lesson I have learned from my experience with upscaling, it is that each and every image is unique and requires its own refining and upscaling solution. Experiment, experiment, and then experiment some more!

Secondly, upscaling at the highest scale factor the model or platform allows (typically 4x) is usually a bad idea. Upscaling 4x in one go might look like a good shortcut to the target resolution, but the output quality is almost guaranteed to be inferior. Always choose a smaller factor, 2x at the highest. (The full story about stepwise upscaling is a bit longer, though.)
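The stepwise idea can be sketched as a plan of modest factors; `plan_upscale_steps` is a hypothetical helper of mine, not part of any platform, and the actual upscaling model would be applied once per planned step:

```python
def plan_upscale_steps(src: int, target: int,
                       max_step: float = 2.0) -> list[float]:
    """Plan a chain of modest upscale factors (each <= max_step)
    instead of one large jump from src to target resolution."""
    steps = []
    size = src
    while size < target:
        factor = min(max_step, target / size)
        steps.append(round(factor, 3))
        size = round(size * factor)
    return steps

# Three 2x passes instead of a single 8x jump:
print(plan_upscale_steps(1024, 8192))  # [2.0, 2.0, 2.0]
```

Each intermediate result can then be inspected and refined before the next pass, which is where most of the quality gain over a one-shot 4x actually comes from.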

Thirdly, always use your own eyes to judge the quality of the upscaled image, studying it up close everywhere. No matter how smart an AI upscaling model or tool may seem, and no matter how expensive it is, it cannot work universally well across the whole image. With a typical cloud-based upscaler, there will always be areas where it performs badly, often hallucinating or leaving visible seams; this is practically guaranteed if you crank up the parameter called Creativity. I have seen enough bad output from famous and pretty expensive platforms like Leonardo.ai, krea.ai (especially), magnific.ai and Topaz Labs (the most expensive of the bunch) to claim this with confidence.

And finally, for the very best quality, relying only on AI upscaling models or the upscaling options of platforms like those listed above is never sufficient. The best possible results are achieved only with local refinement within the image, optionally combined with manual inpainting, using locally run Stable Diffusion tools like Forge WebUI, ComfyUI and, especially, Krita AI. But that is a story for another post, perhaps. It might come as a bad surprise to some of you reading this, but good upscaling requires a bit of work, and good tools too! In fact, everything of quality in art and craft requires work and specialized tools, something most generative AI artists still have to figure out, judging by what I have seen on many platforms, over and over again.

Now, to show you what I mean by quality upscaling, I will demonstrate with a few examples from my ultra-high resolution collection (what I call the HyperPixel standard). The first is a 16K image called Flowers and butterflies. The full version is too large to be shown here efficiently, so you are invited to see it on EasyZoom (no login required):

https://www.easyzoom.com/embed/64c502581bc140ffb37a810c5bb687f4?show-annotations=false

The second is an artwork called The stage angel (also 16K). Like the previous one, it is the result of multi-stage upscaling, refining and manual inpainting; I use it to demonstrate my techniques (also on EasyZoom):

https://www.easyzoom.com/embed/0192621b1d5f4a5fac9e51727760e457?show-annotations=false

(And I have more there, of course, for future publication and demos.) The images on EasyZoom, despite their sheer size, all feature the kind of detail and clarity that results from multi-step refinement and manual inpainting done with the locally run Stable Diffusion tools I mention above. I realise that this kind of finesse can be difficult to achieve for the majority of generative AI artists, for a number of objective reasons, but at the very least I hope it can serve as a reference, or a standard to strive for.

In case you feel uncomfortable clicking away to EasyZoom, below are a few 1:1 scale fragments from the master images; they should be self-explanatory, I think. Looking forward to your comments and reactions!





A Manifesto for AI-Assisted Creation

Article / 10 November 2025

I believe that true art should remain scarce. A single author cannot produce thousands of genuine artworks in a year; if art is to carry meaning, it must come from time, attention, and commitment. Quantity is oftentimes an enemy of quality and uniqueness.

For AI art, this means that a generated work should not be rushed for publication as the immediate product of one AI prompt, however well crafted, let alone as a batch of variations of the same. To be recognised as art, it should involve substantial manual labor and a series of artistic decisions — editing, curating, refining — that reflect the author’s creative involvement.

I also think that an artist should maintain a lasting relationship with their works, including those that make use of generative AI. An artwork is not something disposable, but a piece of a broader vision that develops over time. The author should be able to remember each of their creations, no matter how long ago they were produced.

Finally, every artwork should show something unique: a personal style, a technique, or a perspective that distinguishes it from others. Without this distinctiveness, there is no real authorship.

These are the principles I have come to recognise after two years of my involvement in generative AI imaging. I would be interested to hear how others see it: what, in your view, makes a digital or AI-assisted piece of work truly art?


(A disclosure: I may not always follow this manifesto to the letter in my own practice, but I am striving to. Practically every image you see here in my portfolio is the result of thorough selection among tens of generated variants, and more often than not I edit a visibly imperfect work by way of inpainting before publishing. And I am gradually converging on a few distinct styles and techniques of my own. Or so it seems.)