
IDENTIFICATION OF IMAGES GENERATED BY ARTIFICIAL INTELLIGENCE

“TERESIUS-AI”

User Manual

2024


Table of Contents

1.     DESCRIPTION OF THE PHYSICO-MATHEMATICAL PRINCIPLES OF IDENTIFICATION

2.     WORKING WITH THE SYSTEM

3.     GRAPHS OF TYPE I AND TYPE II ERRORS


INTRODUCTION

This version (TERESIUS_AI.exe) of the software complex “TERESIUS-AI” (hereinafter referred to as the System) is designed to solve the problem of IDENTIFYING ARTIFICIAL INTELLIGENCE (AI) FROM THE VIDEO IMAGES IT GENERATES (this is the web version of the System for individual users). Several systems currently address similar expert-assessment tasks. According to preliminary testing, the proposed version demonstrates high effectiveness, with decision-error rates of 0.01-5%. The System determines whether a given image was created by an artificial-intelligence application or produced without AI assistance. It also reports a probability of decision error, a variable quantity that depends on the specific AI application, the nature of the image, and various other factors.

The System operates on any video matrix with a resolution exceeding 100x100 pixels, specifically on color video recordings in RGB format of any kind. Frames selected for identification need not share the same resolution: the System analyzes video files and frames of any resolution (greater than 100x100) in any graphical format.


1. DESCRIPTION OF THE PHYSICO-MATHEMATICAL PRINCIPLES OF IDENTIFICATION

The identification system rests on the following physical reasoning. Any image generated by artificial intelligence is produced by a specific software application (such as DALL-E, Bing, Midjourney, etc.) executing a defined sequence of algorithms. The algorithms differ between applications, although their construction principles may be shared. An image generated by these algorithms exhibits characteristic statistical patterns in the color-palette transitions between pixels, and possibly many other individual traits specific to that application.

In most cases a human cannot distinguish these characteristics visually. At the same time, whatever image-generation algorithms an AI employs, they cannot be identical to the processes by which nature forms the world around us (at least at the current level of scientific understanding).

Let us denote the entire set of integral characteristics of the algorithms of a specific AI application as the “Artistic Style of the AI Application.” In this context, this style bears no relation to the conventional understanding of style, such as that of artists, renowned photographers, or any human expressions of style.

The characteristics of such AI-generated image styles are virtually impossible to perceive visually. To expose the statistically individual features of an AI application's style, a mathematical framework for image processing is required that quantifies style characteristics. In this System, several combinations of wavelet decompositions of the image serve as that mathematical apparatus. Our long-term research has shown that several parameters of these transformations correlate with the algorithms of the AI applications that generate images. This verbal exposition of the mathematical approach is, of course, a hypothesis; its effectiveness can be confirmed only by the performance of the identification system built on it.
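The idea of extracting statistical style features from a wavelet decomposition can be illustrated with a minimal sketch. The Haar transform and the particular sub-band statistics below are illustrative assumptions for demonstration only; the System's actual transforms and feature set are not disclosed in this manual.

```python
import numpy as np

def haar_2d(img):
    """One level of a 2D Haar-style wavelet decomposition.

    Splits a grayscale image (even dimensions assumed) into an
    approximation sub-band (LL) and three detail sub-bands (LH, HL, HH).
    The /4 normalization makes LL a plain 2x2 block average.
    """
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # low-pass approximation
    lh = (a + b - c - d) / 4.0   # horizontal details
    hl = (a - b + c - d) / 4.0   # vertical details
    hh = (a - b - c + d) / 4.0   # diagonal details
    return ll, lh, hl, hh

def subband_stats(img):
    """Summary statistics of the detail sub-bands, usable as style features."""
    _, lh, hl, hh = haar_2d(img)
    feats = []
    for band in (lh, hl, hh):
        feats.extend([band.mean(), band.std(), np.abs(band).mean()])
    return np.array(feats)

# Demo on a synthetic 128x128 "image".
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128))
features = subband_stats(img)
print(features.shape)  # (9,)
```

A real system would apply such a decomposition per RGB channel and over several levels, accumulating a much richer feature vector; the nine statistics here only mark the shape of the idea.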

After the wavelet-decomposition parameters are extracted, the System forms a specific function of these parameters for the given image. This function served as the input during training of a deep-learning neural network for binary image identification; the identification criterion is whether the image was CREATED BY ARTIFICIAL INTELLIGENCE or NOT. The network was trained on several hundred thousand different images, and the dataset continues to be augmented for further training (the neural-network model evolves and is modified). The resulting model is applied to identify images generated by artificial intelligence.
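The training step above (feature vector in, binary AI/not-AI decision out) can be sketched with a minimal stand-in classifier. A logistic-regression model is used here purely for illustration in place of the System's deep network; the synthetic data and all parameter choices are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Train a logistic-regression classifier by full-batch gradient descent.

    X: (n_samples, n_features) wavelet-derived feature vectors
    y: (n_samples,) labels, 1 = AI-generated, 0 = natural
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # predicted P(AI-generated)
        grad_w = X.T @ (p - y) / len(y)
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict_proba(X, w, b):
    return sigmoid(X @ w + b)

# Synthetic demo: two well-separated feature clusters stand in for the
# "natural" and "AI-generated" classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (50, 9)), rng.normal(1, 0.5, (50, 9))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = train_logreg(X, y)
acc = ((predict_proba(X, w, b) > 0.5) == y).mean()
```

The predicted probability also doubles as a natural confidence score, mirroring the manual's point that the System reports a per-image error probability alongside the binary decision.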


2. WORKING WITH THE SYSTEM

Operation of the system is initiated on the website via the TERESIUS_AI option. The user is presented with a file-selection dialog for analysis (see Figure 1).


Fig 1. File Selection Window for Analysis

After selecting a file, the user chooses the load option. When the analysis completes, a graphical window opens displaying the results (see Figure 2).


Fig 2. Identification Results

The error probability presented on the graph after the calculations is a variable quantity: it depends on the AI application that generated the image, on the image's resolution and nature, and on various other factors.

When the same image is reanalyzed, this probability may differ slightly. The difference stems from the analysis technology: templates of AI-created images for specific applications are occasionally used, and they are selected at random on each run. This randomness does not significantly affect the classification result.


3. GRAPHS OF TYPE I AND TYPE II ERRORS

Figure 3 presents the Type I and Type II error graphs reflecting the system's current performance. These graphs were obtained by testing the system on a large volume of statistical data.


Fig 3. Graphs of Type I and Type II Errors

The x-axis represents the value P, the probability of error calculated for a specific image by the neural-network model. The Type I and Type II error values on the graphs are derived from trials of the system on independent test data comprising a large volume of photographic images.
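The empirical error rates behind such graphs can be computed from labeled test results. The sketch below assumes a particular convention (the null hypothesis is "the image is natural", so Type I = a natural image flagged as AI-generated, Type II = an AI image passed as natural); the manual does not state which convention the System uses.

```python
import numpy as np

def error_rates(y_true, y_pred):
    """Empirical Type I and Type II error rates on a labeled test set.

    Convention (an assumption): null hypothesis = "the image is natural".
    Type I  = false positive: natural image flagged as AI-generated.
    Type II = false negative: AI-generated image passed as natural.
    y_true, y_pred: arrays of 0 (natural) / 1 (AI-generated).
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    natural = y_true == 0
    ai = y_true == 1
    type1 = (y_pred[natural] == 1).mean() if natural.any() else 0.0
    type2 = (y_pred[ai] == 0).mean() if ai.any() else 0.0
    return type1, type2

# One of three natural images is flagged; one of two AI images is missed.
t1, t2 = error_rates([0, 0, 0, 1, 1], [0, 1, 0, 1, 0])
# t1 = 1/3, t2 = 1/2
```

Plotting these two rates against the model's reported error probability P, bucketed over a large independent test set, yields curves of the kind shown in Figure 3.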