
July 13, 2021.

# Preface

Having recently left my company after a long tenure, I am going to research some rendering techniques and post them on GitHub. Writing these pages should help me refresh some old memories.

So… I should say up front that I didn't want to tackle anything too difficult. It seemed I would burn out quickly if I dealt with hard topics from the start. 😢 Back to the main text: I will try to explain each effect section as simply as possible.

Since I have focused on team management for several years, my ability to explain technical details has frankly declined, so I will treat this as a way of reminding myself.

Most of the explanations are aimed not at programmers but at artists interested in shading.

Rather than relying entirely on rendering programmers or technical artists, an artist who understands the implementation side can organize their thoughts before communicating with them.

# What you can learn from this content

• You can understand a simple Toon Shader processing method.
• You will get to know the roughness term of the PBR lighting model.
• You can understand the use of NdotL.
• You can see what a vertex attribute is.
• You will also learn the spatial transformation process.
• You can understand a little of the very common HLSL shading syntax and structure.
• You can use Desmos.

# Basic preparation

As an example, I used a Guilty Gear Xrd character model obtained from the Internet. As you can guess, its normals were edited in a DCC tool; we are not going to use the normals that Unity computes itself. There are some things to check in the mesh's inspector. Whenever possible, I'll use Import .

Since we will not be building our shader with multi-pass shading, we will need two materials.

1. OutlineMat.mat for outline rendering.

Add these two materials to the Assets directory. When you are ready, register both materials on one mesh as shown in the picture below.

The order of the materials doesn't matter, as the shader applied to OutlineMat renders with Cull Front.

Applying two materials like this is conceptually equivalent to a multi-pass implementation in one shader — the same mesh entity is rendered twice either way. Personally, I prefer this method over true multi-pass when shading characters or other effects.

Let’s get some good information from TA Jongpil Jeong’s very friendly URP shader course.

# Creating the outline rendering shader

Simply put, there are three major outline rendering techniques.

1. Offset each vertex of the mesh along its normal vector (the direction the normal points) and fill the shell with a flat color.
2. Apply the rim-light technique.
3. Use post-processing (edge detection from depth/normal information plus a Sobel filter).


Those are the broad categories; once you know these approaches exist, the rest follows. I will implement only method 1.

I'm going to create something like the result above. Auxiliary theory along the way will be covered through external links — the Internet is full of good resources.

### Implementation

``````Shader "LightSpaceToon2/Outline LightSpace"
{
    Properties
    {
        [Space(8)]
        [Enum(UnityEngine.Rendering.CompareFunction)] _ZTest ("ZTest", Int) = 4
        [Enum(UnityEngine.Rendering.CullMode)] _Cull ("Culling", Float) = 1

        _Color ("Color", Color) = (0,0,0,1)
        _Border ("Width", Float) = 3
        [Toggle(_COMPENSATESCALE)]
        _CompensateScale      ("     Compensate Scale", Float) = 0
        [Toggle(_OUTLINEINSCREENSPACE)]
        _OutlineInScreenSpace ("     Calculate width in Screen Space", Float) = 0
        _OutlineZFallBack     ("     Calculate width Z offset", Range(-20, 0)) = 0
    }
    SubShader
    {
        Tags
        {
            "RenderPipeline" = "UniversalPipeline"
            "RenderType" = "Opaque"
            "Queue" = "Geometry+1"
        }
        Pass
        {
            Name "StandardUnlit"
            Tags { "LightMode" = "UniversalForward" }

            Blend SrcAlpha OneMinusSrcAlpha
            Cull [_Cull]
            ZTest [_ZTest]
            // Make sure we do not get overwritten
            ZWrite On

            HLSLPROGRAM
            // Required to compile gles 2.0 with standard srp library
            #pragma prefer_hlslcc gles
            #pragma exclude_renderers d3d11_9x
            #pragma target 2.0

            // -------------------------------------
            // Material keywords (needed so the Toggle properties actually work)
            #pragma shader_feature_local _COMPENSATESCALE
            #pragma shader_feature_local _OUTLINEINSCREENSPACE

            // -------------------------------------
            // Unity defined keywords
            #pragma multi_compile_fog

            //--------------------------------------
            // GPU Instancing
            #pragma multi_compile_instancing
            // #pragma multi_compile _ DOTS_INSTANCING_ON // needs shader target 4.5

            #pragma vertex vert
            #pragma fragment frag

            // Lighting include is needed because of GI (it also provides _MainLightPosition)
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"

            CBUFFER_START(UnityPerMaterial)
            half4 _Color;
            half _Border;
            half _OutlineZFallBack;
            CBUFFER_END

            struct VertexInput
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct VertexOutput
            {
                float4 position : SV_POSITION;
                half fogCoord : TEXCOORD0;

                UNITY_VERTEX_INPUT_INSTANCE_ID
                UNITY_VERTEX_OUTPUT_STEREO
            };

            VertexOutput vert (VertexInput v)
            {
                VertexOutput o = (VertexOutput)0;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_TRANSFER_INSTANCE_ID(v, o);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);

                half ndotlHalf = dot(v.normal, _MainLightPosition.xyz) * 0.5 + 0.5;

                //  Extrude
                #if !defined(_OUTLINEINSCREENSPACE)
                    #if defined(_COMPENSATESCALE)
                        float3 scale;
                        scale.x = length(float3(UNITY_MATRIX_M[0].x, UNITY_MATRIX_M[1].x, UNITY_MATRIX_M[2].x));
                        scale.y = length(float3(UNITY_MATRIX_M[0].y, UNITY_MATRIX_M[1].y, UNITY_MATRIX_M[2].y));
                        scale.z = length(float3(UNITY_MATRIX_M[0].z, UNITY_MATRIX_M[1].z, UNITY_MATRIX_M[2].z));
                    #endif
                    v.vertex.xyz += v.normal * 0.001 * (_Border * ndotlHalf)
                    #if defined(_COMPENSATESCALE)
                        / scale
                    #endif
                    ;
                #endif

                o.position = TransformObjectToHClip(v.vertex.xyz);
                o.fogCoord = ComputeFogFactor(o.position.z);

                //  Extrude
                #if defined(_OUTLINEINSCREENSPACE)
                    if (_Border > 0.0h)
                    {
                        float3 normal = mul(UNITY_MATRIX_MVP, float4(v.normal, 0)).xyz; // to clip space
                        float2 offset = normalize(normal.xy);
                        float2 ndc = _ScreenParams.xy * 0.5;
                        o.position.xy += ((offset * (_Border * ndotlHalf)) / ndc * o.position.w);
                    }
                #endif

                o.position.z += _OutlineZFallBack * 0.0001;
                return o;
            }

            half4 frag (VertexOutput input) : SV_Target
            {
                UNITY_SETUP_INSTANCE_ID(input);
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
                half4 color = _Color;
                color.rgb = MixFog(color.rgb, input.fogCoord);
                return color;
            }
            ENDHLSL
        }
    }
}``````

### Create an outline rendering mask

Concept: nothing special to say here. Usually, outline thickness is painted into one of the vertex color channels so that the line can be made thicker, thinner, or not rendered at all.

``````struct Attributes // appdata
{
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 color  : COLOR; // Vertex color RGBA attribute from the mesh.
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct Varyings // v2f
{
    float4 position  : SV_POSITION;
    float4 color     : COLOR; // Vertex color RGBA data carried on to the next stage.
    half fogCoord    : TEXCOORD0;
    half4 debugColor : TEXCOORD1;
    half3 dotBlend   : TEXCOORD2;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};``````

Let's review the code briefly.

``````struct Attributes // appdata
{
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 color  : COLOR; // Vertex color RGBA attribute from the mesh.
    UNITY_VERTEX_INPUT_INSTANCE_ID
};``````

struct declares a structure. Attributes here means the information passed from the modeled mesh into the shader.

It's like this.

In Unity3D, we create a structure like this to receive the mesh data — vertices, normals, UVs, and so on.

When talking to programmers, "vertex attributes" is the industry-standard term, so using it means there is no problem communicating with each other — no matter the engine.

If you look at Attributes, you'll see a color member. What is a mesh's color? Think about it — it's the vertex color we are already familiar with. You need to put the vertex color attribute into the structure so that it can be delivered to the vertex stage. Easy, right? Declaring the structure defines which attributes go into the Attributes package, and that package is what actually gets handed to the shader stage.

For a brief explanation, please take a look at the picture below.

1. Let's see how the vertex shader and pixel shader go through the process of drawing an image to the screen. Assume there is a rectangular mesh as shown in the picture. To be drawn as an image on screen, it must first pass through the vertex-attribute processing unit, that is, the vertex shader. (You could, for example, add a ripple effect to the vertices using sin() in the vertex shader.) To make the concept easier, think of each stage as a processing unit in a factory: the vertex attributes are packaged into a packet (a bundle) in the vertex shader stage. New position values or other information are packetized at this point, and the position of each vertex is always transmitted as a required attribute.
2. The packetized data from the vertex-output stage goes through the rasterizer stage. Simply put, you can think of the rasterizer stage as a pixelation stage. More precisely, image information is a two-dimensional array of pixels, and an image is expressed by combining these dots at regular intervals. In other words, each line of the image is a run of consecutive pixels, and processing these runs is what the rasterizer does. If a triangle is drawn as shown in the figure above, the rasterizer collects the three (XYZ) vertex positions to form the triangle, then finds the pixels that fall inside it.
3. The rasterizer output is then sent to the fragment stage, where the pixel shader finally performs the calculation that determines the final color.
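The "find the pixels that fit inside" step can be sketched numerically. This is a plain-Python illustration (not Unity code) of what the rasterizer does with one triangle, using edge functions to test each pixel center; the function names are my own:

```python
# Sketch of the rasterizer stage: given a triangle's three projected 2D
# vertices, find which pixel centers fall inside it using edge functions.

def edge(a, b, p):
    # Signed area term; its sign tells which side of edge a->b point p lies on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height):
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside (or on) all three edges
                covered.append((x, y))
    return covered

pixels = rasterize((0.0, 0.0), (8.0, 0.0), (0.0, 8.0), 8, 8)
```

The GPU does this massively in parallel and also interpolates the packetized vertex attributes across the covered pixels, which is how the fragment stage receives per-pixel values.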

It is also worth referring to the PPT that I have prepared separately.

What you create at this point is the struct Varyings structure.

``````struct Varyings // v2f
{
    float4 position  : SV_POSITION;
    float4 color     : COLOR; // Vertex color RGBA data carried on to the next stage.
    half fogCoord    : TEXCOORD0;
    half4 debugColor : TEXCOORD1;
    half3 dotBlend   : TEXCOORD2;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};``````

Declaring a struct means you can use it as a type. (A bit difficult, isn't it? At this stage it's easier to just memorize it.) It means the vertex stage can return a Varyings value. So when creating the vertex shader stage, declare it with the Varyings structure type as shown below, and pass it an Attributes-typed argument named input. Let's interpret Varyings vert (Attributes input)…

It can be read as: an input of type Attributes is passed to the vert function, which returns a Varyings.

``````Varyings vert (Attributes input)
{
    Varyings o = (Varyings)0;
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
    float ndotlLine = dot(input.normal, _MainLightPosition.xyz);

    half vertexColorMask = input.color.a; // Take the A channel from the color received in the attributes.
    input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin, _BorderMax, 1);
    input.vertex.xyz *= vertexColorMask; // Multiply by the vertex color mask.
    o.position = TransformObjectToHClip(input.vertex.xyz);
    o.position.z += _OutlineZSmooth * 0.0001;
    return o;
}``````

Add a variable named half vertexColorMask and assign input.color.a to it. When you multiply by it with input.vertex.xyz *= vertexColorMask, the 0-to-1 value stored in vertexColorMask scales the outline thickness, so any part painted with a vertex color of 0 ends up with an outline thickness of 0, right?
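The masked-width idea can be checked with a quick numeric sketch. This is plain Python, not HLSL; the names mirror the shader (border_min/border_max for _BorderMin/_BorderMax, mask for input.color.a), using the masked-lerp form that appears a little further below:

```python
# Numeric sketch of how the vertex-color alpha mask scales the outline width.

def lerp(a, b, t):
    return a + (b - a) * t

def outline_offset_length(border_min, border_max, mask, weight):
    # Mirrors: input.normal * 0.001 * lerp(_BorderMin * mask, _BorderMax, weight)
    return 0.001 * lerp(border_min * mask, border_max, weight)

# With weight 0 the width is driven entirely by the masked minimum:
thin  = outline_offset_length(border_min=3.0, border_max=8.0, mask=0.0, weight=0.0)
thick = outline_offset_length(border_min=3.0, border_max=8.0, mask=1.0, weight=0.0)
```

A vertex painted black in the alpha channel (mask = 0) produces a zero offset, so no visible line there.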

Grasp the characteristics of the outer contour line thickness:

• Vertex color used as the line variable: the outer contour width is recorded in the A channel of the vertex color. The closer to white, the thicker the line; the closer to black, the thinner.
``````input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin * vertexColorMask , _BorderMax , 1);
``````

In the above format, you can multiply _BorderMin by vertexColorMask or multiply _BorderMax by vertexColorMask according to your purpose.

### Concept

When drawing with lines — in cartoons, plaster drawing, or line drawing with various instruments — the lines on the parts receiving light are drawn thin or omitted, and conversely the lines on the shadowed sides are drawn darker or thicker, producing a sense of volume.
Here the same idea is applied so that the line treatment follows the lighting direction.

The reference image above is from the book How to Draw, sold on Amazon.

Here are some cartoon character drawing references that are much easier to understand! Anyway, this is how it is expressed.

### Implementation

Let's take a look at which part of the code above relates to the light-space outline. This is what the implementation will look like.

Naturally, the outline is being processed in the vertex stage. Let’s look at the code below first.

``````Varyings vert (Attributes input)
{
    Varyings o = (Varyings)0;
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
    float ndotlLine = dot(input.normal, _MainLightPosition.xyz);
    o.dotBlend = ndotlLine;
    half vertexColorMask = input.color.a; // Take the A channel from the color received in the attributes.
    input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin * vertexColorMask, _BorderMax, ndotlLine);
    o.position = TransformObjectToHClip(input.vertex.xyz);
    o.position.z += _OutlineZSmooth * 0.0001;
    return o;
}``````

If you look at the code, it still contains some junk left over from experimenting. More often than you might think, you will use the NdotL operation to create a mask or weight value.

Don't think of the image above as a 3D rendering; think of it as a Quick Mask in Photoshop. Weights closer to white approach 1, and weights closer to black converge to 0. If you feed such a value into the blend weight of the lerp function (a linear interpolation), lerp(A, B, blendWeight) returns a mix of A and B according to the weight. Interpreting the figure: the result gets closer to A toward the left of the circle and closer to B toward the right, because the NdotL value is used as the blend weight. Let's look at the code again.

I think the most important part of the code is the float ndotlLine line.

``float ndotlLine = dot(input.normal , _MainLightPosition);``

Outline thickness consists of two values, a minimum and a maximum, and we mix these two values using ndotlLine as the weight.

``input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin , _BorderMax , ndotlLine);``

The vertex position is offset in the normal direction, with linear interpolation between _BorderMin and _BorderMax driven by the ndotlLine weight. The reason _BorderMin and _BorderMax are kept separate is to let the artist flexibly tune the thin and thick sides when implementing line thickness that changes with distance.
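The width blend can be sketched numerically. This is plain Python, not HLSL; `outline_width` is a made-up helper that mirrors the lerp in the shader, and note what happens when the raw dot product goes negative:

```python
# Sketch of the light-space width blend: NdotL picks between _BorderMin
# and _BorderMax. The raw dot product ranges -1..1, so lerp extrapolates
# below the minimum on the unlit side.

def lerp(a, b, t):
    return a + (b - a) * t

def outline_width(normal, light_dir, border_min, border_max):
    # ndotlLine = dot(input.normal, _MainLightPosition); used raw as the weight
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return lerp(border_min, border_max, ndotl)

lit   = outline_width((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), 1.0, 4.0)  # facing the light
unlit = outline_width((0.0, -1.0, 0.0), (0.0, 1.0, 0.0), 1.0, 4.0) # facing away
```

The unlit side extrapolates to a negative width, which is one reason the clip-space version later remaps the dot product with * 0.5 + 0.5.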

In this case, you can use input.normal directly — that is, the object-space normal.
There is no need to transform it to world space.

# Working in clip space

The idea is to transform the normal vector into clip space and apply the offset to the clip-space vertex position. This lets you counteract object scaling (as long as you normalize the normal after transformation), since the offset no longer depends on the model transformation into world space. The order matters: the normal must first be transformed to world space, because we need world space before converting to view/projection space.

Full code:

``````Varyings vert (Attributes input)
{
    Varyings o = (Varyings)0;
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
    float ndotlLine = dot(input.normal, _MainLightPosition.xyz) * 0.5 + 0.5;

    // Generate clip-space normals
    o.normalWS = TransformObjectToWorldNormal(input.normal).xyz;
    half3 normalVS = TransformWorldToViewDir(o.normalWS, true);
    float2 clipNormals = TransformWorldToHClipDir(normalVS, true).xy;

    o.dotBlend = ndotlLine;
    half vertexColorMask = input.color.a; // Take the A channel from the color received in the attributes.
    o.position = TransformObjectToHClip(input.vertex.xyz);

    half2 offset = ((_BorderMax * vertexColorMask) * o.position.w) / (_ScreenParams.xy / 2.0);
    offset *= o.dotBlend;
    o.position.xy += clipNormals.xy * offset * 5;
    o.position.z += _OutlineZSmooth * 0.0001;
    return o;
}``````

### Vertex matrix transformation

When the attributes of the mesh are packaged and entered into the vertex stage, spatial transformation must be performed right before the rasterizer stage.

Vertex Transformation – OpenGL Wiki (khronos.org)

There is a very well-organized blog post on this, so I linked it.

Incidentally, when the vertex attributes leave the vertex stage, they are in clip space; after the perspective divide, they end up in NDC, Normalized Device Coordinates.

In interviews the term NDC sometimes comes up, so it helps to understand that it is the normalized device space derived from clip space.
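The perspective divide that takes a clip-space position to NDC is just a division by the w component. A plain-Python sketch (the function name is my own):

```python
# Sketch of the perspective divide: clip space -> normalized device coordinates.

def clip_to_ndc(clip):
    x, y, z, w = clip
    return (x / w, y / w, z / w)

ndc = clip_to_ndc((2.0, -1.0, 5.0, 10.0))
```

The hardware performs this divide after the vertex stage, which is why offsets added in clip space get scaled by 1/w on screen.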

### A note on world-space transformation.

Use the built-in function:

Let’s take a look at SpaceTransforms.hlsl.

``float3 normalWS = TransformObjectToWorldNormal(input.normalOS);``

Internally, the position counterpart TransformObjectToWorld looks like this:

``````float3 TransformObjectToWorld(float3 positionOS)
{
#if defined(SHADER_STAGE_RAY_TRACING)
    return mul(ObjectToWorld3x4(), float4(positionOS, 1.0)).xyz;
#else
    return mul(GetObjectToWorldMatrix(), float4(positionOS, 1.0)).xyz;
#endif
}``````

TransformObjectToWorld

In the code above, the SHADER_STAGE_RAY_TRACING branch is used when ray tracing is enabled; the path normally taken by `TransformObjectToWorld()` is `mul(GetObjectToWorldMatrix(), float4(positionOS, 1.0)).xyz`.

``````// Generate clip-space normals
o.normalWS = TransformObjectToWorldNormal(input.normal).xyz;
half3 normalVS = TransformWorldToViewDir(o.normalWS, true);
float2 clipNormals = TransformWorldToHClipDir(normalVS, true).xy;``````


### Maintaining thickness with camera distance

The thickness of the shader's inner and outer outlines is now maintained with camera distance. In practice, it is better to tune this per game within the development team — you need to test whether it is better controlled from a script (component) or inside the shader.

Then let's implement it directly in the code.

Unity – Manual: Built-in shader variables (unity3d.com)

### _ScreenParams.xy

Screen dimensions: x is the render target width in pixels, y is the height in pixels, z is 1.0 + 1.0/width, and w is 1.0 + 1.0/height. (The "linearize the Z-buffer" description belongs to _ZBufferParams, not _ScreenParams.)

``half2 offset = ((_BorderMax * vertexColorMask) * o.position.w) / (_ScreenParams.xy / 2.0);``

What is `o.position.w` here? To understand it in more detail, we need to look at Homogeneous Coordinates.
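Here is the intuition behind multiplying by `o.position.w`, sketched in plain Python (the helper name is my own): the GPU later divides the whole position, offset included, by w, so pre-multiplying the offset by w cancels that divide and keeps the on-screen width constant with depth.

```python
# Why the shader multiplies the offset by o.position.w: the offset's final
# NDC contribution is (border * w / screen_half) / w, independent of depth.

def screen_offset(border, w, screen_half):
    # Mirrors: offset = (_BorderMax * position.w) / (_ScreenParams.xy / 2)
    clip_offset = border * w / screen_half
    return clip_offset / w  # contribution after the hardware perspective divide

near = screen_offset(border=4.0, w=1.0, screen_half=960.0)
far  = screen_offset(border=4.0, w=50.0, screen_half=960.0)
```

Near or far, the outline occupies the same fraction of the screen.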

### Z-Correction.

When rendering outlines, you can correct the lines overlapping inside the object by adding an offset to the Z axis in NDC.

``````o.position = TransformObjectToHClip(input.vertex.xyz); // clip space
o.position.z += _OutlineZSmooth * 0.0001; // Z correction is handled simply here…``````

OUTLINE WIDTH VARIATION BY GRAZING ANGLE ( WIP )

To learn this, we usually start from the Lambert equation, which needs two vectors.

• Normal vector (normal direction).
• Light vector (light direction).
• Dot product.
• You can also use the Step or SmoothStep function.

Find the dot product of the two vectors. Let's check the expression max(0, dot(n, l)) in Desmos.

dot(x, y) is a function that computes the dot product. The max(0, x) function discards values less than or equal to 0.
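The clamped Lambert term can be checked numerically. A plain-Python sketch (the `ndotl` helper is my own) for unit-length vectors:

```python
# Numeric sketch of the Lambert term max(0, dot(n, l)) for normalized vectors.

def ndotl(n, l):
    return max(0.0, sum(a * b for a, b in zip(n, l)))

facing  = ndotl((0.0, 1.0, 0.0), (0.0, 1.0, 0.0))   # normal points at the light
grazing = ndotl((0.0, 1.0, 0.0), (1.0, 0.0, 0.0))   # 90 degrees to the light
behind  = ndotl((0.0, 1.0, 0.0), (0.0, -1.0, 0.0))  # facing away, clamped to 0
```

The value is the cosine of the angle between the two vectors, clamped at zero — exactly the falloff curve you can plot in Desmos.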

Want to know more about NdotL and the Lambert BRDF?

GLSL vertex-by-vertex lighting – Programmer Sought

Assume that vector n and vector l are both normalized to length 1. What is vector normalization? If you are curious, see the link below.

Vector magnitude and normalization (understanding the concept) | Advanced JS: Natural Simulations | Khan Academy

This topic is not meant to be mathematically exhaustive, so you can skip it.

diffuse lambert dot (desmos.com)

Personally, I tend to use NdotL in many places, because the value obtained from the dot product is very useful — I use it as mask information.
If you're a quick-witted artist, you probably already see it: looking at the 2D graph curve above, can you read it as a weight?
The lerp weight is a very important input when using lerp in shaders.

With NdotL in hand, you can use the step() function to get a cartoon look. Freelance TA Madumpa's excellent explanation comes first here; he explains in great detail how step is used in cartoon rendering.

Added half-Lambert wrapped lighting.

Distinguish between diffuse and shadow areas.

``````half4 LitPassFragment(Varyings input, half facing : VFACE) : SV_Target
{
    ...
    float3 halfDir = normalize(viewDirWS + mainLight.direction);
    float NdotL = dot(normalDir, mainLight.direction);
    float NdotH = max(0, dot(normalDir, halfDir));
    float halfLambertForToon = NdotL * 0.5 + 0.5;
    half atten = mainLight.shadowAttenuation * mainLight.distanceAttenuation;

    half3 brightCol = mainTex.rgb * halfLambertForToon * _BrightAddjustment;
    ...
}``````

### Added PBR specular reflection

If your overall lighting model is not PBR, this may not make much sense.
A traditional Blinn-Phong specular NDF would suffice, since we don't even cover glossy reflection — but I dared to use the Beckmann NDF for learning.

For a cartoon-style specular, learn what the SmoothStep function is through the link below.

Just take a look at my previous post.

Add a definition for _PI.

``#define _PI 3.14159265359``

This has to do with energy conservation in the specular term.

See below for related topics on energy conservation.

Energy conserved specular blinn-phong. – MY NAME IS JP (leegoonz.blog)

### Beckmann NDF implementation

Graphic Rants: Specular BRDF Reference

``````// Beckmann normal distribution function, used here for specular
half NDFBeckmann(float roughness, float NdotH)
{
    float roughnessSqr = max(1e-4f, roughness * roughness);
    float NdotHSqr = NdotH * NdotH;
    return max(0.000001, (1.0 / (_PI * roughnessSqr * NdotHSqr * NdotHSqr)) * exp((NdotHSqr - 1) / (roughnessSqr * NdotHSqr)));
}``````

First of all, since we are developing toon shading, forget about environment reflection and treat the result above purely as a weight; the white part becomes the glossy highlight. Using the NDFBeckmann value appropriately — for example blending toward a specular color rather than adding the flat white highlight of basic toon shading — can look a bit more natural. (Don't forget to use it as the weight value.)
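To see the shape of that weight, here is a plain-Python transcription of the NDFBeckmann function above (same formula, Python syntax): the lobe peaks where the normal aligns with the half vector (NdotH = 1) and falls off quickly away from it.

```python
import math

# Plain-Python transcription of NDFBeckmann, to sanity-check its shape.

PI = 3.14159265359

def ndf_beckmann(roughness, ndoth):
    r2 = max(1e-4, roughness * roughness)
    nh2 = ndoth * ndoth
    return max(1e-6, (1.0 / (PI * r2 * nh2 * nh2)) * math.exp((nh2 - 1.0) / (r2 * nh2)))

peak = ndf_beckmann(0.3, 1.0)  # highlight center
off  = ndf_beckmann(0.3, 0.8)  # away from the center, much smaller
```

It is this sharp peak-then-falloff curve that the smoothstep in the next section cuts into a hard-edged toon disc.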

For example, adding the specular directly on top of the diffuse ramp shading would give something like this:

### Specular texture creation

How this is implemented, and what is decided here, can be very specific to a given situation or purpose.
Depending on the genre of the game or its constraints, what can change is too diverse to prescribe.
Treat this chapter as a study case for building understanding.

``half SpecularWeight = smoothstep( 0.1 , _SpecEdgeSmoothness, spec );``

Add a variable with an appropriate name and assign smoothstep(0.1, _SpecEdgeSmoothness, spec) to it.

Let's open Substance Designer and simulate in advance. To simulate, we need two rendering passes: a diffuse pass and a pass for SpecularWeight.

In fact, there is no need to capture individual passes of the render buffer; just use a suitable screen capture tool with window capture. In the pixel shader stage, break the code at an appropriate line and output the intermediate value with return half4(...). Throwing that to the screen gives a result close to the desired pass.

The texture size of the two captures was adjusted beforehand by changing the canvas size in Photoshop to a power-of-two format, because Substance Designer only supports power-of-two graph canvases.

Two Bitmaps like this (Diffuse and SpecularWeight captured in Unity)

Specular color map for testing.

I used the Replace Color Range node to generate the specular color map.

In the Blend node, I left the mode set to Copy. Or shall we just use the basic Lerp? Right.

When the two passes are combined, the output should feel something like this. Now let's modify the shader.

### Added specular color map sampler

Code.

``````Properties
{
[Space(5)]
[MainTexture]
_MainTex ("Diffuse Map", 2D) = "white" {}
_SpecColorTex ("Specular Color Map", 2D) = "white" {} // Added Specular Color map descriptor property.
_SSSTex("SSS (RGB)", 2D) = "white" {}
_ILMTex("ILM (RGB)", 2D) = "white" {}
[Space(5)]``````

Confirm that the property UI was added.

``sampler2D _MainTex; sampler2D _SpecColorTex;``

``float4 mainTex = tex2D(_MainTex, input.uv); float4 specClorTex = tex2D(_SpecColorTex, input.uv);``

Connect the specular color texture sampler descriptor in the pixel shader stage.

So now we can lerp mainTex and specColorTex, right? Let’s do it. If you can’t remember about lerp, please see [here] (https://www.notion.so/Stylized-Toon-WIP-82208f2bd96f45968981ae6306908476) once more.

Code implementation (pixel shader stage).

``````half spec = NDFBeckmann(_Roughness, NdotH);
half SpecularWeight = smoothstep(0.1, _SpecEdgeSmoothness, spec);
half3 ToonDiffuse = brightCol * shadowContrast;
half3 mergedDiffuseSpecular = lerp(ToonDiffuse, specClorTex, SpecularWeight * (_SpecularPower * SpecularMask));``````
For the tone comparison, we temporarily disabled adding mainLight.color.rgb and compared; it shows the same result as predicted in Substance Designer.

Enabled directional light intensity 2.0.

Testing variable values after combining the specular. (The preview test above was executed after adding the tone mapping below.)

Now we need to add tone mapping.

# Simple tone mapping applied

When working on a project at a company, this is an area with frequent discussion between the scene team, the background team, and the effects department. When tone mapping is applied in the post-processing stage on the render buffer, the original colors of the background, characters, and effects tend to shift. Please read the two attached documents for details.

Tone Map simple tone mapping. (shadertoy.com)

However, since we are pursuing cartoon-style rendering, I will not use realistic HDR tone mapping, where the color shift is too severe. I'll handle it very simply inside the shader.

Tone mapping implementation.

``````//Simple Tone mapping
finCol.rgb = finCol.rgb /(finCol.rgb + 1);``````

You can also visualize the tone-map curve; try the Shadertoy mentioned above.
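The x / (x + 1) curve above is the classic Reinhard operator applied per channel. A plain-Python sketch of its behavior:

```python
# Reinhard tone mapping: maps [0, inf) into [0, 1), compressing highlights
# while leaving dark values nearly untouched.

def reinhard(x):
    return x / (x + 1.0)

dark   = reinhard(0.1)  # close to the input value
bright = reinhard(4.0)  # HDR value compressed below 1
```

Dark tones pass through almost unchanged while bright HDR values asymptotically approach 1, which is why the cartoon colors keep their identity better than with aggressive filmic curves.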

I checked the overall feeling in the simulator mode. The overall color and tone look complete.

Unity does not render shadows unless the shader contains a pass called ShadowCaster. I will add one more pass to URP Toon.shader.

The Unity engine decides what a pass is rendered for by its tag type: here we use "LightMode" = "ShadowCaster". That's why the pass Name itself doesn't really matter.

``````Pass // Shadow Caster Pass
{
    Tags
    {
        "LightMode" = "ShadowCaster"
    }

    ZWrite On
    ZTest LEqual
    Cull Off

    HLSLPROGRAM
    #pragma exclude_renderers gles gles3 glcore
    #pragma target 2.0

    #pragma multi_compile_instancing

    #pragma vertex ShadowPassVertex
    #pragma fragment ShadowPassFragment

    #include "Packages/com.unity.render-pipelines.universal/Shaders/ShadowCasterPass.hlsl"
    ENDHLSL
}``````

If you add this pass, the shader actually works by calling URP's ShadowCasterPass.hlsl.

The code is below.

``````#ifndef UNIVERSAL_SHADOW_CASTER_PASS_INCLUDED
#define UNIVERSAL_SHADOW_CASTER_PASS_INCLUDED

float3 _LightDirection;
float3 _LightPosition;

struct Attributes
{
    float4 positionOS   : POSITION;
    float3 normalOS     : NORMAL;
    float2 texcoord     : TEXCOORD0;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct Varyings
{
    float2 uv           : TEXCOORD0;
    float4 positionCS   : SV_POSITION;
};

float4 GetShadowPositionHClip(Attributes input)
{
    float3 positionWS = TransformObjectToWorld(input.positionOS.xyz);
    float3 normalWS = TransformObjectToWorldNormal(input.normalOS);

#if _CASTING_PUNCTUAL_LIGHT_SHADOW
    float3 lightDirectionWS = normalize(_LightPosition - positionWS);
#else
    float3 lightDirectionWS = _LightDirection;
#endif

    float4 positionCS = TransformWorldToHClip(ApplyShadowBias(positionWS, normalWS, lightDirectionWS));

#if UNITY_REVERSED_Z
    positionCS.z = min(positionCS.z, UNITY_NEAR_CLIP_VALUE);
#else
    positionCS.z = max(positionCS.z, UNITY_NEAR_CLIP_VALUE);
#endif

    return positionCS;
}

Varyings ShadowPassVertex(Attributes input)
{
    Varyings output;
    UNITY_SETUP_INSTANCE_ID(input);

    output.uv = TRANSFORM_TEX(input.texcoord, _BaseMap);
    output.positionCS = GetShadowPositionHClip(input);
    return output;
}

half4 ShadowPassFragment(Varyings input) : SV_TARGET
{
    Alpha(SampleAlbedoAlpha(input.uv, TEXTURE2D_ARGS(_BaseMap, sampler_BaseMap)).a, _BaseColor, _Cutoff);
    return 0;
}

#endif``````

If you want to modify ShadowCasterPass or extend it the way an engine team might, this is where you would do it. For an artist, a rough understanding of this structural mechanism is enough.

With the ShadowCaster added, you can see the mesh casting shadows onto the floor and other objects. If you can't see the cast shadows, there are two things to check before moving on:

1. Are shadows turned on on the light?
2. Are shadows turned on in the URP rendering settings?

Code implementation (pixel shader stage).

``````#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
float4 shadowCoord = input.shadowCoord;
#elif defined(MAIN_LIGHT_CALCULATE_SHADOWS)
float3 positionWS = input.positionWS.xyz;
float4 shadowCoord = TransformWorldToShadowCoord(positionWS);
#else
float4 shadowCoord = float4(0, 0, 0, 0);
#endif

Light mainLight = GetMainLight(shadowCoord);
half atten = mainLight.shadowAttenuation * mainLight.distanceAttenuation;``````

``half shadowContrast = step(shadowThreshold * _ShadowRecieveThresholdWeight,NdotL * atten);``
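The step() hard threshold in the shadowContrast line can be sketched numerically. Plain Python, not HLSL; `shadow_contrast` is a made-up helper mirroring that line:

```python
# step(edge, x) returns 1 when x >= edge, else 0. With threshold * weight as
# the edge, NdotL * atten is cut into a two-tone shadow mask.

def step(edge, x):
    return 1.0 if x >= edge else 0.0

def shadow_contrast(threshold, weight, ndotl, atten):
    # Mirrors: step(shadowThreshold * _ShadowRecieveThresholdWeight, NdotL * atten)
    return step(threshold * weight, ndotl * atten)

lit      = shadow_contrast(0.5, 1.0, 0.8, 1.0)  # well lit, no shadow attenuation
shadowed = shadow_contrast(0.5, 1.0, 0.8, 0.2)  # attenuated by a cast shadow
```

Raising _ShadowRecieveThresholdWeight pushes more of the surface into the flat shadow tone; lowering it shrinks the shadowed region.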

In general, this is a function that is often unnecessary in cartoon rendering. I will not actually use it for the translucent scattering effect itself; rather, I plan to use the mask obtained from it to transform the shadow color and so on. For example, I would like to use it like lerp(resultColor, resultColor * saturation, thisFunction).
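As a numeric sketch of the step() mask above (in Python rather than HLSL, with hypothetical threshold and input values), and of the lerp()-with-a-mask usage just described:

```python
def step(edge, x):
    # HLSL step(): 0.0 below the edge, 1.0 at or above it
    return 1.0 if x >= edge else 0.0

def lerp(a, b, t):
    return a + (b - a) * t

shadow_threshold = 0.5           # hypothetical threshold * weight product

ndotl, atten = 0.8, 1.0          # facing the light, unshadowed
mask_lit = step(shadow_threshold, ndotl * atten)       # -> 1.0

ndotl, atten = 0.8, 0.2          # facing the light, but inside a cast shadow
mask_shadowed = step(shadow_threshold, ndotl * atten)  # -> 0.0

# Push the shadowed region toward an altered color (toy single-channel case)
result = lerp(1.0, 0.6, 1.0 - mask_shadowed)           # -> 0.6
```

The hard 0/1 output of step() is exactly what gives the toon shadow its crisp edge.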

GDC Vault – Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look


Now we need to create one more function to control the saturation of the color.
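A common way to control saturation is to interpolate between the color and its luminance. Here is a minimal sketch in Python; the function name and the Rec. 709 luma weights are my own choice, not this article's actual implementation (in HLSL it would use dot(color, half3(0.2126, 0.7152, 0.0722))):

```python
def adjust_saturation(rgb, saturation):
    """saturation = 0 gives grayscale, 1 leaves the color unchanged."""
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luma weights
    return tuple(luma + (c - luma) * saturation for c in rgb)

gray = adjust_saturation((1.0, 0.5, 0.0), 0.0)   # fully desaturated
same = adjust_saturation((1.0, 0.5, 0.0), 1.0)   # unchanged
```

Values above 1 over-saturate, which is handy for pushing shadow colors.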

And, as you may already know, there is no Thickness value yet. For the parts that light can pass through, we must create a Thickness value. I will make it inside the shader; I have not paid attention to the optimization phase yet.

The complete code for the implementation of the LitPassVertex function.

``````//--------------------------------------

Varyings LitPassVertex(Attributes input)
{
Varyings output = (Varyings)0;
UNITY_SETUP_INSTANCE_ID(input);
UNITY_TRANSFER_INSTANCE_ID(input, output);

float3 positionWS = TransformObjectToWorld(input.positionOS.xyz);
float3 viewDirWS = GetCameraPositionWS() - positionWS;
output.uv = TRANSFORM_TEX(input.texCoord, _MainTex);
float3 normalWS = TransformObjectToWorldNormal(input.normalOS);
output.normalWS = normalWS;
output.viewDirWS = viewDirWS;

#if defined(REQUIRES_WORLD_SPACE_POS_INTERPOLATOR)
output.positionWS = float4(positionWS , 0);
#endif

#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
output.shadowCoord = TransformWorldToShadowCoord(positionWS);
#endif

output.positionCS = TransformWorldToHClip(positionWS);
output.color = input.color;
return output;
}``````

``````//--------------------------------------

#define _PI 3.14159265359
// Beckmann normal distribution function, used here for the specular term
half NDFBeckmann(float roughness, float NdotH)
{
float roughnessSqr = max(1e-4f, roughness * roughness);
float NdotHSqr = NdotH * NdotH;
return max(0.000001,(1.0 / (_PI * roughnessSqr * NdotHSqr * NdotHSqr))  * exp((NdotHSqr-1)/(roughnessSqr * NdotHSqr)));
}

// Fast back scatter distribution function here for virtual back lighting
half3 LightScatterFunction ( half3 surfaceColor , half3 normalWS ,  half3 viewDir , Light light , half distortion , half power , half scale)
{
half3 lightDir = light.direction;
half3 normal = normalWS;
half3 H = lightDir + (normal * distortion);
float VdotH = pow(saturate(dot(viewDir, -H)), power) * scale;
half3 col = light.color * VdotH;
return col;
}``````
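To get a feel for how _Roughness shapes the highlight, the NDFBeckmann formula above can be evaluated outside the shader. This is a direct Python transcription of the HLSL function:

```python
import math

def ndf_beckmann(roughness, ndoth):
    # Direct transcription of the HLSL NDFBeckmann above
    m2 = max(1e-4, roughness * roughness)
    nh2 = ndoth * ndoth
    return max(1e-6, (1.0 / (math.pi * m2 * nh2 * nh2))
               * math.exp((nh2 - 1.0) / (m2 * nh2)))

# At the center of the highlight (N = H), lower roughness gives a much
# tighter and brighter peak: 1 / (pi * m^2)
peak_smooth = ndf_beckmann(0.2, 1.0)   # ~7.96
peak_rough  = ndf_beckmann(0.8, 1.0)   # ~0.50
```

Plotting this over NdotH in Desmos is a good way to pick the Roughness range.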

The complete code for the LitPassFragment function.

``````half4 LitPassFragment(Varyings input, half facing : VFACE) : SV_Target
{
UNITY_SETUP_INSTANCE_ID(input);

//  Apply lighting
float4 finCol = 1;//initializing

float4 mainTex = tex2D(_MainTex,input.uv);
float4 specClorTex = tex2D(_SpecColorTex, input.uv);
float4 sssTex = tex2D(_SSSTex,input.uv);
float4 ilmTex = tex2D(_ILMTex,input.uv);

float3 normalDir = normalize(input.normalWS);
float3 lightDir = _MainLightPosition.xyz;//normalize(_WorldLightDir.xyz);
float3 viewDirWS = GetWorldSpaceViewDir(input.positionWS.xyz);
float3 halfDir = normalize(viewDirWS + lightDir);

float NdotL = (dot(normalDir,lightDir));
float NdotH = max(0,dot(normalDir,halfDir));
float halfLambertForToon = NdotL * 0.5 + 0.5;
halfLambertForToon = saturate(halfLambertForToon);
float3 positionWS = input.positionWS.xyz;

#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
float4 shadowCoord = input.shadowCoord;
#elif defined(MAIN_LIGHT_CALCULATE_SHADOWS)
float4 shadowCoord = TransformWorldToShadowCoord(positionWS);
#else
float4 shadowCoord = float4(0, 0, 0, 0);
#endif

Light mainLight = GetMainLight(shadowCoord);
half atten = mainLight.shadowAttenuation * mainLight.distanceAttenuation;
half3 brightCol = mainTex.rgb * ( halfLambertForToon) *  _BrightAddjustment;
half3 shadowCol =  mainTex.rgb * sssTex.rgb;
half3 scatterOut = LightScatterFunction(shadowCol.xyz , normalDir.xyz , viewDirWS , mainLight , _Distortion , _Power ,_Scale);

half spec = NDFBeckmann(_Roughness, NdotH);
half SpecularWeight = smoothstep(0.1, _SpecEdgeSmoothness, spec);
half shadowContrast = step(shadowThreshold * _ShadowRecieveThresholdWeight, NdotL * atten);
half3 ToonDiffuse = brightCol * shadowContrast;
half3 mergedDiffuseSpecular = lerp(ToonDiffuse, specClorTex.rgb, SpecularWeight * (_SpecularPower * SpecularMask));

finCol.rgb = mergedDiffuseSpecular + scatterOut; // combine toon diffuse/specular with the back-scatter term
finCol.rgb *= mainLight.color.rgb;
float DetailLine = ilmTex.a;
DetailLine = lerp(DetailLine,_DarkenInnerLine,step(DetailLine,_DarkenInnerLine));
finCol.rgb *= DetailLine;

//Simple Tone mapping
finCol.rgb = finCol.rgb /(finCol.rgb + _Exposure);
return finCol;
}``````
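The "simple tone mapping" line at the end is a Reinhard-style curve, x / (x + _Exposure): black stays black, an input equal to _Exposure maps to 0.5, and bright values are compressed to stay below 1.0. A quick numeric check in Python:

```python
def simple_tonemap(x, exposure):
    # Reinhard-style curve used in LitPassFragment: x / (x + exposure)
    return x / (x + exposure)

lo  = simple_tonemap(0.0, 0.5)    # 0.0  -- blacks stay black
mid = simple_tonemap(0.5, 0.5)    # 0.5  -- input equal to _Exposure maps to 0.5
hi  = simple_tonemap(10.0, 0.5)   # ~0.95 -- highlights are compressed below 1
```

Lowering _Exposure brightens the whole image, since smaller inputs reach 0.5 sooner.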

``````Shader "LightSpaceToon2/ToonBase"
{
Properties
{
[Space(5)]
[MainTexture]
_MainTex ("Diffuse Map", 2D) = "white" {}
_SpecColorTex ("Specular Color Map", 2D) = "white" {}
_SSSTex("SSS (RGB)", 2D) = "white" {}
_ILMTex("ILM (RGB)", 2D) = "white" {}
[Space(5)]
_DarkenInnerLine("Darken Inner Line", Range(0, 1)) = 0.2
[Space(5)]
_Roughness ("Roughness", Range (0.2, 0.85)) = 0.5
_SpecEdgeSmoothness ("Specular Edge Smoothness", Range(0.1, 1)) = 0.5
_SpecularPower("Specular Power", Range(0.01,2)) = 1
[Space(5)]
_Distortion("Distortion",Float) = 0.28
_Power("Power",Float)=1.43
_Scale("Scale",Float)=0.49
[Space(5)]
_Exposure ("Tone Map Exposure", Range(0, 1)) = 0.5
[Space(8)]
[IntRange] _QueueOffset     ("Queue Offset", Range(-50, 50)) = 0

//  Needed by the inspector
[HideInInspector] _Culling  ("Culling", Float) = 0.0
}

SubShader
{
Tags
{
"RenderPipeline" = "UniversalPipeline"
"RenderType" = "Opaque"
"Queue" = "Geometry"
}
LOD 100

Pass
{
Name "ForwardLit"
Tags{"LightMode" = "UniversalForward"}

HLSLPROGRAM
// Required to compile gles 2.0 with standard SRP library
#pragma prefer_hlslcc gles
#pragma exclude_renderers d3d11_9x

//  Shader target needs to be 3.0 due to tex2Dlod in the vertex shader or VFACE
#pragma target 3.0

// -------------------------------------
// Material Keywords

// -------------------------------------
// Universal Pipeline keywords

// -------------------------------------
// Unity defined keywords
#pragma multi_compile_fog

//--------------------------------------
// GPU Instancing
#pragma multi_compile_instancing

sampler2D _MainTex;
sampler2D _SpecColorTex;
sampler2D _SSSTex;
sampler2D _ILMTex;

//  Material Inputs
CBUFFER_START(UnityPerMaterial)
half4  _MainTex_ST;
//  Toon
half    _DarkenInnerLine;
half    _SpecEdgeSmoothness;
half    _Roughness;
half	_SpecularPower;
//  Scatter
half _Distortion;
half _Power;
half _Scale;
//  Tone Map
half _Exposure;
CBUFFER_END

#pragma vertex LitPassVertex
#pragma fragment LitPassFragment

struct Attributes //appdata
{
float4 positionOS : POSITION;
float3 normalOS : NORMAL;
float4 color : COLOR; //Vertex color attribute input.
float2 texCoord : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct Varyings //v2f
{
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
float4 color : COLOR;
float3 normalWS : NORMAL;
float4 vertex : TEXCOORD1;
float3 viewDirWS : TEXCOORD2;
#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
float4 shadowCoord    : TEXCOORD3; // compute shadow coord per-vertex for the main light
#endif
float4 positionWS : TEXCOORD4;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
//--------------------------------------

Varyings LitPassVertex(Attributes input)
{
Varyings output = (Varyings)0;
UNITY_SETUP_INSTANCE_ID(input);
UNITY_TRANSFER_INSTANCE_ID(input, output);

float3 positionWS = TransformObjectToWorld(input.positionOS.xyz);
float3 viewDirWS = GetCameraPositionWS() - positionWS;
output.uv = TRANSFORM_TEX(input.texCoord, _MainTex);
float3 normalWS = TransformObjectToWorldNormal(input.normalOS);
output.normalWS = normalWS;
output.viewDirWS = viewDirWS;

#if defined(REQUIRES_WORLD_SPACE_POS_INTERPOLATOR)
output.positionWS = float4(positionWS , 0);
#endif

#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
output.shadowCoord = TransformWorldToShadowCoord(positionWS);
#endif

output.positionCS = TransformWorldToHClip(positionWS);
output.color = input.color;
return output;
}

//--------------------------------------

#define _PI 3.14159265359
// Beckmann normal distribution function, used here for the specular term
half NDFBeckmann(float roughness, float NdotH)
{
float roughnessSqr = max(1e-4f, roughness * roughness);
float NdotHSqr = NdotH * NdotH;
return max(0.000001,(1.0 / (_PI * roughnessSqr * NdotHSqr * NdotHSqr))  * exp((NdotHSqr-1)/(roughnessSqr * NdotHSqr)));
}

// Fast back scatter distribution function here for virtual back lighting
half3 LightScatterFunction ( half3 surfaceColor , half3 normalWS ,  half3 viewDir , Light light , half distortion , half power , half scale)
{
half3 lightDir = light.direction;
half3 normal = normalWS;
half3 H = lightDir + (normal * distortion);
float VdotH = pow(saturate(dot(viewDir, -H)), power) * scale;
half3 col = light.color * VdotH;
return col;
}

//--------------------------------------

half4 LitPassFragment(Varyings input, half facing : VFACE) : SV_Target
{
UNITY_SETUP_INSTANCE_ID(input);

//  Apply lighting
float4 finCol = 1; //initializing

float4 mainTex = tex2D(_MainTex,input.uv);
float4 specClorTex = tex2D(_SpecColorTex, input.uv);
float4 sssTex = tex2D(_SSSTex,input.uv);
float4 ilmTex = tex2D(_ILMTex,input.uv);

float3 normalDir = normalize(input.normalWS);
float3 viewDirWS = GetWorldSpaceViewDir(input.positionWS.xyz);

float3 positionWS = input.positionWS.xyz;

#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
float4 shadowCoord = input.shadowCoord;
#elif defined(MAIN_LIGHT_CALCULATE_SHADOWS)
float4 shadowCoord = TransformWorldToShadowCoord(positionWS);
#else
float4 shadowCoord = float4(0, 0, 0, 0);
#endif

Light mainLight = GetMainLight(shadowCoord);

float3 halfDir = normalize(viewDirWS + mainLight.direction);
float NdotL = (dot(normalDir,mainLight.direction));
float NdotH = max(0,dot(normalDir,halfDir));
float halfLambertForToon = NdotL * 0.5 + 0.5;
half atten = mainLight.shadowAttenuation * mainLight.distanceAttenuation;

half3 brightCol = mainTex.rgb * ( halfLambertForToon) *  _BrightAddjustment;
half3 shadowCol =  mainTex.rgb * sssTex.rgb;
half3 scatterOut = LightScatterFunction(shadowCol.xyz , normalDir.xyz , viewDirWS , mainLight , _Distortion , _Power ,_Scale);

halfLambertForToon = saturate(halfLambertForToon);
half spec = NDFBeckmann(_Roughness, NdotH);
half SpecularWeight = smoothstep(0.1, _SpecEdgeSmoothness, spec);
half shadowContrast = step(shadowThreshold * _ShadowRecieveThresholdWeight, NdotL * atten);
half3 ToonDiffuse = brightCol * shadowContrast;
half3 mergedDiffuseSpecular = lerp(ToonDiffuse, specClorTex.rgb, SpecularWeight * (_SpecularPower * SpecularMask));

finCol.rgb = mergedDiffuseSpecular + scatterOut; // combine toon diffuse/specular with the back-scatter term
finCol.rgb *= mainLight.color.rgb;
float DetailLine = ilmTex.a;
DetailLine = lerp(DetailLine,_DarkenInnerLine,step(DetailLine,_DarkenInnerLine));
finCol.rgb *= DetailLine;

//Simple Tone mapping
finCol.rgb = finCol.rgb /(finCol.rgb + _Exposure);
return finCol;
}
ENDHLSL
}

Pass
{
Name "ShadowCaster"
Tags
{
"LightMode" = "ShadowCaster"
}

ZWrite On
ZTest LEqual
Cull Off

HLSLPROGRAM
#pragma exclude_renderers gles gles3 glcore
#pragma target 2.0

#pragma multi_compile_instancing

#pragma vertex ShadowPassVertex
#pragma fragment ShadowPassFragment

// The pass body is the ShadowCasterPass code shown earlier in this article.
ENDHLSL
}

}
}``````

### Thickness map production for the translucency mask.

What is a thickness map or a curvature map? Either way, it is information needed for the calculation, and the weight I mentioned earlier is used here as well. You can compute it mathematically, or you can bake it into a texture using ray tracing as a pre-calculation step. In either case, it is used as a weight input. Easy, right? Outside of special mathematical processes, weight information is used in a great many places; once you know how to create it, there is a lot of scope you can utilize, including your own custom shading.

Seriously, it matters. In practice it is essential to use weights extensively, and the broader your mathematical knowledge, the more you can get out of them.
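As a sketch of how a baked thickness value can act as a weight for translucency (the exponential falloff and the sigma constant here are my own illustration, not a formula from this shader):

```python
import math

def transmittance(thickness, sigma=4.0):
    # Thicker geometry lets less light through (Beer-Lambert style falloff);
    # sigma is a hypothetical absorption constant.
    return math.exp(-sigma * thickness)

thin  = transmittance(0.05)   # e.g. an ear: most light passes through (~0.82)
thick = transmittance(1.0)    # e.g. a torso: almost nothing passes (~0.02)
```

The result is just another 0-to-1 mask, which is why the same weight-thinking applies.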

### Added Toon Ramp effect using a 2D LUT.

If you have completed the chapter above, you can also learn about Toon Ramp using a 2D LUT as an appendix. We will copy the shader we created and rename it: copy URP Toon.shader and rename it to URP Toon Lut.shader.

The reason we separate the shaders is simple. In practice, sometimes general-purpose external plug-ins are used, and sometimes Unity's internal functions are used, but I try not to rely on them. Uber-shader approaches can give artists flexibility, but they can also become very complex and confusing for artists, and because of the large number of branches, the memory taken up by shader variants is often a serious problem. For example, there is no need for a fog-related multi_compile in the shader of an LOD0 object that never enters fog. If you accurately understand the scene's visibility, fog depth, and other conditions, you should favor specialization over versatility and optimize memory.

### Implementation.

``````half3 ToonRamp(half halfLambertLightWrapped)
{
half3 rampmap = tex2D(_RampTex , _RampOffset + ((halfLambertLightWrapped.xx - 0.5) * _RampScale) + 0.5).rgb;
return rampmap;
}``````

A function that processes the ramp texture for Toon Ramp shading.
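The coordinate math inside ToonRamp, offset + (x − 0.5) · scale + 0.5, rescales the wrapped lighting term around the center of the ramp texture. A small Python sketch with hypothetical offset and scale values:

```python
def ramp_uv(half_lambert, ramp_offset=0.0, ramp_scale=1.0):
    # Mirrors the coordinate math in ToonRamp: scale around 0.5, then offset
    return ramp_offset + (half_lambert - 0.5) * ramp_scale + 0.5

center = ramp_uv(0.5)                   # 0.5 -- the midpoint is unaffected
steep  = ramp_uv(0.6, ramp_scale=2.0)   # 0.7 -- larger scale hardens the ramp edge
```

A scale above 1 makes the light-to-shadow transition sample a narrower band of the ramp, giving a harder toon edge; the offset slides the terminator.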

Debugging the result of applying the 2D ramp texture with the light-wrapped NdotL as the UV coordinate.

Ramp toon texture map made with a width of 256 pixels and a height of 2 pixels.

You can create it in Substance Designer or simply in Photoshop.

### Experimental result.

I tried to debug shading using halfLambert diffuse and Toon Ramp together.

``````half3 debugShading = rampToon * (halfLambertForToon * halfLambertForToon * 1.25);``````

Shading is not smooth because it uses the vertex normal rather than a normal map.

## Appendix

### Color Correction.

The concept of Color Correction in this topic refers to correcting for the color-space representation of the actual display device. To put it simply, the monitor I am working on supports only the sRGB color space, while iPhones and the latest Android phones use the Display P3 color space. It may be easier to understand if you think of a computer-generated image being printed in different colors. There are reasons why I take color correction so seriously.

When we were developing an MMORPG in 2018, we found a big problem. The color gamut settings of the operators' monitors were all different, and even the gamuts of the PC screen and the smartphone screen differed. We had a hard time deciding which color was the right choice for our expression.

Then I noticed that the lighting artist had bought a new monitor: the latest Dell model supporting AdobeRGB and DCI-P3, while the character artist was using a Dell UltraSharp 27-inch monitor that supports only sRGB.

In this way, different variable values were set depending on whether the wide color space was supported or not. So the method I came up with was to simulate the Display P3 color matrix in sRGB mode. For more information on color vision and related topics, I recommend reading the link below.

DCI-P3 and Display P3, the new color space standard that started with the iPhone 7 : Naver Blog (naver.com)

Color Correction should normally be done in the post-process stage. I will not go deeply into it in this topic, but I will show you how to simply check how the rendering you are working on looks on Display P3.
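To get an intuition for the difference before opening Photoshop, linear sRGB can be converted to linear Display P3 (both D65) with a 3x3 matrix. The rounded coefficients below are an approximation from memory, so treat them as illustrative only:

```python
# Approximate linear sRGB -> linear Display P3 (D65) matrix, rounded
M = [
    [0.8225, 0.1774, 0.0000],
    [0.0332, 0.9669, 0.0000],
    [0.0171, 0.0724, 0.9108],
]

def srgb_to_p3(rgb):
    return tuple(sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3))

white = srgb_to_p3((1.0, 1.0, 1.0))  # stays ~white: each row sums to ~1
red   = srgb_to_p3((1.0, 0.0, 0.0))  # sRGB red sits well inside the P3 gamut
```

The key point: a pure sRGB red is not the most saturated red P3 can show, which is why naively reinterpreting sRGB values as P3 makes reds (and especially greens) look stronger.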

The simplest way is to use Photoshop’s icc profile.

In particular, the differences are larger in photorealistic styles that are more affected by juxtaposed color blending, that is, in photos or more realistic results.

Well, you might want to read something like this.

Let’s compare Display P3 and sRGB color vision. Let’s assume that your monitor only supports sRGB. The 2018 Dell 27-inch Ultra Sharp base model only supports sRGB mode and user mode, and does not support Wide Gamut.

Examples of various wide-gamut images (webkit.org)

Open the image saved in sRGB mode on the web page above in Photoshop and select Assign Profile ⇒ Profile ⇒ image P3.

The left is the original sRGB and the right is the image P3 profile applied. Let’s take a look at colour vision again.

If you look at the difference in colour vision, the green deviation is the largest, followed by red, then blue. If you look at the colour vision deviations and then look at the comparison image above again, it will be easier to understand which shades make the difference.

It is easy to forget, while reading, why you need to look at this. The point is that you may be examining your rendering results in a state where a deviation has already crept into the final output.
In addition, from Android Q (OS 10.0 or later), the Android camp also applies Display P3.

In conclusion, clothes or hair with a calm red tone may look a little stronger in red, and a skin tone you think is appropriate may strangely look more reddish on an iPhone.

Shouldn’t the developer be sensitive to these colour results?

### Monitor information for correct colour calibration.

A wide color gamut is supported from professional-grade models onward, at least the products below.

Monitors and Accessories | Dell India

I am using the monitor below when working from home.

SW321C｜32-inch 4K AdobeRGB USB-C Photographer Monitor | BenQ US

This is because we believe it is correct to develop the rendering workflow in an environment that covers, as much as possible, the full color gamut of the end user's output device.