
Stylized Toon URP update.

Draft date: July 13, 2021.

Preface

Having left the company after a long stint, I am going to research some rendering techniques and post them on GitHub. Writing these pages should help me refresh my old memories.

So… I'll admit that I didn't want to do anything too difficult from the start; it seemed I would tire quickly if I tackled hard problems first. 😢 Back to the main text: I will try to explain each effect as simply as possible.

Since I have focused on team management for several years, my ability to explain technical details has faded somewhat, so I will treat this as a way of reminding myself.

Most of the explanations are not for programmers, but for artists interested in shading.

Rather than relying entirely on rendering programmers or technical artists, artists who understand the implementation side will be able to organize their thoughts before communicating with them.


What we can learn from this content:

  • You can understand a simple Toon Shader processing method.
  • You will get to know the roughness term of the PBR lighting model.
  • You can understand the use of NdotL.
  • You can see what a vertex attribute is.
  • You will also learn the spatial transformation process.
  • You can understand a bit of the very common HLSL shading syntax and structure.
  • You can use Desmos.

Basic preparation

As an example, I used a character from XRD obtained from the Internet. As you can probably guess, the normals have been edited in the DCC tool; we are not going to use the normal information computed directly by Unity. There are some things to check in the mesh's inspector information. Whenever possible, I'll use the imported normals.


Since we will not be building our shader with multi-pass shading, we will need two materials.


  1. OutlineMat.mat for outline rendering.
  2. ToonShadingMat.mat for the actual character shading.

Add these two materials to the Assets directory. When you are ready, register the two materials on one mesh as shown in the picture below.


The order of the materials doesn't matter, as the shader applied to OutlineMat will be rendered with Cull Front.

Applying two materials like this is conceptually equivalent to implementing multi-pass in one shader; we simply render the same mesh entity twice. Personally, I prefer this method over true multi-pass when shading characters or other effects.


Let's pick up some good information from TA Jongpil Jeong's very friendly URP shader course.

Creating the outline rendering shader.

Simply put, there are three major outline processing techniques:

  1. Offset the vertex of the mesh in the normal vector direction (direction pointed by the normal) and fill it with color.
  2. How to apply the rim light technique.
  3. How to use Post Process (edge detection processing using depth normal information + utilization of Sobel filter).


Once you know these are the main options, you can categorize outline techniques accordingly. I'll implement method 1.

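Method 1 boils down to a single vertex operation: push each vertex outward along its normal. A minimal CPU-side sketch of the idea (plain Python, not shader code; the names and the 0.001 scale mirror the shader below):

```python
def extrude_vertex(position, normal, border):
    """Offset a vertex along its (unit) normal to fatten the silhouette.

    Mirrors `v.vertex.xyz += v.normal * 0.001 * _Border` from the shader.
    position, normal: 3-tuples; border: outline width control value.
    """
    scale = 0.001 * border
    return tuple(p + n * scale for p, n in zip(position, normal))

# A vertex whose normal points along +X moves outward along +X.
moved = extrude_vertex((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), 3.0)
print(moved)
```

Rendering this extruded copy with front-face culling leaves only the "shell" visible around the original mesh, which reads as an outline.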

Debug shading: Light-Space outline width variant result.

I'm going to create something like the result above. For the intermediate auxiliary theory I will add external links; the Internet is full of good resources.


Implementation.

URP Toon Outline.shader

Shader "LightSpaceToon2/Outline LightSpace"
{
    Properties
    {
        
        [Space(8)]
        [Enum(UnityEngine.Rendering.CompareFunction)] _ZTest ("ZTest", Int) = 4
        [Enum(UnityEngine.Rendering.CullMode)] _Cull ("Culling", Float) = 1

        [Header(Outline)]
        _Color ("Color", Color) = (0,0,0,1)
        _Border ("Width", Float) = 3
        [Toggle(_COMPENSATESCALE)]
        _CompensateScale            ("     Compensate Scale", Float) = 0
        [Toggle(_OUTLINEINSCREENSPACE)]
        _OutlineInScreenSpace       ("     Calculate width in Screen Space", Float) = 0
        _OutlineZFallBack ("     Calculate width Z offset", Range(-20 , 0)) = 0

    }
    SubShader
    {
        Tags
        {
            "RenderPipeline" = "UniversalPipeline"
            "RenderType"="Opaque"
            "Queue"= "Geometry+1"
        }
        Pass
        {
            Name "StandardUnlit"
            Tags{"LightMode" = "UniversalForward"}

            Blend SrcAlpha OneMinusSrcAlpha
            Cull[_Cull]
            ZTest [_ZTest]
        //  Make sure we do not get overwritten
            ZWrite On

            HLSLPROGRAM
            // Required to compile gles 2.0 with standard srp library
            #pragma prefer_hlslcc gles
            #pragma exclude_renderers d3d11_9x
            #pragma target 2.0

            #pragma shader_feature_local _COMPENSATESCALE
            #pragma shader_feature_local _OUTLINEINSCREENSPACE

            // -------------------------------------
            // Lightweight Pipeline keywords

            // -------------------------------------
            // Unity defined keywords
            #pragma multi_compile_fog

            //--------------------------------------
            // GPU Instancing
            #pragma multi_compile_instancing
            // #pragma multi_compile _ DOTS_INSTANCING_ON // needs shader target 4.5
            
            #pragma vertex vert
            #pragma fragment frag

            // Lighting include is needed because of GI
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            CBUFFER_START(UnityPerMaterial)
                half4 _Color;
                half _Border;
                half _OutlineZFallBack;
            CBUFFER_END

            struct VertexInput
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };


            struct VertexOutput
            {
                float4 position : SV_POSITION;
                half fogCoord : TEXCOORD0;

                UNITY_VERTEX_INPUT_INSTANCE_ID
                UNITY_VERTEX_OUTPUT_STEREO
            };

            VertexOutput vert (VertexInput v)
            {
                VertexOutput o = (VertexOutput)0;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_TRANSFER_INSTANCE_ID(v, o);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);

                half ndotlHalf = dot(v.normal , _MainLightPosition.xyz)*0.5+0.5;

            //  Extrude
                #if !defined(_OUTLINEINSCREENSPACE)
                    #if defined(_COMPENSATESCALE)
                        float3 scale;
                        scale.x = length(float3(UNITY_MATRIX_M[0].x, UNITY_MATRIX_M[1].x, UNITY_MATRIX_M[2].x));
                        scale.y = length(float3(UNITY_MATRIX_M[0].y, UNITY_MATRIX_M[1].y, UNITY_MATRIX_M[2].y));
                        scale.z = length(float3(UNITY_MATRIX_M[0].z, UNITY_MATRIX_M[1].z, UNITY_MATRIX_M[2].z));
                    #endif
                    v.vertex.xyz += v.normal * 0.001 * (_Border * ndotlHalf)
                    #if defined(_COMPENSATESCALE)
                        / scale // the statement continues across the #if, so the offset is divided by the object scale
                    #endif
                    ;
                #endif

                o.position = TransformObjectToHClip(v.vertex.xyz);
                o.fogCoord = ComputeFogFactor(o.position.z);

            //  Extrude
                #if defined(_OUTLINEINSCREENSPACE)
                    if (_Border > 0.0h) {
                        float3 normal = mul(UNITY_MATRIX_MVP, float4(v.normal, 0)).xyz; // to clip space
                        float2 offset = normalize(normal.xy);
                        float2 ndc = _ScreenParams.xy * 0.5;
                        o.position.xy += ((offset * (_Border * ndotlHalf)) / ndc * o.position.w);
                    }
                #endif

                
                o.position.z += _OutlineZFallBack * 0.0001;
                return o;
            }

            half4 frag (VertexOutput input ) : SV_Target
            {
                UNITY_SETUP_INSTANCE_ID(input);
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
                half4 color = _Color;
                color.rgb = MixFog(color.rgb, input.fogCoord);
                return color;
            }
            ENDHLSL
        }
    }
    FallBack "Hidden/InternalErrorShader"
}

The code turned out longer than I expected…

Create an outline rendering mask.

Concept: nothing special here. Usually, outline thickness is painted into one of the vertex color channels so that the line can be drawn thicker, or not rendered at all.


struct Attributes//appdata
{
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 color  : COLOR;// The vertex color RGBA attribute from the mesh.
    UNITY_VERTEX_INPUT_INSTANCE_ID
};


struct Varyings//v2f
{
    float4 position  : SV_POSITION;
    float4 color     : COLOR;// Vertex color RGBA attribute data delivered from the vertex stage.
    half fogCoord    : TEXCOORD0;
    half4 debugColor : TEXCOORD1;
    half3 dotBlend   : TEXCOORD2;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

Let's review the code briefly.

struct Attributes//appdata
{
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 color  : COLOR;// The vertex color RGBA attribute from the mesh.
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct stands for structure. Attributes here means the information passed from the modeled mesh into the shader.


It’s like this.

In Unity3D, we create a structure like this to receive the mesh data: vertices, normals, UVs, and so on.

When talking to programmers, the term vertex attributes is used as an industry standard, so there is no problem communicating with each other, no matter the engine…

If you look at Attributes, there is a color, right? What color does a mesh have? If you think about it, it is the vertex color we are familiar with. You need to put the vertex color attribute into the structure so that it can be delivered to the vertex stage. Easy, right? Creating the structure defines which attributes go into the Attributes package, and that package is what actually gets handed over to the shader stage.


For a brief explanation, please take a look at the picture below.

  1. Let's see how the vertex shader and pixel shader go through the process of drawing an image to the screen. Assume there is a rectangular mesh as shown in the picture. To be drawn as an image on screen, it must first pass through the vertex attribute processing unit, that is, the vertex shader. (You could, for example, add a ripple effect to the vertices using sin() in the vertex shader.) To make this easier to understand conceptually, think of each stage as a processing unit in a factory: the vertex attributes are bundled into a packet in the vertex shader stage, where new position values and other information are packed. Among each vertex's attributes, the position value is the one required attribute that must be passed along.
  2. The packetized vertex output then goes through the rasterizer stage. Simply put, you can think of the rasterizer stage as a pixelation stage. More precisely, image information is composed of pixels in a two-dimensional array, and an image is expressed by combining these dots at regular intervals; processing this is what the rasterizer does. If a triangle is drawn as in the figure above, the rasterizer collects the three (XYZ) vertex positions one by one to form the triangle, and then finds the pixels that fall inside it.
  3. The rasterizer output is then sent to the fragment stage, where the pixel shader finally performs the calculation that determines the final color.

It is also recommended that you refer to the PPT that I have prepared separately.


What you create at this point is the struct Varyings structure.

struct Varyings//v2f
{
    float4 position  : SV_POSITION;
    float4 color     : COLOR;// Vertex color RGBA attribute data delivered from the vertex stage.
    half fogCoord    : TEXCOORD0;
    half4 debugColor : TEXCOORD1;
    half3 dotBlend   : TEXCOORD2;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

Does creating a struct mean you can use it as a type? (A bit difficult, isn't it? It's easier to just memorize it for now.) It means you can declare the vertex stage with the Varyings type. Since a structure defines a type like this, declare the vertex shader stage as returning a Varyings, as shown below, with an Attributes-typed input as its argument list. Let's interpret Varyings vert (Attributes input)…


It can be understood as: an input list of type Attributes is passed to the vert function, which returns a Varyings.

Varyings vert (Attributes input)
{
    Varyings o = (Varyings)0;
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
    float ndotlLine = dot(input.normal , _MainLightPosition.xyz);

    //Color mask
    half vertexColorMask = input.color.a;// Put the A channel of the color received from the attributes here.
    input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin , _BorderMax , 1);
    input.vertex.xyz *= vertexColorMask;// Multiply by the vertex color mask.
    o.position = TransformObjectToHClip(input.vertex.xyz);
    o.position.z += _OutlineZSmooth * 0.0001;
    return o;
}

Add a variable called half vertexColorMask and assign input.color.a to it. When we multiply with input.vertex.xyz *= vertexColorMask, the 0-to-1 value stored in vertexColorMask scales the outline thickness, so wherever the vertex color was painted with 0, the outline thickness will also be 0, right?

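The masking logic can be sketched outside the shader. A plain-Python illustration (names are mine, not from the shader) of how a painted alpha of 0 kills the outline offset:

```python
def outline_offset(normal, border, vertex_color_a):
    """Outline offset along the normal, scaled by the vertex-color alpha mask.

    vertex_color_a in [0, 1]: where the artist painted alpha = 0,
    the offset collapses to zero and no outline thickness is added.
    """
    k = 0.001 * border * vertex_color_a
    return tuple(n * k for n in normal)

print(outline_offset((0.0, 1.0, 0.0), 3.0, 0.0))  # masked out: (0.0, 0.0, 0.0)
print(outline_offset((0.0, 1.0, 0.0), 3.0, 1.0))  # full offset along +Y
```

This is the same effect the shader achieves by multiplying the extruded vertex by vertexColorMask.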

Grasp the characteristics of the outer contour line thickness:

  • Vertex color used for the line width: the outer contour line width is recorded in the A channel of the vertex color. The closer to white, the thicker the line; the closer to black, the thinner.
input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin * vertexColorMask , _BorderMax , 1);

In the format above, you can multiply _BorderMin by vertexColorMask, or multiply _BorderMax by vertexColorMask instead, according to your purpose.

Applied results for the example.

Light-space outline width implementation.

Concept

When drawing a picture with lines, as in cartoons, plaster drawing, or line drawing with various writing instruments, the lines on the parts receiving light are drawn thin or omitted, while the shadowed sides are drawn darker or thicker, which creates a three-dimensional effect of its own.
The goal here is to process the line expression according to the lighting direction.

The reference image above is from a book called How to Draw, sold on Amazon.

Here are some cartoon character drawing references that are much easier to understand! Anyway, this is how it is expressed.

Implementation

Let's take a look at which part of the code above relates to the light-space outline. This is how the implementation will look.

Naturally, the outline is processed in the vertex stage. Let's look at the code below first.

Varyings vert (Attributes input)
{
    Varyings o = (Varyings)0;
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
    float ndotlLine = dot(input.normal , _MainLightPosition.xyz);
    //Light Space Outline mask here.
    o.dotBlend = ndotlLine;
    //Color mask
    half vertexColorMask = input.color.a; //Put the A channel of the color received from the attributes here.
    input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin * vertexColorMask , _BorderMax , ndotlLine);
    o.position = TransformObjectToHClip(input.vertex.xyz);
    o.position.z += _OutlineZSmooth * 0.0001;
    return o;
}

If you look at the code, it still contains some junk left over from experimentation. More often than you might think, you will use the NdotL operation to create a mask or weight value.

There is a more detailed article about NdotL under this topic.

Don't think of the image above as a 3D rendering; think of it as a Quick Mask in Photoshop. It is easy to understand if you consider that values closer to white have a weight of 1 while values closer to black converge to 0. In the end, if you put this result into the blending weight of the lerp function, which is a linear interpolation, lerp(A, B, blendingWeight) returns a mix of A and B according to the weight. Interpreting the figure above: A's contribution approaches 1 toward the left of the circular shape, and B's approaches 1 toward the right, because I put the NdotL value into the blending weight. Let's look at the code below again.

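The lerp(A, B, weight) behavior described above can be written out as a plain-Python equivalent of the HLSL intrinsic:

```python
def lerp(a, b, w):
    """Linear interpolation: returns a when w == 0, b when w == 1,
    and a proportional blend for weights in between."""
    return a + (b - a) * w

print(lerp(10.0, 20.0, 0.0))  # 10.0 — pure A
print(lerp(10.0, 20.0, 1.0))  # 20.0 — pure B
print(lerp(10.0, 20.0, 0.5))  # 15.0 — halfway blend
```

With an NdotL-derived weight, the lit side of the mesh drifts toward B and the shadowed side toward A, which is exactly how the outline width is blended below.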

I think the most important part of the code is the float ndotlLine part.

float ndotlLine = dot(input.normal , _MainLightPosition.xyz);

The outline thickness consists of two values, a minimum and a maximum. We will mix these two values, using the ndotlLine value as the weight.

input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin , _BorderMax , ndotlLine);

The vertex position is offset in the normal direction, with linear interpolation between _BorderMin and _BorderMax driven by the weight ndotlLine. The reason _BorderMin and _BorderMax are separate is to let the artist flexibly set how the thin and thick sides change when implementing line thickness that varies with distance.

In this case, you can use input.normal directly, that is, the object-space normal. There is no need to convert it to world space.

Work in clip space

The idea is to transform the normal vector into clip space and apply the outline offset there, before finalizing the vertex position. This lets you counteract object resizing (as long as you normalize the normal after the transformation), since the offset no longer depends on the model's scale in world space. The transformation order still matters: the normal must go to world space first, then view space, and only then into the projection (clip) space.
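The "normalize after transformation" point can be shown with a trivial sketch (plain Python, illustrative only): a non-uniform model scale stretches the normal, and re-normalizing restores unit length, which is why the outline width stops depending on object scale.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# Normal (1, 0, 0) under a non-uniform object scale of (2, 1, 1):
stretched = (2.0 * 1.0, 1.0 * 0.0, 1.0 * 0.0)
# Re-normalizing after the transform restores a unit-length direction,
# so the outline offset built from it has a scale-independent magnitude.
print(normalize(stretched))
```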

Full code

Varyings vert (Attributes input)
{
    Varyings o = (Varyings)0;
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
    float ndotlLine = dot(input.normal , _MainLightPosition.xyz) * 0.5 + 0.5;

    //Generated ClipNormals
    o.normalWS = TransformObjectToWorldNormal(input.normal).xyz;
    half3 normalVS = TransformWorldToViewDir(o.normalWS, true).xyz;
    float2 clipNormals = TransformWorldToHClipDir(normalVS, true).xy;

    //Light Space Outline mask here.
    o.dotBlend = ndotlLine;
    //Color mask
    half vertexColorMask = input.color.a;// Put the A channel of the color received from the attributes here.
    o.position = TransformObjectToHClip(input.vertex.xyz);

    half2 offset = ((_BorderMax * vertexColorMask) * o.position.w) / (_ScreenParams.xy / 2.0);
    offset *= o.dotBlend;
    o.position.xy += clipNormals.xy * offset * 5;
    o.position.z += _OutlineZSmooth * 0.0001;
    return o;
}

Vertex matrix transformation

When the mesh attributes are packaged and enter the vertex stage, the spatial transformations must be performed right before the rasterizer stage.

Vertex Transformation – OpenGL Wiki (khronos.org)

There is a very well-organized blog, so I linked it.

Incidentally, when the vertex attributes leave the vertex stage, they are in clip space; after the perspective divide, this becomes NDC, normalized device coordinates.
In interviews the term NDC sometimes comes up, so it helps to know that clip space divided by w is the normalized device space.

A note on world-space transformation.

Use the built-in function:

Let’s take a look at SpaceTransforms.hlsl.

float3 normalWS = TransformObjectToWorldNormal(input.normalOS);

For comparison, the position counterpart TransformObjectToWorld is defined internally as:

float3 TransformObjectToWorld(float3 positionOS)
{
#if defined(SHADER_STAGE_RAY_TRACING)
    return mul(ObjectToWorld3x4(), float4(positionOS, 1.0)).xyz;
#else
    return mul(GetObjectToWorldMatrix(), float4(positionOS, 1.0)).xyz;
#endif
}

TransformObjectToWorld

In the code above, the SHADER_STAGE_RAY_TRACING part is the branch used when ray tracing is enabled; the path normally taken in TransformObjectToWorld() is mul(GetObjectToWorldMatrix(), float4(positionOS, 1.0)).xyz.

//Generated ClipNormals
o.normalWS = TransformObjectToWorldNormal(input.normal).xyz;
half3 normalVS = TransformWorldToViewDir(o.normalWS, true).xyz;
float2 clipNormals = TransformWorldToHClipDir(normalVS, true).xy;

clipNormals

Maintaining thickness according to camera distance.

The thickness of the shader's inner and outer outlines is now maintained with camera distance. In practice, it's best for the development team to tune this for the game itself: you need to test whether it's better to control it from a script (component) or in the shader.

Then let’s implement it directly in the code.

Unity – Manual: Built-in shader variables (unity3d.com)

_ScreenParams.xy

The screen (render target) dimensions.

x is the width in pixels, y is the height in pixels, z is 1.0 + 1.0/width, and w is 1.0 + 1.0/height.

half2 offset = ((_BorderMax * vertexColorMask) * o.position.w) / (_ScreenParams.xy / 2.0);

What is o.position.w here? If you would like more detailed knowledge, we need to understand homogeneous coordinates.
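The role of o.position.w can be sketched in plain Python (illustrative only; 1920 is just an example screen width). The rasterizer divides the clip-space position by w, so an offset meant to stay a fixed number of pixels wide must be pre-multiplied by w:

```python
def clip_to_ndc(clip):
    """Perspective divide: clip-space (x, y, z, w) -> NDC (x/w, y/w, z/w)."""
    x, y, z, w = clip
    return (x / w, y / w, z / w)

def pixel_offset_in_clip(border_px, w, screen_width):
    """Clip-space offset that survives the divide-by-w as `border_px` pixels.
    Mirrors `(_Border * o.position.w) / (_ScreenParams.x / 2.0)` in the shader."""
    return border_px * w / (screen_width / 2.0)

# After the perspective divide, a near vertex (w = 1) and a far vertex (w = 10)
# end up with the same on-screen offset:
near = pixel_offset_in_clip(3.0, 1.0, 1920.0) / 1.0
far = pixel_offset_in_clip(3.0, 10.0, 1920.0) / 10.0
print(near, far)
```

This is why the outline width stays constant in screen space regardless of camera distance.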

Z-Correction.

When rendering outlines, you can correct the parts of the line overlapping inside the object by adding a small offset to the Z value in clip space.

o.position = TransformObjectToHClip(input.vertex.xyz); // clip space
o.position.z += _OutlineZSmooth * 0.0001; // Z correction is done simply here…

OUTLINE WIDTH VARIATION BY GRAZING ANGLE ( WIP )

Diffuse Ramp shading

NdotL Ramp Shading

For learning, we usually use the Lambert equation, for which we need two vectors:

  • Normal vector. ( Normal Vector : normal direction )
  • Light vector. ( Light Vector : light direction )
  • Dot product
  • You can also use the Step or SmoothStep function.

Find the dot product between the two vectors. Let's check the expression max(0, dot(n, l)) in Desmos.

dot(x, y) is a function that computes the dot product. The max(0, x) function clamps values less than or equal to 0 to 0.
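The Lambert term max(0, dot(n, l)) can be checked with a plain-Python sketch (illustrative, assumes both vectors are already normalized):

```python
def dot3(a, b):
    """Dot product of two 3-component vectors."""
    return sum(x * y for x, y in zip(a, b))

def lambert(n, l):
    """max(0, dot(n, l)) — the basic Lambert diffuse term."""
    return max(0.0, dot3(n, l))

print(lambert((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))   # 1.0 — facing the light
print(lambert((0.0, 1.0, 0.0), (0.0, -1.0, 0.0)))  # 0.0 — facing away, clamped
```

The clamp is what keeps back-facing surfaces from receiving negative light.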

Want to know more about the NdotL Lambert BRDF?

GLSL vertex-by-vertex lighting – Programmer Sought

Assume that vector n and vector l are both normalized to length 1. What is vector normalization? If you are curious, you can read more at the link below.

Vector magnitude and normalization (understanding the concept) | Advanced JS: Natural Simulations | Khan Academy

This topic is not intended to be exhaustive, mathematically or otherwise, so feel free to skip it.

diffuse lambert dot (desmos.com)

Personally, I tend to use NdotL in many places, because the value produced by the dot product is so useful; I use it as mask information.
If you're a quick-witted artist, you probably already understand. Looking at the 2D graph curve shown above, can you see it as a weight?
The lerp weight can be a very important input when using lerp in shaders.
Don't know what lerp is? Then keep reading this article.

If you add the Step() function on top of NdotL, you can make it look like a cartoon effect. Freelance TA Madumpa's excellent explanation comes first; he explains in great detail the steps used in cartoon rendering.

Debug shading: LightSpace outline width + toon diffuse ramp variant result.

Diffuse Ramp add

Added half-Lambert wrapped lighting.

Distinguish between Diffuse and Shadow areas. 区分漫反射和阴影区域。

half4 LitPassFragment(Varyings input, half facing : VFACE) : SV_Target
{
...
Light mainLight = GetMainLight(shadowCoord);
float3 halfDir = normalize(viewDirWS + mainLight.direction);
float NdotL = (dot(normalDir,mainLight.direction));
float NdotH = max(0,dot(normalDir,halfDir));
float halfLambertForToon = NdotL * 0.5 + 0.5;
half atten = mainLight.shadowAttenuation * mainLight.distanceAttenuation;

half3 brightCol = mainTex.rgb * ( halfLambertForToon) *  _BrightAddjustment;
...
}
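The halfLambertForToon line above wraps the raw dot product from [-1, 1] into [0, 1]. A plain-Python version of that mapping:

```python
def half_lambert(ndotl):
    """Half-Lambert wrap: remap NdotL from [-1, 1] into [0, 1].

    Mirrors `float halfLambertForToon = NdotL * 0.5 + 0.5;` above.
    """
    return ndotl * 0.5 + 0.5

print(half_lambert(-1.0))  # 0.0 — fully shadowed side, no negative light
print(half_lambert(0.0))   # 0.5 — terminator lands in the middle of the ramp
print(half_lambert(1.0))   # 1.0 — fully lit side
```

Because the shadowed hemisphere still receives a gradient instead of clamping to 0, the falloff looks softer, which suits the toon ramp lookup.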

Added PBR specular reflection.

If your overall lighting model is not PBR, this may not make much sense.
A traditional Blinn-Phong specular NDF would suffice, since we are not even covering glossy reflection. However, I dared to use the Beckmann NDF for learning.

For a cartoon-style specular expression, learn what the SmoothStep function is through the link below.

The Book of Shaders

PBR Beckmann Specular Add.

Just take a look at my previous post.

Notes on developing a skin shader, 2019 edition. – MY NAME IS JP (leegoonz.blog)

Add a definition for _PI .

#define _PI 3.14159265359

This has to do with the energy conservation of the Specular method.

See below for related topics on energy conservation.

Energy conserved specular blinn-phong. – MY NAME IS JP (leegoonz.blog)

Beckmann NDF implementation.

Graphic Rants: Specular BRDF Reference

// Beckmann normal distribution function here for Specular
half NDFBeckmann(float roughness, float NdotH)
{
    float roughnessSqr = max(1e-4f, roughness * roughness);
    float NdotHSqr = NdotH * NdotH;
    return max(0.000001, (1.0 / (_PI * roughnessSqr * NdotHSqr * NdotHSqr)) * exp((NdotHSqr - 1) / (roughnessSqr * NdotHSqr)));
}

Result of debugging only the Beckmann NDF.
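For intuition, here is a CPU-side Python port of the NDFBeckmann function above (illustrative only, same constants): the returned value peaks sharply as NdotH approaches 1, which is exactly the white highlight lobe seen in the debug view.

```python
import math

def ndf_beckmann(roughness, ndoth):
    """Python port of the NDFBeckmann HLSL function (same clamps and constants)."""
    PI = 3.14159265359
    roughness_sqr = max(1e-4, roughness * roughness)
    ndoth_sqr = ndoth * ndoth
    return max(1e-6, (1.0 / (PI * roughness_sqr * ndoth_sqr * ndoth_sqr))
               * math.exp((ndoth_sqr - 1.0) / (roughness_sqr * ndoth_sqr)))

# The distribution is largest when N and H align (NdotH -> 1)
# and falls off quickly as the half vector tilts away.
print(ndf_beckmann(0.3, 1.0))
print(ndf_beckmann(0.3, 0.7))
```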

First of all, we are developing toon shading, so forget about environment reflection and consider the result above only as a weight. The white part becomes the glossy highlight. Using the NDFBeckmann function appropriately is one approach: adding a specular color, rather than the plain specular white of toon shading, can look a bit more natural. (Don't forget to use the result as a weight value.)

For example, adding the specular directly to the diffuse ramp shading would give something like this:

finCol += smoothstep( 0.1 , _SpecEdgeSmoothness, spec ); — tested in this form. (Result with tone mapping applied.)

Specular texture creation.

How this is implemented, or what gets decided in this section, can be very personal to a given situation or purpose.
Depending on the genre of the game or other constraints, the range of possible variations is wide.
So treat this chapter as a study case for understanding.

half SpecularWeight = smoothstep( 0.1 , _SpecEdgeSmoothness, spec );

Add a variable with an appropriate name and assign smoothstep( 0.1 , _SpecEdgeSmoothness, spec ) to it.

Let’s open Substance Designer and do a simulation in advance. To simulate, we need two rendering passes: a Diffuse pass and a pass for SpecularWeight.

In fact, there is no need to capture individual passes of the render buffer; just set up a suitable screen-capture tool for window capture and grab the screen. In the pixel shader stage of the shader code, insert an early return half4(...) at the appropriate line and output that intermediate value to the screen — the result will be close to the pass you want.

Diffuse pass (without tone map).
SpecularWeight pass (without tone map).

The texture size of the two captures was changed in advance by resizing the canvas in Photoshop to power-of-two dimensions, because Substance Designer only supports graph canvases in power-of-two sizes.
Two Bitmaps like this (Diffuse and SpecularWeight captured in Unity)

Specular Color map for testing.

Substance Designer node scheme.

I used the Replace Color Range node to generate the specular color map.

In the Blend node, I checked the mode, which was still fixed to Copy. Shall we just use the basic Lerp instead? Right.

When the two passes are combined, the output should feel something like this. Now let's modify the shader.

Added specular color map sampler.

Specular Map

Code.

Properties
{
    [Header(Surface Inputs)]
    [Space(5)]
    [MainTexture]
    _MainTex ("Diffuse Map", 2D) = "white" {}
    _SpecColorTex ("Specular Color Map", 2D) = "white" {} // Added Specular Color map descriptor property.
    _SSSTex("SSS (RGB)", 2D) = "white" {}
    _ILMTex("ILM (RGB)", 2D) = "white" {}
    [Space(5)]

Confirm the addition of the property UI.

sampler2D _MainTex;
sampler2D _SpecColorTex;

Added the sampler2D declarations.

float4 mainTex = tex2D(_MainTex,input.uv);
float4 specClorTex = tex2D(_SpecColorTex, input.uv);

Connect the specular color texture sampler descriptor in the pixel shader stage.

So now we can lerp mainTex and specClorTex, right? Let's do it. If you can't remember how lerp works, please see [here](https://www.notion.so/Stylized-Toon-WIP-82208f2bd96f45968981ae6306908476) once more.

Code implementation. (Pixel shader stage)

half spec = NDFBeckmann(_Roughness, NdotH);
float SpecularMask = ilmTex.b;
half SpecularWeight = smoothstep(0.1, _SpecEdgeSmoothness, spec);
float shadowContrast = step(shadowThreshold * _ShadowRecieveThresholdWeight, NdotL * atten);
half3 ToonDiffuse = brightCol * shadowContrast;
half3 mergedDiffuseSpecular = lerp(ToonDiffuse, specClorTex, SpecularWeight * (_SpecularPower * SpecularMask));

finCol.rgb = lerp(shadowCol,mergedDiffuseSpecular ,shadowContrast);
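The lerps above do all the compositing: `SpecularWeight * (_SpecularPower * SpecularMask)` picks how much of the specular color map replaces the toon diffuse, and `shadowContrast` then selects between the shadow color and that merged result. A small Python sketch with made-up RGB values (all three colors below are hypothetical, purely for illustration):

```python
def lerp(a, b, t):
    """Per-channel linear interpolation, like HLSL lerp on a color."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

toon_diffuse = (0.80, 0.55, 0.45)  # hypothetical lit base color
spec_color   = (1.00, 0.95, 0.85)  # hypothetical specular color map texel
shadow_col   = (0.35, 0.25, 0.30)  # hypothetical shadow color

# Outside the highlight (weight 0) the diffuse survives untouched;
# inside it (weight 1) the specular color map takes over completely.
merged_lo = lerp(toon_diffuse, spec_color, 0.0)
merged_hi = lerp(toon_diffuse, spec_color, 1.0)

# shadowContrast is a hard 0/1 from step(), so the final lerp just selects.
fin_lit    = lerp(shadow_col, merged_hi, 1.0)
fin_shadow = lerp(shadow_col, merged_hi, 0.0)
```

Because the weights are 0-or-1 at the extremes, the two lerps behave like selectors; only in the smoothstep transition band do they actually blend.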
For this comparison, we temporarily disabled the mainLight.color.rgb contribution and compared. It shows the same result as predicted in Substance Designer.

Enabled directional light, intensity 2.0.

Testing variable values after combining specular. (The preview test above was executed after adding the tone mapping described below.)

Now we need to add tone mapping.

Simple tone mapping applied.

When working on a project at a company, this is an area with frequent discussions between the scene team, the background team, and the effects department. When tone mapping is applied, the original colours of the background, characters, and effects tend to shift, because it is processed on the rendering buffer in the post-processing stage. Please read the two attached documents for details.

Tone Map simple tone mapping. (shadertoy.com)

However, since we are pursuing cartoon-style rendering, I will not use realistic HDR tone mapping, where the colour shift is too severe. I'll handle it very simply inside the shader.

Tone mapping implementation.

//Simple Tone mapping
finCol.rgb = finCol.rgb /(finCol.rgb + 1);
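This is the classic Reinhard curve x / (x + 1): it maps any HDR value in [0, ∞) into [0, 1) without ever clipping, while keeping the ordering of brightness values intact. A Python sketch, including the `_Exposure` generalisation used later in the final shader:

```python
def simple_tonemap(x: float, exposure: float = 1.0) -> float:
    """Reinhard-style curve: x / (x + exposure), maps [0, inf) -> [0, 1)."""
    return x / (x + exposure)

mid   = simple_tonemap(1.0)     # 0.5: an HDR value of 1 lands at mid grey
hot   = simple_tonemap(4.0)     # 0.8: bright values compress but never clip
blown = simple_tonemap(1000.0)  # just below 1: extreme bloom still fits in LDR
```

Lowering the exposure value brightens the output for the same input, which is exactly what the `_Exposure` slider does in the finished shader.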

You can also visualize the tone map curve. Try the Shadertoy mentioned above.

Light intensity 2.0, with bloom, no tone mapping.
Light intensity 2.0, with bloom, simple tone mapping.

I checked the overall feeling in the simulator mode. The overall color and tone look complete.

Added cast shadow.

Unity does not render cast shadows for a material unless its shader contains a pass called ShadowCaster. I will add one more pass to URP Toon.shader.

Just add the pass within this structure.

The Unity engine decides what a pass will be used for from its tag type — here, "LightMode" = "ShadowCaster". That's why the pass Name "ShadowCaster" itself doesn't really matter.

You can simply add the ShadowCaster pass inside the URP Toon.shader.

Pass // Shadow Caster Pass
{
    Name "ShadowCaster"
    Tags
    {
        "LightMode" = "ShadowCaster"
    }

    ZWrite On
    ZTest LEqual
    Cull Off

    HLSLPROGRAM
    #pragma exclude_renderers gles gles3 glcore
    #pragma target 2.0

    #pragma multi_compile_instancing
    #pragma multi_compile_vertex _ _CASTING_PUNCTUAL_LIGHT_SHADOW

    #pragma vertex ShadowPassVertex
    #pragma fragment ShadowPassFragment

    #include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/Shaders/ShadowCasterPass.hlsl"
    ENDHLSL
}

Once the pass is added, the shader actually works by calling the code below.

That is, the #included "Packages/com.unity.render-pipelines.universal/Shaders/ShadowCasterPass.hlsl" code is compiled together with it.

The code is below.

#ifndef UNIVERSAL_SHADOW_CASTER_PASS_INCLUDED
#define UNIVERSAL_SHADOW_CASTER_PASS_INCLUDED

#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Shadows.hlsl"

float3 _LightDirection;
float3 _LightPosition;

struct Attributes
{
    float4 positionOS   : POSITION;
    float3 normalOS     : NORMAL;
    float2 texcoord     : TEXCOORD0;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct Varyings
{
    float2 uv           : TEXCOORD0;
    float4 positionCS   : SV_POSITION;
};

float4 GetShadowPositionHClip(Attributes input)
{
    float3 positionWS = TransformObjectToWorld(input.positionOS.xyz);
    float3 normalWS = TransformObjectToWorldNormal(input.normalOS);

#if _CASTING_PUNCTUAL_LIGHT_SHADOW
    float3 lightDirectionWS = normalize(_LightPosition - positionWS);
#else
    float3 lightDirectionWS = _LightDirection;
#endif

    float4 positionCS = TransformWorldToHClip(ApplyShadowBias(positionWS, normalWS, lightDirectionWS));

#if UNITY_REVERSED_Z
    positionCS.z = min(positionCS.z, UNITY_NEAR_CLIP_VALUE);
#else
    positionCS.z = max(positionCS.z, UNITY_NEAR_CLIP_VALUE);
#endif

    return positionCS;
}

Varyings ShadowPassVertex(Attributes input)
{
    Varyings output;
    UNITY_SETUP_INSTANCE_ID(input);

    output.uv = TRANSFORM_TEX(input.texcoord, _BaseMap);
    output.positionCS = GetShadowPositionHClip(input);
    return output;
}

half4 ShadowPassFragment(Varyings input) : SV_TARGET
{
    Alpha(SampleAlbedoAlpha(input.uv, TEXTURE2D_ARGS(_BaseMap, sampler_BaseMap)).a, _BaseColor, _Cutoff);
    return 0;
}

#endif

If you want to modify ShadowCasterPass or add further features to it, as an engine team might, this is where you would do it. For an artist, a rough understanding of this structural mechanism is enough.

Shadow caster added: result.

Added the ShadowCaster. You can see it casting shadows on the floor and other objects. If you can't see the cast shadows, there are two things to check before moving on:

  1. Are shadows turned on for the light?
  2. Are shadows turned on in the URP rendering settings?

Added received shadow.

Well, a normal shadow can be divided into two types: the cast shadow, which an object throws onto other surfaces, and the received shadow, on the surface that receives it. The received shadow on the object itself is usually treated as self-shadow.

Added self-shadow receive.

Code implementation. (Pixel shader stage)

#if defined(MAIN_LIGHT_CALCULATE_SHADOWS)
    float3 positionWS = input.positionWS.xyz;
#endif

#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
    float4 shadowCoord = input.shadowCoord;
#elif defined(MAIN_LIGHT_CALCULATE_SHADOWS)
    float4 shadowCoord = TransformWorldToShadowCoord(positionWS);
#else
    float4 shadowCoord = float4(0, 0, 0, 0);
#endif

Light mainLight = GetMainLight(shadowCoord);
half atten = mainLight.shadowAttenuation * mainLight.distanceAttenuation;

Self-shadow threshold implementation.

half shadowContrast = step(shadowThreshold * _ShadowRecieveThresholdWeight, NdotL * atten);
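HLSL `step(edge, x)` returns 1 when `x >= edge` and 0 otherwise, so this single line merges three things into one hard toon terminator: the painted threshold (from the ILM green channel and vertex color), the lighting term `NdotL`, and the received-shadow attenuation `atten`. A quick Python sketch with illustrative values (both thresholds below are stand-ins):

```python
def step(edge: float, x: float) -> float:
    """HLSL step(): 1.0 when x >= edge, else 0.0."""
    return 1.0 if x >= edge else 0.0

threshold_weight = 0.25  # stand-in for _ShadowRecieveThresholdWeight
shadow_threshold = 0.5   # stand-in for the per-pixel painted threshold

# Facing the light, fully lit by the shadow map -> lit (1.0).
lit = step(shadow_threshold * threshold_weight, 0.6 * 1.0)

# Same surface angle, but inside a received shadow (atten = 0) -> dark (0.0).
shadowed = step(shadow_threshold * threshold_weight, 0.6 * 0.0)
```

Because `atten` multiplies `NdotL` before the step, a received shadow drags the whole term below the threshold and flips the pixel into the toon shadow color with the same hard edge as the painted shadows.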

Translucent function added.

Well, generally this is a function that is often unnecessary in cartoon rendering. I will not actually use it for a true translucent scattering effect; instead, I plan to use the mask it produces to transform the shadow colour and so on — for example, something like lerp(resultColor, resultColor * saturation, thisFunction).

GDC Vault – Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look

If you want to know more about this fast translucency approximation, the GDC presentation above is very helpful.

Result of debugging the function.
Before application.
Result of applying finCol.rgb = lerp(finCol, finCol + (shadowCol * 2), scatterOut); in this form.

Now we need to create one more function to control the saturation of the color.

And, as you may already know, there is no thickness value yet. To describe which parts light can pass through, we need a thickness value. I'll fake one inside the shader — I haven't really paid attention to the optimization phase yet.

The complete code for the implementation of the LitPassVertex function.

//--------------------------------------
//  Vertex shader

Varyings LitPassVertex(Attributes input)
{
    Varyings output = (Varyings)0;
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, output);

    float3 positionWS = TransformObjectToWorld(input.positionOS.xyz);
    float3 viewDirWS = GetCameraPositionWS() - positionWS;
    output.uv = TRANSFORM_TEX(input.texCoord, _MainTex);
    float3 normalWS = TransformObjectToWorldNormal(input.normalOS);
    output.normalWS = normalWS;
    output.viewDirWS = viewDirWS;

    // The fragment shader reads positionWS unconditionally, so always write it.
    output.positionWS = float4(positionWS, 0);

#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
    output.shadowCoord = TransformWorldToShadowCoord(positionWS);
#endif

    output.positionCS = TransformWorldToHClip(positionWS);
    output.color = input.color;
    return output;
}

Extended shader functions, complete code.

//--------------------------------------
//  shader and functions

#define _PI 3.14159265359
// Beckmann normal distribution function here for Specular
half NDFBeckmann(float roughness, float NdotH)
{
    float roughnessSqr = max(1e-4f, roughness * roughness);
    float NdotHSqr = NdotH * NdotH;
    return max(0.000001, (1.0 / (_PI * roughnessSqr * NdotHSqr * NdotHSqr)) * exp((NdotHSqr - 1) / (roughnessSqr * NdotHSqr)));
}

// Fast back scatter distribution function here for virtual back lighting
half3 LightScatterFunction(half3 surfaceColor, half3 normalWS, half3 viewDir, Light light, half distortion, half power, half scale)
{
    half3 lightDir = light.direction;
    half3 normal = normalWS;
    half3 H = lightDir + (normal * distortion);
    float VdotH = pow(saturate(dot(viewDir, -H)), power) * scale;
    half3 col = light.color * VdotH;
    return col;
}
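To see why this approximates back lighting, you can evaluate the same geometry on the CPU. The `-H` trick means the term only fires when the view direction looks *against* the distorted light direction, i.e. when the light sits behind the surface relative to the camera. A Python sketch of the VdotH part (unit vectors, shader default parameter values):

```python
def scatter_weight(view_dir, light_dir, normal,
                   distortion=0.28, power=1.43, scale=0.49):
    """VdotH term of LightScatterFunction, ported for inspection."""
    h = tuple(l + n * distortion for l, n in zip(light_dir, normal))
    vdoth = max(0.0, -sum(v * c for v, c in zip(view_dir, h)))  # saturate(dot(V, -H))
    return (vdoth ** power) * scale

view   = (0.0, 0.0, 1.0)  # surface-to-camera direction
normal = (0.0, 0.0, 1.0)  # surface facing the camera

back_light  = scatter_weight(view, (0.0, 0.0, -1.0), normal)  # light behind the object
front_light = scatter_weight(view, (0.0, 0.0, 1.0), normal)   # light beside the camera
```

With the light behind the object the weight is clearly non-zero, and with the light next to the camera the saturate clamps it to exactly zero — which is what lets it serve as a rim/backlight mask.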

LitPassFragment function, complete implementation code.

half4 LitPassFragment(Varyings input, half facing : VFACE) : SV_Target
{
    UNITY_SETUP_INSTANCE_ID(input);

    //  Apply lighting
    float4 finCol = 1; // initializing

    float4 mainTex = tex2D(_MainTex, input.uv);
    float4 specClorTex = tex2D(_SpecColorTex, input.uv);
    float4 sssTex = tex2D(_SSSTex, input.uv);
    float4 ilmTex = tex2D(_ILMTex, input.uv);

    float shadowThreshold = ilmTex.g;
    shadowThreshold *= input.color.r;
    shadowThreshold = 1 - shadowThreshold + _ShadowShift;

    float3 normalDir = normalize(input.normalWS);
    float3 lightDir = _MainLightPosition.xyz;
    float3 viewDirWS = GetWorldSpaceViewDir(input.positionWS.xyz);
    float3 halfDir = normalize(viewDirWS + lightDir);

    float NdotL = dot(normalDir, lightDir);
    float NdotH = max(0, dot(normalDir, halfDir));
    float halfLambertForToon = NdotL * 0.5 + 0.5;
    halfLambertForToon = saturate(halfLambertForToon);

#if defined(MAIN_LIGHT_CALCULATE_SHADOWS)
    float3 positionWS = input.positionWS.xyz;
#endif

#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
    float4 shadowCoord = input.shadowCoord;
#elif defined(MAIN_LIGHT_CALCULATE_SHADOWS)
    float4 shadowCoord = TransformWorldToShadowCoord(positionWS);
#else
    float4 shadowCoord = float4(0, 0, 0, 0);
#endif

    Light mainLight = GetMainLight(shadowCoord);
    half atten = mainLight.shadowAttenuation * mainLight.distanceAttenuation;
    half3 brightCol = mainTex.rgb * halfLambertForToon * _BrightAddjustment;
    half3 shadowCol = mainTex.rgb * sssTex.rgb;
    half3 scatterOut = LightScatterFunction(shadowCol.xyz, normalDir.xyz, viewDirWS, mainLight, _Distortion, _Power, _Scale);

    half spec = NDFBeckmann(_Roughness, NdotH);
    half SpecularMask = ilmTex.b;
    half SpecularWeight = smoothstep(0.1, _SpecEdgeSmoothness, spec);
    half shadowContrast = step(shadowThreshold * _ShadowRecieveThresholdWeight, NdotL * atten);
    half3 ToonDiffuse = brightCol * shadowContrast;
    half3 mergedDiffuseSpecular = lerp(ToonDiffuse, specClorTex.rgb, SpecularWeight * (_SpecularPower * SpecularMask));

    finCol.rgb = lerp(shadowCol, mergedDiffuseSpecular, shadowContrast);

    finCol.rgb = lerp(finCol.rgb, finCol.rgb + (shadowCol.rgb * shadowCol.rgb), scatterOut.rgb);
    finCol.rgb *= mainLight.color.rgb;
    float DetailLine = ilmTex.a;
    DetailLine = lerp(DetailLine, _DarkenInnerLine, step(DetailLine, _DarkenInnerLine));
    finCol.rgb *= DetailLine;

    //Simple Tone mapping
    finCol.rgb = finCol.rgb / (finCol.rgb + _Exposure);
    return finCol;
}

URP Toon.shader Complete code.

Shader "LightSpaceToon2/ToonBase"
{
    Properties
    {
      [Header(Surface Inputs)]
      [Space(5)]
      [MainTexture]
      _MainTex ("Diffuse Map", 2D) = "white" {}
    	_SpecColorTex ("Specular Color Map", 2D) = "white" {}
			_SSSTex("SSS (RGB)", 2D) = "white" {}
			_ILMTex("ILM (RGB)", 2D) = "white" {}
    	[Space(5)]
    	[Header(Toon Surface Inputs)]
    	_ShadowShift("Shadow Shift", Range(-2,1)) = 1
			_DarkenInnerLine("Darken Inner Line", Range(0, 1)) = 0.2
    	_BrightAddjustment("Bright Addjustment", Range(0.5,2)) = 1.0
    	[Space(5)]
    	[Header(Toon Specular Inputs)]
    	_Roughness ("Roughness", Range (0.2, 0.85)) = 0.5
			_SpecEdgeSmoothness ("Specular Edge Smoothness",Range(0.1,1)) = 0.5
    	_SpecularPower("Specular Power", Range(0.01,2)) = 1
    	[Space(5)]
    	[Header(Scatter Input)]
    	_Distortion("Distortion",Float) = 0.28
    	_Power("Power",Float)=1.43
    	_Scale("Scale",Float)=0.49
    	[Space(5)]
    	[Header(Tone Mapped)]
    	_Exposure ("Tone map Exposure", Range(0 , 1)) = 0.5
    	[Header(Render Queue)]
        [Space(8)]
        [IntRange] _QueueOffset     ("Queue Offset", Range(-50, 50)) = 0
    	[ToggleOff(_RECEIVE_SHADOWS_OFF)] _ReceiveShadowsOff ("Receive Shadows", Float) = 1
    	_ShadowRecieveThresholdWeight ("SelfShadow Threshold", Range (0.001, 2)) = 0.25

    //  Needed by the inspector
        [HideInInspector] _Culling  ("Culling", Float) = 0.0
        [HideInInspector] _AlphaFromMaskMap  ("AlphaFromMaskMap", Float) = 1.0
    }

    SubShader
    {
        Tags
        {
            "RenderPipeline" = "UniversalPipeline"
            "RenderType" = "Opaque"
            "Queue" = "Geometry"
        }
        LOD 100

        Pass // Toon Shading Pass
        {
            Name "ForwardLit"
            Tags{"LightMode" = "UniversalForward"}

            HLSLPROGRAM
            // Required to compile gles 2.0 with standard SRP library
            #pragma prefer_hlslcc gles
            #pragma exclude_renderers d3d11_9x

        //  Shader target needs to be 3.0 due to tex2Dlod in the vertex shader or VFACE
            #pragma target 3.0

            // -------------------------------------
            // Material Keywords
            #pragma shader_feature_local _RECEIVE_SHADOWS_OFF

            // -------------------------------------
            // Universal Pipeline keywords
            #pragma multi_compile _ _MAIN_LIGHT_SHADOWS _MAIN_LIGHT_SHADOWS_CASCADE _MAIN_LIGHT_SHADOWS_SCREEN
            #pragma multi_compile_fragment _ _SHADOWS_SOFT
            
            // -------------------------------------
            // Unity defined keywords
            #pragma multi_compile_fog

            //--------------------------------------
            // GPU Instancing
            #pragma multi_compile_instancing
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
            #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Color.hlsl"
            #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/UnityInstancing.hlsl"
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
            
    sampler2D _MainTex;
    sampler2D _SpecColorTex;
    sampler2D _SSSTex;
    sampler2D _ILMTex;

    //  Material Inputs
    CBUFFER_START(UnityPerMaterial)
        half4  _MainTex_ST;
    //  Toon
        half	_ShadowShift;
        half    _DarkenInnerLine;
        half    _SpecEdgeSmoothness;
        half    _Roughness;
				half	_BrightAddjustment;
				half	_SpecularPower;
				half	_ShadowRecieveThresholdWeight;
    //  Scatter
				half _Distortion;
				half _Power;
				half _Scale;
    //  Tone Map
				half _Exposure;
            CBUFFER_END

            #pragma vertex LitPassVertex
            #pragma fragment LitPassFragment
            
			struct Attributes //appdata
			{
				float4 positionOS : POSITION;
            	float3 normalOS : NORMAL;
				float4 color : COLOR; //Vertex color attribute input.
            	float2 texCoord : TEXCOORD0;
            	UNITY_VERTEX_INPUT_INSTANCE_ID
			};

			struct Varyings //v2f
			{
				float4 positionCS : SV_POSITION;
				float2 uv : TEXCOORD0;
				float4 color : COLOR;
				float3 normalWS : NORMAL;
				float4 vertex : TEXCOORD1;
			    float3 viewDirWS : TEXCOORD2;
			#if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
				float4 shadowCoord    : TEXCOORD3; // compute shadow coord per-vertex for the main light
			#endif
				float4 positionWS : TEXCOORD4;
				UNITY_VERTEX_INPUT_INSTANCE_ID
			};
        //--------------------------------------
        //  Vertex shader

            Varyings LitPassVertex(Attributes input)
            {
                Varyings output = (Varyings)0;
				UNITY_SETUP_INSTANCE_ID(input);
                UNITY_TRANSFER_INSTANCE_ID(input, output);
                 
                float3 positionWS = TransformObjectToWorld(input.positionOS.xyz);
                float3 viewDirWS = GetCameraPositionWS() - positionWS;
                output.uv = TRANSFORM_TEX(input.texCoord, _MainTex);
				float3 normalWS = TransformObjectToWorldNormal(input.normalOS);
                output.normalWS = normalWS;
                output.viewDirWS = viewDirWS;
                
                // The fragment shader reads positionWS unconditionally, so always write it.
				output.positionWS = float4(positionWS , 0);

            #if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
                    output.shadowCoord = TransformWorldToShadowCoord(positionWS);
            #endif
				
            	output.positionCS = TransformWorldToHClip(positionWS);
                output.color = input.color;
                return output;
            }

        //--------------------------------------
        //  shader and functions
            
			#define _PI 3.14159265359
			// Beckmann normal distribution function here for Specular
            half NDFBeckmann(float roughness, float NdotH)
			{
			    float roughnessSqr = max(1e-4f, roughness * roughness);
			    float NdotHSqr = NdotH * NdotH;
			    return max(0.000001,(1.0 / (_PI * roughnessSqr * NdotHSqr * NdotHSqr))  * exp((NdotHSqr-1)/(roughnessSqr * NdotHSqr)));
			}
            
			// Fast back scatter distribution function here for virtual back lighting
            half3 LightScatterFunction ( half3 surfaceColor , half3 normalWS ,  half3 viewDir , Light light , half distortion , half power , half scale)
            {
	            half3 lightDir = light.direction;
            	half3 normal = normalWS;
            	half3 H = lightDir + (normal * distortion);
            	float VdotH = pow(saturate(dot(viewDir, -H)), power) * scale;
            	half3 col = light.color * VdotH;
            	return col;
            }
            

        //--------------------------------------
        //  Fragment shader and functions


            half4 LitPassFragment(Varyings input, half facing : VFACE) : SV_Target
            {
                UNITY_SETUP_INSTANCE_ID(input);

            //  Apply lighting
                float4 finCol = 1; //initializing
            	
				float4 mainTex = tex2D(_MainTex,input.uv);
            	float4 specClorTex = tex2D(_SpecColorTex, input.uv);
				float4 sssTex = tex2D(_SSSTex,input.uv);
				float4 ilmTex = tex2D(_ILMTex,input.uv);
				
            	float shadowThreshold = ilmTex.g;
				shadowThreshold *= input.color.r;
				shadowThreshold = 1- shadowThreshold + _ShadowShift;
				
				float3 normalDir = normalize(input.normalWS);
				float3 viewDirWS = GetWorldSpaceViewDir(input.positionWS.xyz);
				
			#if defined(MAIN_LIGHT_CALCULATE_SHADOWS)
            	float3 positionWS = input.positionWS.xyz;
            #endif

            #if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
				float4 shadowCoord = input.shadowCoord;
			#elif defined(MAIN_LIGHT_CALCULATE_SHADOWS)
				float4 shadowCoord = TransformWorldToShadowCoord(positionWS);
			#else
				float4 shadowCoord = float4(0, 0, 0, 0);
			#endif

				Light mainLight = GetMainLight(shadowCoord);
            	float3 halfDir = normalize(viewDirWS + mainLight.direction);
            	float NdotL = (dot(normalDir,mainLight.direction));
				float NdotH = max(0,dot(normalDir,halfDir));
            	float halfLambertForToon = NdotL * 0.5 + 0.5;
            	half atten = mainLight.shadowAttenuation * mainLight.distanceAttenuation;
            	
            	half3 brightCol = mainTex.rgb * ( halfLambertForToon) *  _BrightAddjustment;
				half3 shadowCol =  mainTex.rgb * sssTex.rgb;
            	half3 scatterOut = LightScatterFunction(shadowCol.xyz , normalDir.xyz , viewDirWS , mainLight , _Distortion , _Power ,_Scale);
				
            	
            	halfLambertForToon = saturate(halfLambertForToon);
				half spec = NDFBeckmann(_Roughness , NdotH);
				half SpecularMask = ilmTex.b;
            	half SpecularWeight = smoothstep( 0.1 , _SpecEdgeSmoothness,  spec );
				half  shadowContrast = step(shadowThreshold * _ShadowRecieveThresholdWeight,NdotL * atten);
            	half3 ToonDiffuse = brightCol * shadowContrast;
            	half3 mergedDiffuseSpecular = lerp(ToonDiffuse , specClorTex , SpecularWeight * (_SpecularPower * SpecularMask));
            	
            	finCol.rgb = lerp(shadowCol,mergedDiffuseSpecular ,shadowContrast);
            	
            	finCol.rgb = lerp(finCol.rgb , finCol.rgb + (shadowCol.rgb * shadowCol.rgb) , scatterOut.rgb);
            	finCol.rgb *= mainLight.color.rgb;
            	float DetailLine = ilmTex.a;
            	DetailLine = lerp(DetailLine,_DarkenInnerLine,step(DetailLine,_DarkenInnerLine));
            	finCol.rgb *= DetailLine;

            	//Simple Tone mapping
				finCol.rgb = finCol.rgb /(finCol.rgb + _Exposure);
                return finCol;
            }
            ENDHLSL
        }
    	
    	
    	Pass //Shadow Caster Pass
		{
		    Name "ShadowCaster"
		    Tags
			{
		    	"LightMode" = "ShadowCaster"
		    }

		    ZWrite On
		    ZTest LEqual
		    Cull Off

		    HLSLPROGRAM
		    #pragma exclude_renderers gles gles3 glcore
		    #pragma target 2.0

			#pragma multi_compile_instancing
			#pragma multi_compile_vertex _ _CASTING_PUNCTUAL_LIGHT_SHADOW

		    #pragma vertex ShadowPassVertex
		    #pragma fragment ShadowPassFragment

		    #include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
		    #include "Packages/com.unity.render-pipelines.universal/Shaders/ShadowCasterPass.hlsl"
		    
		    ENDHLSL
		}
	

    }
    FallBack "Hidden/InternalErrorShader"
}

Thickness map production for translucency mask.

What is a thickness map or a curvature map? Either way the computation needs one, and the weight I mentioned earlier is used here too. You can calculate it mathematically, or bake it into a texture using ray tracing as a pre-computation step. In any case, it is all used as a weight input. Easy, right? Outside of special mathematical processes, weight information really is used in many places; once you know how to create it, there is a lot you can do with it, including your own custom shading tricks.

Seriously, it really matters. In practice, using weights extensively is essential — though there is a catch: the broader your mathematical knowledge, the better.

Added Toon Ramp effect using a 2D LUT.

If you have completed the chapter above, you can also learn about Toon Ramp using a 2D LUT as an appendix. We will copy the shader we created and rename it: copy URP Toon.shader and rename it to URP Toon Lut.shader.

The reason we separated the shader is simple. In practice, some projects use general-purpose external plug-ins and some use Unity's built-in features… I try not to. Uber shaders can give artists flexibility, but they can also become very complex and confusing, and because of the large number of variants, the memory taken up by the shader is often a serious problem. For example, there is no need for fog-related multi-compiles in an LOD0 shader that never enters fog. If you accurately understand scene visibility, fog depth, and similar conditions, you should focus on specificity rather than versatility and optimize memory.

Implementation.

URP Toon Lut.shader

Function fragment.

half3 ToonRamp(half halfLambertLightWrapped)
{
    half3 rampmap = tex2D(_RampTex, _RampOffset + ((halfLambertLightWrapped.xx - 0.5) * _RampScale) + 0.5).rgb;
    return rampmap;
}

A function that processes the ramp texture for Toon Ramp shading.
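The half-Lambert value is simply remapped into a horizontal UV coordinate: `_RampOffset` slides the lookup left or right, `_RampScale` stretches it around the 0.5 centre, and the 256×2 ramp texture is sampled at that U. A Python sketch of the lookup against a two-tone ramp (clamp addressing on `_RampTex` is assumed):

```python
def sample_ramp(ramp, half_lambert, offset=0.0, scale=1.0):
    """Mirror of the ToonRamp UV math against a 1D list of texels."""
    u = offset + (half_lambert - 0.5) * scale + 0.5
    u = min(max(u, 0.0), 1.0)  # clamp wrap mode assumed on the ramp texture
    idx = min(int(u * len(ramp)), len(ramp) - 1)
    return ramp[idx]

# A 256-texel two-tone ramp: shadow color on the left, lit color on the right.
ramp = ["shadow"] * 128 + ["lit"] * 128

dark  = sample_ramp(ramp, 0.2)              # low half-Lambert -> shadow texel
light = sample_ramp(ramp, 0.9)              # high half-Lambert -> lit texel
wide  = sample_ramp(ramp, 0.45, scale=4.0)  # larger scale sharpens the split
```

Adding more bands or gradients to the ramp texture changes the shading style without touching the shader code at all — that is the whole point of the LUT approach.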

Debug result of sampling the 2D ramp texture with the wrapped NdotL term as the UV coordinate.

Ramp toon texture map made with a width of 256 pixels and a height of 2 pixels.

You can create it in Substance Designer or simply in Photoshop.

Experimental result.

I tried to debug shading using halfLambert diffuse and Toon Ramp together.

half3 debugShading = rampToon * (halfLambertForToon * halfLambertForToon * 1.25);
return half4(debugShading, 1);

Shading is not smooth because it uses vertex normals without a normal map.

Mesh segmentation processing for shading.

Appendix

Color Correction.

The color correction discussed in this topic refers to correcting for the color-space representation of the actual device. Simply put, the monitor I work on supports only the sRGB color space, while iPhones and the latest Android phones use the Display P3 color space. It may be easier to understand if you think of a computer-generated image that comes out of a printer in different colors. There are reasons why I take color correction so seriously.

本主题中提到的色彩校正概念是指对实际设备的色彩空间表示的校正。 简单来说,我正在做的显示器只支持sRGB色彩空间,但实际上,iPhone和最新的Android手机使用的是Display p3色彩空间。 我认为如果您查看以不同颜色打印的计算机生成图像的概念,会更容易理解。 我如此认真地对待色彩校正是有原因的。

While developing an MMORPG in 2018, we found a big problem: the monitor color settings all differed between operators, and even the color gamuts of the PC screen and the smartphone screen were different. We had a hard time deciding which color was the right choice for what we wanted to express.

Then I noticed that the lighting artist had bought a new monitor: the latest Dell model supporting AdobeRGB and DCI-P3, while the character artist's was a Dell UltraSharp 27-inch monitor that supports only up to sRGB.

In this way, different values were being set depending on whether a wide color space was supported or not. So the method I came up with was to simulate the Display P3 color matrix in sRGB mode. For more information on color vision and related topics, I recommend reading the link below.

DCI-P3 and Display P3, the new color-space standard that began with the iPhone 7 : Naver Blog (naver.com)

Color correction should normally be done in the post-process stage. I won't go deeply into it in this topic, but I'll show you how to quickly check how the rendering you're working on looks on Display P3.

The simplest way is to use Photoshop's ICC profiles.

Click the image to enlarge it and see the difference. Since Display P3 extends furthest in red and green, you can see a clear difference by checking the red and yellow areas of the result above with the Photoshop eyedropper.

In particular, the differences are larger in photorealistic styles, that is, photos or more realistic results, which are more affected by juxtaposition blending.

Well, you might want to read something like this.

Let's compare Display P3 and sRGB color vision. Assume your monitor supports only sRGB; the 2018 Dell 27-inch UltraSharp base model supports only sRGB mode and user mode, with no wide gamut.

Examples of various wide-gamut images (webkit.org)

Open an image saved in sRGB mode from the web page above in Photoshop and select Assign Profile ⇒ Profile ⇒ Image P3.

On the left is the original sRGB; on the right, the Image P3 profile is applied. Let's look at the color vision again.

Looking at the color deviation, green deviates the most, followed by red, then blue. If you check these deviations and then look at the comparison image above again, it will be easier to understand which shades make the difference.

While reading the article, you may forget why you need to look at this in the first place: it is because you are viewing the rendering result in a state where the final output has already deviated.
In addition, from Android Q (OS 10.0 or later), the Android camp also applies Display P3.

In conclusion, clothes or hair with a calm red tone may look a little more strongly red, and a skin tone you consider appropriate may, strangely, look more reddish on an iPhone.

Shouldn't a developer be sensitive to these color results?

Monitor information for correct color calibration.

Wide color gamut is supported on professional models, starting with at least the products below.

Monitors and Accessories | Dell India

I use the monitor below when working from home.

SW321C|32-inch 4K AdobeRGB USB-C Photographer Monitor | BenQ US

This is because we believe it is correct to develop a rendering workflow in an environment that supports as much of the end user's output-device color gamut as possible.

Reference index.

(1)URP Multi-pass shading.

(2)Dot Product calculation between two vectors.

(3)Tone Mapping curve visualizer code.

Stylized Toon by Unity URP. Project Source | JP Lee on Patreon ( PATREON MEMBERS ONLY )
