
Some Thoughts on Deferred Rendering

I have been busy preparing my thesis proposal lately, and I plan to work on an implementation related to deferred rendering (Deferred Shading). Deferred Shading is currently the most mainstream technique for real-time rendering with many dynamic light sources. There are actually two further refinements of it, Deferred Lighting and Inferred Lighting, but the experience of my senior labmates tells me not to propose a topic that is too hard: if it is, and the implementation falls through at the end, I will only have made trouble for myself.

Deferred Shading was proposed abroad nearly 20 years ago and has been used in actual games for nearly 10; almost every good game engine released in recent years uses it. Abroad, DS should be considered quite mature, yet I have never heard of a domestic engine that uses it. I don't think that is because nobody here has mastered the technique: plenty of blogs mention DS, and some people have implemented it themselves. Rather, it makes fairly high demands on hardware, and domestic games have to accommodate the majority of players. A foreign player might upgrade his hardware just to play some cool new game; hardly any domestic game has that kind of pull. I suspect this is one of the reasons the technical level of domestic games trails foreign ones by more than a tier.

Some people here who have studied DS have published their results online, but the results are not as good as one might imagine. Their implementations tend to leave out shadows. For example, with the source code and screenshots on 燕良's blog, I have not studied the source carefully, but I can see no trace of shadows in the screenshots. This problem left me stuck for a long time. The DS algorithm itself is simple enough to state (trading space for time), yet most articles that list support for many dynamic lights among DS's advantages rarely explain how shadows are actually implemented, as if shadows were a given. GPU Gems 2 contains an article, "Deferred Shading in S.T.A.L.K.E.R.", in which shadows are implemented with shadow maps. GPU Gems 3 has a chapter devoted to deferred rendering, "Deferred Shading in Tabula Rasa", and a translation is available online (the blogger 天堂里的死神 deserves credit for speed: the Chinese edition of GPU Gems 3 was not published until June 2010, and he translated the chapter in 2009). In Tabula Rasa, shadows are again implemented with shadow maps, and his related post on the deferred rendering in Killzone 2 shows shadow maps being used there as well.

With so many classic games relying on shadow maps, they seem to be the sensible choice. There are also articles on rendering shadow volumes with DS (the original text is attached below), but they do not use multiple light sources; with many dynamic lights, the computational cost would probably be prohibitive.

So the proposal is basically settled. The title will be something like "Real-Time Rendering with Many Dynamic Light Sources". It is based on deferred rendering in any case, and if, after implementing deferred rendering, I want to go further with Deferred Lighting or Inferred Lighting, that still falls within the scope of the title. That is a weight off my mind; the proposal report will be easy to write once my advisor has us prepare for the proposal defense. From now on I can concentrate on finding an internship. Tencent comes to Wuhan for its written test on May 8, and I am looking forward to it; I hope to intern in a game development position.

What follows is a section from Chapter 13 of OpenGL Shading Language (2nd Edition) on implementing volume shadows with deferred rendering. (I have been busy recently, so I translated only the essential parts of the article; my ability is limited, so the translation is for reference only.)

13.3. Deferred Shading for Volume Shadows

With contributions by Hugh Malan and Mike Weiblen

One of the disadvantages of
shadow mapping as discussed in the previous section is that the performance depends on the number of lights in the scene that are capable of casting shadows. With shadow mapping, a rendering pass must be performed for each of these light sources. These shadow maps are utilized in a final rendering pass. All these rendering passes can reduce performance, particularly if a great many polygons are to be rendered.

It is possible to do higher-performance shadow generation with a rendering technique that is part of a general class of techniques known as DEFERRED SHADING. With deferred shading, the idea is to first quickly determine the surfaces that will be visible in the final scene and apply complex and time-consuming shader effects only to the pixels that make up those visible surfaces. In this sense, the shading operations are deferred until it can be established just which pixels contribute to the final image. A very simple and fast shader can render the scene into an offscreen buffer with depth buffering enabled. During this initial pass, the shader stores whatever information is needed to perform the necessary rendering operations in subsequent passes. Subsequent rendering operations are applied only to pixels that are determined to be visible in the high-performance initial pass. This technique ensures that no hardware cycles are wasted performing shading calculations on pixels that will ultimately be hidden.

To render soft shadows with this technique, we need to make two passes. In the first pass, we do two things:

1. We use a shader to render the geometry of the scene without shadows or lighting into the frame buffer.

2. We use the same shader to store a normalized camera depth value for each pixel in a separate buffer. (This separate buffer is accessed as a texture in the second pass for the shadow computations.)

In the second pass, the shadows are composited with the existing contents of the frame buffer. To do this compositing operation, we render the shadow volume (i.e., the region in which the light source is occluded) for each shadow-casting object. In the case of a sphere, computing the shadow volume is relatively easy. The sphere's shadow is in the shape of a truncated cone, where the apex of the cone is at the light source. One end of the truncated cone
is at the center of the sphere (see Figure 13.2). (It is somewhat more complex to compute the shadow volume for an object defined by polygons, but the same principle applies.)

Figure 13.2. The shadow volume for a sphere

We composite shadows with the existing geometry by rendering the polygons that define the shadow volume. This allows our second-pass shader to be applied only to regions of the image that might be in shadow.

To draw a shadow, we use the texture map shown in Figure 13.3. This texture map expresses how much a visible surface point is in shadow relative to a shadow-casting object (i.e., how much its value is attenuated) based on a function of two values: 1) the squared distance from the visible surface point to the central axis of the shadow volume, and 2) the distance from the visible surface point to the center of the shadow-casting object. The first value is used as the s coordinate for accessing the shadow texture, and the second value is used as the t coordinate. The net result is that shadows are relatively sharp when the shadow-casting object is very close to the fragment being tested and the
edges become softer as the distance increases.

Figure 13.3. A texture map used to generate soft shadows

In the second pass of the algorithm, we do the following:

1. Draw the polygons that define the shadow volume. Only the fragments that could possibly be in shadow are accessed during this rendering operation.

2. For each fragment rendered,

a. Look up the camera depth value for the fragment as computed in the first pass.

b. Calculate the coordinates of the visible surface point in the local space of the shadow volume. In this space, the z axis is the axis of the shadow volume and the origin is at the center of the shadow-casting object. The x component of this coordinate corresponds to the distance from the center of the shadow-casting object and is used directly as the second coordinate for the shadow
texture lookup.

c. Compute the squared distance between the visible surface point and the z axis of the shadow volume. This value becomes the first coordinate for the texture lookup.

d. Access the shadow texture by using the computed index values to retrieve the light attenuation factor and store this in the output fragment's alpha value. The red, green, and blue components of the output fragment color are each set to 0.

e. Compute for the fragment the light attenuation factor that will properly darken the existing frame buffer value. For the computation, enable fixed functionality blending, set the blend mode source function to GL_SRC_ALPHA, and set the blend destination function to GL_ONE.

Because the shadow (second pass) shader is effectively a 2D compositing operation, the texel it reads from the depth texture must exactly match the pixel in the framebuffer it affects. So the texture coordinate and other quantities must be bilinearly interpolated without perspective correction. We interpolate by ensuring that w is constant across the polygon: dividing x, y, and z by w and then setting w to 1.0 does the job. Another issue is that when the viewer is inside the shadow volume, all faces are culled. We handle this special case by drawing a screen-sized quadrilateral, since the shadow volume would cover the entire scene.

13.3.1. Shaders for First Pass

The shaders for the first pass of the volume shadow algorithm are shown in Listings 13.8 and 13.9. In the vertex shader, to accomplish the standard rendering of
the geometry (which in this specific case is all texture mapped), we just call ftransform and pass along the texture coordinate. The other lines of code compute the normalized value for the depth from the vertex to the camera plane. The computed value, CameraDepth, is stored in a varying variable so that it can be interpolated and made available to the fragment shader.

To render into two buffers by using a fragment shader, the application must call glDrawBuffers and pass it a pointer to an array containing symbolic constants that define the two buffers to be written. In this case, we might pass the symbolic constant GL_BACK_LEFT as the first value in the array and GL_AUX0 as the second value. This means that gl_FragData[0] will be used to update the value in the soon-to-be-visible framebuffer (assuming we are double-buffering) and the value for gl_FragData[1] will be used to update the value in auxiliary buffer number 0. Thus, the fragment shader for the first pass of our algorithm contains just two lines of code (Listing 13.9).

Listing 13.8. Vertex shader for first pass of soft volume shadow algorithm

uniform vec3  CameraPos;
uniform vec3  CameraDir;
uniform float DepthNear;
uniform float DepthFar;

varying float CameraDepth;  // normalized camera depth
varying vec2  TexCoord;

void main()
{
    // offset = vector to vertex from camera's position
    vec3 offset = (gl_Vertex.xyz / gl_Vertex.w) - CameraPos;

    // z = distance from vertex to camera plane
    float z = -dot(offset, CameraDir);

    // Depth from vertex to camera, mapped to [0, 1]
    CameraDepth = (z - DepthNear) / (DepthFar - DepthNear);

    // pass the texture coordinate along to the fragment shader
    TexCoord = gl_MultiTexCoord0.st;

    gl_Position = ftransform();
}
